[jira] [Created] (HADOOP-14739) Add build instruction for docker on Mac instead of docker toolbox.

2017-08-07 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14739:
--

 Summary: Add build instruction for docker on Mac instead of docker 
toolbox.
 Key: HADOOP-14739
 URL: https://issues.apache.org/jira/browse/HADOOP-14739
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


HADOOP-12575 added a build instruction for Docker Toolbox.
Now Docker for Mac (https://www.docker.com/docker-mac) is available, and it 
allows skipping some procedures written in BUILDING.txt.






[jira] [Updated] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2017-08-07 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14739:
---
Summary: Add build instruction for docker for Mac instead of docker 
toolbox.  (was: Add build instruction for docker on Mac instead of docker 
toolbox.)

I'm using Docker for Mac, and running {{./start-build-env.sh}} alone is sufficient.

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>
> HADOOP-12575 added a build instruction for Docker Toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available, and it 
> allows skipping some procedures written in BUILDING.txt.






[jira] [Updated] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2017-08-07 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14739:
---
Component/s: documentation

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Minor
>
> HADOOP-12575 added a build instruction for Docker Toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available, and it 
> allows skipping some procedures written in BUILDING.txt.






[jira] [Updated] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2017-08-07 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14739:
---
  Priority: Minor  (was: Major)
Issue Type: Improvement  (was: Bug)

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Minor
>
> HADOOP-12575 added a build instruction for Docker Toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available, and it 
> allows skipping some procedures written in BUILDING.txt.






[jira] [Commented] (HADOOP-13714) Tighten up our compatibility guidelines for Hadoop 3

2017-08-07 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116379#comment-16116379
 ] 

Akira Ajisaka commented on HADOOP-13714:


YARN-3254 wants to update the text information exposed via JMX.
https://issues.apache.org/jira/browse/YARN-3254?focusedCommentId=16084684&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16084684

Can we loosen the rule to allow modifying text fields when needed?

> Tighten up our compatibility guidelines for Hadoop 3
> 
>
> Key: HADOOP-13714
> URL: https://issues.apache.org/jira/browse/HADOOP-13714
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.3
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: HADOOP-13714.001.patch, HADOOP-13714.WIP-001.patch
>
>
> Our current compatibility guidelines are incomplete and loose. For many 
> categories, we do not have a policy. It would be nice to actually define 
> those policies so our users know what to expect and developers know which 
> releases to target their changes for.






[jira] [Updated] (HADOOP-14738) Deprecate S3N in hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14738:

Affects Version/s: 2.9.0

> Deprecate S3N in hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, and the 
> transitive dependencies.
> I propose that in Hadoop 3.0 beta we warn people off using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config ops).






[jira] [Commented] (HADOOP-14738) Deprecate S3N in hadoop 3.0, target removal in Hadoop 3.1

2017-08-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116408#comment-16116408
 ] 

Steve Loughran commented on HADOOP-14738:
-

Some ideas:

* For those people who explicitly set {{fs.s3n.impl = 
org.apache.hadoop.fs.s3native.NativeS3FileSystem}}, we could retain an FS impl 
there which extends S3A and warns users off it (a sketch follows below).
* Could we pick up the old key names (or at least their fs.s3a equivalents) as 
deprecated values? I'm reluctant to do this, as {{Configuration.getPassword()}} 
doesn't handle deprecation.
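
A minimal sketch of that first idea, assuming we simply repoint the old class 
at S3A (everything below is illustrative, not committed code):

{code:java}
package org.apache.hadoop.fs.s3native;

import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.S3AFileSystem;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Hypothetical shim: keeps explicit fs.s3n.impl bindings working by
 * delegating everything to S3A while warning users off S3N.
 */
@Deprecated
public class NativeS3FileSystem extends S3AFileSystem {
  private static final Logger LOG =
      LoggerFactory.getLogger(NativeS3FileSystem.class);

  @Override
  public void initialize(URI name, Configuration conf) throws IOException {
    LOG.warn("S3N is deprecated and will be removed in a future release;"
        + " please migrate your s3n:// URLs and fs.s3n.* settings to S3A.");
    super.initialize(name, conf);
  }
}
{code}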


> Deprecate S3N in hadoop 3.0, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, and the 
> transitive dependencies.
> I propose that in Hadoop 3.0 beta we warn people off using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config ops).






[jira] [Updated] (HADOOP-14738) Deprecate S3N in hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14738:

Summary: Deprecate S3N in hadoop 3.0/2.9, target removal in Hadoop 3.1  
(was: Deprecate S3N in hadoop 3.0, target removal in Hadoop 3.1)

> Deprecate S3N in hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, and the 
> transitive dependencies.
> I propose that in Hadoop 3.0 beta we warn people off using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config ops).






[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-08-07 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116417#comment-16116417
 ] 

Akira Ajisaka commented on HADOOP-14163:


Thanks [~elek] for the comment.

bq. But I think it's important that we are speaking about two repositories. One 
for the source of the site and one for the rendered site. 
Yes, that is important. The following seems cleaner:

* the source of the site -> master branch
* the rendered site -> asf-site branch

In the near future, I'd like to use some CI/CD tool to push the rendered site 
to the asf-site branch automatically whenever the master branch changes.

bq. Spark uses git (apache/spark-website) as the source but I don't know where 
the generated site is (it was here 1 year ago: https://svn.apache.org/repos/asf/spark)

Probably the generated site is 
https://github.com/apache/spark-website/tree/asf-site/site and the site is 
pushed manually 
(https://github.com/apache/spark-website/commit/ee654d1f30dbd724ee21c2b6c0afb46309118882).

> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: site
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, 
> HADOOP-14163-003.zip, hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned 
> website. Forrest is pretty bad compared to the newer static site generators 
> out there (e.g. need to write XML instead of markdown, it's hard to review a 
> staging site because of all the absolute links, hard to customize, did I 
> mention XML?), and the look and feel of the site is from the 00s. We don't 
> actually have that much site content, so it should be possible to migrate to 
> a new system."
> This issue is to find a solution for migrating the old site to a new, modern 
> static site generator using a more contemporary theme.
> Goals: 
>  * existing links should work (or at least be redirected)
>  * It should be easy to add more content required by a release automatically 
> (most probably by creating separate markdown files)






[jira] [Updated] (HADOOP-14696) parallel tests don't work for Windows

2017-08-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14696:

Status: Open  (was: Patch Available)

> parallel tests don't work for Windows
> -
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696-002.patch, HADOOP-14696.00.patch, 
> HADOOP-14696.01.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part ...

[jira] [Updated] (HADOOP-14696) parallel tests don't work for Windows

2017-08-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14696:

Status: Patch Available  (was: Open)

> parallel tests don't work for Windows
> -
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696-002.patch, HADOOP-14696-003.patch, 
> HADOOP-14696.00.patch, HADOOP-14696.01.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part ...

[jira] [Updated] (HADOOP-14696) parallel tests don't work for Windows

2017-08-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14696:

Attachment: HADOOP-14696-003.patch

Patch 003 with the better diff.

Allen, I don't know what's up... I'll try to take another look at this later.

> parallel tests don't work for Windows
> -
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696-002.patch, HADOOP-14696-003.patch, 
> HADOOP-14696.00.patch, HADOOP-14696.01.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part ...

[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-08-07 Thread Jeff Storck (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116600#comment-16116600
 ] 

Jeff Storck commented on HADOOP-9747:
-

[~djp] [~daryn] I've tested this patch against an HDP 2.6 cluster using the 
code I've linked on HADOOP-14699, and the results are promising.  I'll be doing 
some more testing today to verify that the patch also resolves an issue with a 
single UGI that gets into a hung state where the principal has been "forgotten" 
and falls back to the login user, for which no ticket has been (or can be, in 
the tested configuration) acquired.

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.






[jira] [Created] (HADOOP-14740) KMSJsonReader fails when no payload is provided

2017-08-07 Thread Lars Francke (JIRA)
Lars Francke created HADOOP-14740:
-

 Summary: KMSJsonReader fails when no payload is provided
 Key: HADOOP-14740
 URL: https://issues.apache.org/jira/browse/HADOOP-14740
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Lars Francke
Priority: Minor


When using one of the {{POST}} operations (e.g. rolling over a key) without 
actually providing a JSON payload, the {{KMSJsonReader}} fails with this error 
message:

{quote}No content to map due to end-of-input{quote}

I think the only method affected today might be the rollover operation, but 
it's still a bug and might affect other new operations added later on.
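
A possible guard, sketched under the assumption that the reader is a 
Jackson-based JAX-RS reader (class and method names below are illustrative, 
not the actual fix):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;
import java.util.Collections;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical sketch: peek one byte; if the entity stream is empty, behave
// as if "{}" was sent instead of letting Jackson fail with
// "No content to map due to end-of-input".
public class EmptyBodyTolerantJsonReader {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  public Map readFrom(InputStream entityStream) throws IOException {
    PushbackInputStream in = new PushbackInputStream(entityStream);
    int first = in.read();
    if (first == -1) {
      return Collections.emptyMap(); // no payload: treat as an empty object
    }
    in.unread(first);
    return MAPPER.readValue(in, Map.class);
  }
}
{code}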






[jira] [Updated] (HADOOP-14740) KMSJsonReader fails when no payload is provided

2017-08-07 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HADOOP-14740:
--
Description: 
When using one of the {{POST}} operations (e.g. rollover a key) and not 
actually providing a JSON payload the {{KMSJsonReader}} fails with this error 
message:

{quote}No content to map due to end-of-input{quote}

For most operations this is fine as the payload is required but for others the 
payload is optional. I think the only method affected today might be the 
rollover operation but it's still a bug and might affect other new operations 
added later on.

  was:
When using one of the {{POST}} operations (e.g. rollover a key) and not 
actually providing a JSON payload the {{KMSJsonReader}} fails with this error 
message:

{quote}No content to map due to end-of-input{quote}

I think the only method affected today might be the rollover operation but it's 
still a bug and might affect other new operations added later on.


> KMSJsonReader fails when no payload is provided
> ---
>
> Key: HADOOP-14740
> URL: https://issues.apache.org/jira/browse/HADOOP-14740
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lars Francke
>Priority: Minor
>
> When using one of the {{POST}} operations (e.g. rolling over a key) without 
> actually providing a JSON payload, the {{KMSJsonReader}} fails with this error 
> message:
> {quote}No content to map due to end-of-input{quote}
> For most operations this is fine, as the payload is required, but for others 
> the payload is optional. I think the only method affected today might be the 
> rollover operation, but it's still a bug and might affect other new operations 
> added later on.






[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-08-07 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116674#comment-16116674
 ] 

Elek, Marton commented on HADOOP-14163:
---

Yes, I also checked other projects; for example, 
https://github.com/apache/incubator-openwhisk-website uses the same pattern: 
asf-site is the rendered site, master is the source.

I think it's a good pattern, even if it needs multiple checkouts during the 
release process (master should be checked out to add the announcement entry, 
asf-site should be checked out to add the raw javadocs).

I think we need 2 INFRA requests:

1. to create the hadoop-site repository
2. to create a configuration to refresh https://hadoop.apache.org with the 
content of the asf-site branch of the hadoop-site repository 
(+1. later: to create a job which automatically commits the new rendered site 
from master -> asf-site, as you wrote; a rough sketch of such a job follows 
below).

I am not sure whether I can open these tickets as a simple contributor, but let 
me know if I can do anything to achieve this.
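
Something like this, assuming the hadoop-site repository exists (the repository 
URL and the generator command are placeholders, nothing is decided yet):

{code}
#!/usr/bin/env bash
# Hypothetical CI job: render the site from master and commit the result
# to asf-site. Repo URL and generator command are assumptions.
set -e
git clone https://gitbox.apache.org/repos/asf/hadoop-site.git
cd hadoop-site
git checkout master
hugo --destination /tmp/rendered-site   # or whichever generator is chosen
git checkout asf-site
cp -r /tmp/rendered-site/* .
git add -A
git commit -m "Automated render of master" || true  # no-op when unchanged
git push origin asf-site
{code}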

> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: site
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, 
> HADOOP-14163-003.zip, hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned 
> website. Forrest is pretty bad compared to the newer static site generators 
> out there (e.g. need to write XML instead of markdown, it's hard to review a 
> staging site because of all the absolute links, hard to customize, did I 
> mention XML?), and the look and feel of the site is from the 00s. We don't 
> actually have that much site content, so it should be possible to migrate to 
> a new system."
> This issue is to find a solution for migrating the old site to a new, modern 
> static site generator using a more contemporary theme.
> Goals: 
>  * existing links should work (or at least be redirected)
>  * It should be easy to add more content required by a release automatically 
> (most probably by creating separate markdown files)






[jira] [Commented] (HADOOP-14696) parallel tests don't work for Windows

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116697#comment-16116697
 ] 

Hadoop QA commented on HADOOP-14696:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 59s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14696 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880639/HADOOP-14696-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 2cdb6aa78fb4 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0b67436 |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12968/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12968/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test

[jira] [Commented] (HADOOP-13430) Optimize and fix getFileStatus in S3A

2017-08-07 Thread Yonger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116700#comment-16116700
 ] 

Yonger commented on HADOOP-13430:
-

[~ste...@apache.org] I really can't see any optimization for getFileStatus in 
ticket [https://issues.apache.org/jira/browse/HADOOP-13208]; as I understand it, 
there still exist 2 HEAD + 1 LIST operations in this function.

I also scanned the code that calls getFileStatus and found that in some places 
we already know whether the input path is a file or a directory. E.g. in the 
create and open functions we don't need to check whether the path is a directory 
by calling getFileStatus; we can just treat the path as a file (following the 
implementation of Presto). Then, when calling getFileStatus, we know the input 
path is a file, so it doesn't need to call the getmetadata method again with the 
"/" suffix.

Overall, can we reduce the S3 calls over the network as much as possible by 
telling getFileStatus explicitly whether the path is a file or a directory? 
(A sketch of the idea follows below.)
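
A sketch of the idea, with every name hypothetical rather than existing S3A 
code; {{headObject}}, {{listOneEntry}} and {{fullProbe}} stand for the 
corresponding S3 requests:

{code:java}
enum PathHint { UNKNOWN, FILE, DIRECTORY }

S3AFileStatus innerGetFileStatus(Path f, PathHint hint) throws IOException {
  String key = pathToKey(f);
  switch (hint) {
    case FILE:
      return headObject(key);          // 1 HEAD; skip "key/" HEAD and LIST
    case DIRECTORY:
      return listOneEntry(key + "/");  // 1 LIST; skip both HEAD requests
    default:
      return fullProbe(key);           // unknown: today's HEAD + HEAD + LIST
  }
}
{code}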

> Optimize and fix getFileStatus in S3A
> -
>
> Key: HADOOP-13430
> URL: https://issues.apache.org/jira/browse/HADOOP-13430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steven K. Wong
>Assignee: Steven K. Wong
>Priority: Minor
> Attachments: HADOOP-13430.001.WIP.patch
>
>
> Currently, S3AFileSystem.getFileStatus(Path f) sends up to 3 requests to S3 
> when pathToKey(f) = key = "foo/bar" is a directory:
> 1. HEAD key=foo/bar \[continue if not found]
> 2. HEAD key=foo/bar/ \[continue if not found]
> 3. LIST prefix=foo/bar/ delimiter=/ max-keys=1
> My experience (and generally true, I reckon) is that almost all directories 
> are nonempty directories without a "fake directory" file (e.g. "foo/bar/"). 
> Under this condition, request #2 is mostly unhelpful; it only slows down 
> getFileStatus. Therefore, I propose swapping the order of requests #2 and #3. 
> The swapped HEAD request will be skipped in practically all cases.
> Furthermore, when key = "foo/bar" is a nonempty directory that contains a 
> "fake directory" file (in addition to actual files), getFileStatus currently 
> returns an S3AFileStatus with isEmptyDirectory=true, which is wrong. Swapping 
> will fix this. The swapped LIST request will use max-keys=2 to determine 
> isEmptyDirectory correctly. (Removing the delimiter from the LIST request 
> should make the logic a little simpler than otherwise.)
> Note that key = "foo/bar/" has the same problem with isEmptyDirectory. To fix 
> it, I propose skipping request #1 when key ends with "/". The price is this 
> will, for an empty directory, replace a HEAD request with a LIST request 
> that's generally more taxing on S3.






[jira] [Commented] (HADOOP-14696) parallel tests don't work for Windows

2017-08-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116800#comment-16116800
 ] 

Allen Wittenauer commented on HADOOP-14696:
---


Keep in mind that hadoop-aws/pom.xml (effectively) sets hadoop.tmp.dir to the 
same value as test.build.data, so there are only two directory trees being created:

{code}
[INFO] --- maven-antrun-plugin:1.7:run (define-parallel-tests-dirs) @ 
hadoop-aws ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-parallel-tests-dirs) @ 
hadoop-aws ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/testptch/hadoop/hadoop-tools/hadoop-aws/target/test-dir/1
[mkdir] Created dir: 
/testptch/hadoop/hadoop-tools/hadoop-aws/target/test-dir/2
[mkdir] Created dir: 
/testptch/hadoop/hadoop-tools/hadoop-aws/target/test-dir/3
[mkdir] Created dir: 
/testptch/hadoop/hadoop-tools/hadoop-aws/target/test-dir/4
[mkdir] Created dir: /testptch/hadoop/hadoop-tools/hadoop-aws/target/test/1
[mkdir] Created dir: /testptch/hadoop/hadoop-tools/hadoop-aws/target/test/2
[mkdir] Created dir: /testptch/hadoop/hadoop-tools/hadoop-aws/target/test/3
[mkdir] Created dir: /testptch/hadoop/hadoop-tools/hadoop-aws/target/test/4
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-aws ---
{code}

(from 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12968/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-aws.txt)
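
For the Windows failure quoted below, one possible direction, sketched with 
core Ant only (an assumption about a workaround, not the actual patch): 
normalize the Windows path before the mkdir so the backslashes are not 
swallowed when the property is re-expanded.

{code}
<target name="create-parallel-tests-dirs">
  <!-- convert F:\jenkins\... to forward slashes before using it -->
  <pathconvert targetos="unix" property="test.build.data.unix">
    <path location="${test.build.data}"/>
  </pathconvert>
  <mkdir dir="${test.build.data.unix}/1"/>
</target>
{code}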

> parallel tests don't work for Windows
> -
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696-002.patch, HADOOP-14696-003.patch, 
> HADOOP-14696.00.patch, HADOOP-14696.01.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part ...

[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-08-07 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116836#comment-16116836
 ] 

Daryn Sharp commented on HADOOP-9747:
-

bq. Do you have a specific test plan, or has this already been tested in the 
Yahoo! production cluster?

We've been waiting on the community so we don't integrate something that causes 
massive conflicts going forward.  I'll push to deploy internally if that's what 
it will take to move forward.

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.






[jira] [Commented] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116893#comment-16116893
 ] 

Xiao Chen commented on HADOOP-14727:


+1, committing this

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Xiao Chen
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14727.001-branch-2.patch, HADOOP-14727.001.patch, 
> HADOOP-14727.002.patch
>
>
> This was caught by Cloudera's internal testing of the alpha4 release.
> We got reports that some hosts ran out of FDs. Triaging that, we found that 
> both the oozie server and the Yarn JobHistoryServer had tons of sockets in 
> {{CLOSE_WAIT}} state.
> [~haibochen] helped narrow this down to a consistent reproduction: simply 
> visit the JHS web UI and click through a job and its logs.
> I then looked at the {{BlockReaderRemote}} and related code and didn't spot 
> any leaks in the implementation. After adding a debug log whenever a {{Peer}} 
> is created/closed/put into/taken out of the {{PeerCache}}, it looks like all 
> the {{CLOSE_WAIT}} sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
> NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
> blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at 
> com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at 
> com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at 
> com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.(CompletedJob.java:105)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
> 

[jira] [Updated] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14727:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2.
Compiled and ran TestConfiguration on branch-2 before pushing.

Thanks [~haibochen] for the initial report, [~jeagles] for the fix and 
[~ste...@apache.org] for commenting!

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Xiao Chen
>Assignee: Jonathan Eagles
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14727.001-branch-2.patch, HADOOP-14727.001.patch, 
> HADOOP-14727.002.patch
>
>
> This was caught by Cloudera's internal testing of the alpha4 release.
> We got reports that some hosts ran out of FDs. Triaging that, we found that 
> both the oozie server and the Yarn JobHistoryServer had tons of sockets in 
> {{CLOSE_WAIT}} state.
> [~haibochen] helped narrow this down to a consistent reproduction: simply 
> visit the JHS web UI and click through a job and its logs.
> I then looked at the {{BlockReaderRemote}} and related code and didn't spot 
> any leaks in the implementation. After adding a debug log whenever a {{Peer}} 
> is created/closed/put into/taken out of the {{PeerCache}}, it looks like all 
> the {{CLOSE_WAIT}} sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
> NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
> blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at 
> com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at 
> com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at 
> com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.(CompletedJob.java:105)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.

[jira] [Updated] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2017-08-07 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14739:
---
Labels: newbie  (was: )

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> HADOOP-12575 added a build instruction for Docker Toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available, and it 
> allows skipping some procedures written in BUILDING.txt.






[jira] [Commented] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116960#comment-16116960
 ] 

Steve Loughran commented on HADOOP-14727:
-

thank you for finding it and tracking down the problem!

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Xiao Chen
>Assignee: Jonathan Eagles
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14727.001-branch-2.patch, HADOOP-14727.001.patch, 
> HADOOP-14727.002.patch
>
>
> This was caught by Cloudera's internal testing of the alpha4 release.
> We got reports that some hosts ran out of FDs. Triaging that, we found that 
> both the oozie server and the Yarn JobHistoryServer had tons of sockets in 
> {{CLOSE_WAIT}} state.
> [~haibochen] helped narrow this down to a consistent reproduction: simply 
> visit the JHS web UI and click through a job and its logs.
> I then looked at the {{BlockReaderRemote}} and related code and didn't spot 
> any leaks in the implementation. After adding a debug log whenever a {{Peer}} 
> is created/closed/put into/taken out of the {{PeerCache}}, it looks like all 
> the {{CLOSE_WAIT}} sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
> NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
> blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at 
> com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at 
> com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at 
> com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.(CompletedJob.java:105)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
> at 
> c

[jira] [Updated] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-07 Thread Esfandiar Manii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-14715:
-
Attachment: HADOOP-14715-002.patch

Updated the patch to handle scenarios where authorization caching is 
enabled/disabled and set secure mode to be disabled by default.

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk






[jira] [Commented] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117019#comment-16117019
 ] 

Andrew Wang commented on HADOOP-14730:
--

We don't require ABI compatibility between Hadoop 2 and 3, and I don't think 
mixing versions of the Hadoop JARs on a client classpath is supported either.

However, if it's easy to support and useful, we might as well.

> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch
>
>
> 2 Unit Test cases are failing  [Azure-data-lake Module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
>  caused after HDFS-6984 commit.
> Issue seems to be {{hasAcl}} is hard coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}






[jira] [Commented] (HADOOP-14738) Deprecate S3N in hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117050#comment-16117050
 ] 

Mingliang Liu commented on HADOOP-14738:


I'm on board with the proposal.

{quote}
For those people who explicitly set fs.s3n.impl = 
org.apache.hadoop.fs.s3native.NativeS3FileSystem, we could retain an FS impl 
there which extends S3A and warns users off it
{quote}
This is a good idea.

> Deprecate S3N in hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, and the 
> transitive dependencies.
> I propose that in Hadoop 3.0 beta we warn people off using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config ops).






[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-07 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117055#comment-16117055
 ] 

Thomas Marquardt commented on HADOOP-14715:
---

*TestWasbRemoteCallHelper.java*:
  *L62* - I recommend referencing 
*CachingAuthorizer.KEY_AUTH_SERVICE_CACHING_ENABLE* instead of declaring a new 
variable.

  *L362* - Duplication can be minimized, for example: declare 
*int expectedNumberOfInvocations = isAuthorizationCachingEnabled ? 1 : 2;* and 
then call *times(expectedNumberOfInvocations)* (see the sketch below).
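
A sketch of that suggestion; the mock name and the verified call here are 
assumptions, not copied from the test itself:

{code:java}
// Hypothetical verification: one remote call when caching is enabled,
// two when it is disabled.
int expectedNumberOfInvocations = isAuthorizationCachingEnabled ? 1 : 2;
Mockito.verify(mockHttpClient, Mockito.times(expectedNumberOfInvocations))
    .execute(Mockito.any(HttpGet.class));
{code}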

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk






[jira] [Commented] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2017-08-07 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117108#comment-16117108
 ] 

Elek, Marton commented on HADOOP-14739:
---

There is also a boot2docker-specific part in {{./start-build-env.sh}}

{code}
if [ "$(uname -s)" == "Linux" ]; then
  USER_NAME=${SUDO_USER:=$USER}
  USER_ID=$(id -u "${USER_NAME}")
  GROUP_ID=$(id -g "${USER_NAME}")
else # boot2docker uid and gid
  USER_NAME=$USER
  USER_ID=1000
  GROUP_ID=50
fi
{code}

Maybe it should also be updated. Is it necessary? Is your user id 1000 or 
something different?

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> HADOOP-12575 added build instruction for docker toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available and it 
> can skip some procedures written in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-07 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117109#comment-16117109
 ] 

Ming Ma commented on HADOOP-12077:
--

There are a couple of fixes that we either need or that have already been 
committed in open source.

* Make Trash work for nfly. Given our internal patch requires changes in 
FileSystem APIs and thus likely requires more discussion, I recommend opening a 
separate jira.
* Fix the cross-DC write performance issue. HDFS-9259 is already in upstream.
* Pick the closest FileSystem as the preferred location for read. All required 
fixes are already in upstream. Specifically it needs HDFS-10206 and the change 
in NetworkTopology class from HDFS-7647.

Thus this patch, as it is, covers the main functionality. We can commit this 
patch after fixing checkstyle etc., and then use separate jiras to follow up on 
major issues. Thoughts?

> Provide a multi-URI replication Inode for ViewFs
> 
>
> Key: HADOOP-12077
> URL: https://issues.apache.org/jira/browse/HADOOP-12077
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, 
> HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, 
> HADOOP-12077.006.patch, HADOOP-12077.007.patch
>
>
> This JIRA is to provide simple "replication" capabilities for applications 
> that maintain logically equivalent paths in multiple locations for caching or 
> failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern 
> in our applications. They host their data on some logical cluster C. There 
> are corresponding HDFS clusters in multiple datacenters. When the application 
> runs in DC1, it prefers to read from C in DC1, and the applications prefers 
> to failover to C in DC2 if the application is migrated to DC2 or when C in 
> DC1 is unavailable. New application data versions are created 
> periodically/relatively infrequently. 
> In order to address many common scenarios in a general fashion, and to avoid 
> unnecessary code duplication, we implement this functionality in ViewFs (our 
> default FileSystem spanning all clusters in all datacenters) in a project 
> code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points 
> to a single URI via ChRootedFileSystem. Consequently, we introduce a new type 
> of links that points to a list of URIs that are each going to be wrapped in 
> ChRootedFileSystem. A typical usage: 
> /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of 
> ChRootedFileSystem instances is fronted by the Nfly filesystem object that is 
> actually used for the mount point/Inode. The Nfly filesystem backs a single 
> logical path /nfly/C/user//path with multiple physical paths.
> The Nfly filesystem supports setting minReplication. As long as the number of 
> URIs on which an update has succeeded is greater than or equal to 
> minReplication, exceptions are only logged but not thrown. Each update 
> operation is currently executed serially (client-bandwidth driven parallelism 
> will be added later). 
> A file create/write: 
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted 
> filesystem. 
> # Returns an FSDataOutputStream that wraps the output streams returned by step 1.
> # All writes are forwarded to each output stream.
> # On close of the stream created in step 2, all n streams are closed, and the 
> files are renamed from _nfly_tmp_file to file. All files receive the same 
> mtime, corresponding to the client system time as of the beginning of this step. 
> # If at least minReplication destinations have gone through steps 1-4 without 
> failures, the transaction is considered logically committed; otherwise a 
> best-effort attempt is made to clean up the temporary files.
> As for reads, we support a notion of locality similar to HDFS  /DC/rack/node. 
> We sort Inode URIs using NetworkTopology by their authorities. These are 
> typically host names in simple HDFS URIs. If the authority is missing, as is 
> the case with the local file:///, the local host name 
> (InetAddress.getLocalHost()) is assumed. This makes sure that the local file 
> system is always the closest one to the reader in this approach. For our 
> Hadoop 2 hdfs URIs, which are based on nameservice ids instead of hostnames, 
> it is very easy to adjust the topology script since our nameservice ids 
> already contain the 
> datacenter. As for rack and node we can simply output any string such as 
> /DC/rack-nsid/node-nsid, since we only care about datacenter-locality for 
> such filesystem clients.
> There are 2 policies/additions to the read call path that make it more 
> expensive but improve the user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks mtime for 
> the path under all URIs, sorts them from most recent to least recent. Nfly 
> then sorts the set of most recent URIs topologically in the same manner as 
> described above.
> - repairOnRead - when readMostRecent is enabled, Nfly already has to RPC all 
> underlying destinations. With repairOnRead, the Nfly filesystem would 
> additionally attempt to refresh destinations with the path missing or a stale 
> version of the path, using the nearest available most recent destination.

[jira] [Created] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-07 Thread Subru Krishnan (JIRA)
Subru Krishnan created HADOOP-14741:
---

 Summary: Refactor curator based ZooKeeper communication into 
common library
 Key: HADOOP-14741
 URL: https://issues.apache.org/jira/browse/HADOOP-14741
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Subru Krishnan
Assignee: Inigo Goiri


Currently we have ZooKeeper based store implementations for multiple state 
stores like RM, YARN Federation, HDFS router-based federation, RM queue configs 
etc. This jira proposes to unify the curator based ZK communication to 
eliminate redundancies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-07 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117245#comment-16117245
 ] 

Rushabh S Shah commented on HADOOP-14705:
-

Thanks [~xiaochen] for working on this.
Seems like a good change.

I have some review comments on the client side.
Will review the server-side changes later.

1.  If I understand the patch correctly, this server request will be either all 
or none.
I haven't seen HDFS-10899, so I might be missing some context.
Either all the EDEKs within the list will be re-encrypted, or, if any one ekv 
encounters an Exception, the whole call will fail.
It would be better if we don't fail if one EDEK fails to process.
We can return a Map. In case of failed operations, we can just set null and 
traverse the map on the client side to see which ekvs failed.

2. +KeyProviderCryptoExtension.java+
 Instead of creating a {{CryptoCodec}} and {{Encryptor}} for every ekv, we can 
create them just once.
I don't think they store any state related to each {{ekv}}.
Also, we need to close the {{CryptoCodec}} instance; I would use a 
try-with-resources block (see the sketch after these comments).

3.  +KMSClientProvider.java+
In the {{reencryptEncryptedKeys}} method, we need to add a null check along with 
the zero-length check:
if (ekvs == null || ekvs.isEmpty())

4. +KMSUtil.java+
{{toJSON(EncryptedKeyVersion encryptedKeyVersion)}} and 
{{toJSON(KeyProvider.KeyVersion keyVersion)}}
Why do we need to use LinkedHashMap?
I don't think the server side cares about insertion order while iterating over 
the map.

{{toJSON(EncryptedKeyVersion encryptedKeyVersion, String keyName)}}
Will the keyName ever be null?
We do a "not null" check in the calling method 
{{KMSClientProvider#reencryptEncryptedKeys(List ekvs)}}

{{checkNotNull}} and {{checkNotEmpty}}
Both of these are really utility methods; I would rather see them in 
KMSUtil instead of KMSClientProvider
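
To make these concrete, a rough sketch combining comments 1-3; every name below 
({{BatchReencryptSketch}}, {{reencryptOne}}, the chosen {{CipherSuite}}) is an 
assumption for illustration, not the actual patch:

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.CipherSuite;
import org.apache.hadoop.crypto.CryptoCodec;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;

class BatchReencryptSketch {
  private final Configuration conf = new Configuration();
  private final CipherSuite suite = CipherSuite.AES_CTR_NOPADDING;

  Map<EncryptedKeyVersion, EncryptedKeyVersion> reencryptEncryptedKeys(
      List<EncryptedKeyVersion> ekvs) throws IOException {
    // comment 3: guard against null as well as an empty list
    if (ekvs == null || ekvs.isEmpty()) {
      throw new IllegalArgumentException("ekvs must be non-null and non-empty");
    }
    Map<EncryptedKeyVersion, EncryptedKeyVersion> results =
        new HashMap<>(ekvs.size());
    // comment 2: one CryptoCodec for the whole batch, closed automatically
    try (CryptoCodec codec = CryptoCodec.getInstance(conf, suite)) {
      for (EncryptedKeyVersion ekv : ekvs) {
        try {
          results.put(ekv, reencryptOne(codec, ekv));
        } catch (IOException e) {
          // comment 1: record a per-key failure as null and keep going
          results.put(ekv, null);
        }
      }
    }
    return results;
  }

  // stand-in for the actual per-EDEK re-encryption logic
  private EncryptedKeyVersion reencryptOne(CryptoCodec codec,
      EncryptedKeyVersion ekv) throws IOException {
    throw new UnsupportedOperationException("illustrative stub");
  }
}
{code}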


> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12143) Add a style guide to the Hadoop documentation

2017-08-07 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-12143:
--
Description: 
We don't have a documented style guide for the Hadoop source or its tests other 
than "use the Java rules with two spaces". 

That doesn't cover policy like
# exception handling
# logging
# metrics
# what makes a good test
# why features that have O\(n\) or worse complexity or extra memory load on the 
NN & RM are "unwelcome",
# ... etc

We have those in our heads, and we reject patches for not following them; but 
as they aren't written down, how can we expect new submitters to follow them, 
or back up our vetoes with a policy to point at?

I propose having an up-to-date style guide which defines the best practices we 
expect for new code. That can be stricter than the existing codebase: we want 
things to improve.

  was:
We don't have a documented style guide for the Hadoop source or its tests other 
than "use the Java rules with two spaces". 

That doesn't cover policy like
# exception handling
# logging
# metrics
# what makes a good test
# why features that have O(n) or worse complexity or extra memory load on the 
NN & RM are "unwelcome",
# ... etc

We have those in our heads, and we reject patches for not following them; but 
as they aren't written down, how can we expect new submitters to follow them, 
or back up our vetoes with a policy to point at?

I propose having an up-to-date style guide which defines the best practices we 
expect for new code. That can be stricter than the existing codebase: we want 
things to improve.


> Add a style guide to the Hadoop documentation
> -
>
> Key: HADOOP-12143
> URL: https://issues.apache.org/jira/browse/HADOOP-12143
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> We don't have a documented style guide for the Hadoop source or its tests 
> other than "use the Java rules with two spaces". 
> That doesn't cover policy like
> # exception handling
> # logging
> # metrics
> # what makes a good test
> # why features that have O\(n\) or worse complexity or extra memory load on 
> the NN & RM are "unwelcome",
> # ... etc
> We have those in our heads, and we reject patches for not following them; but 
> as they aren't written down, how can we expect new submitters to follow them, 
> or back up our vetoes with a policy to point at?
> I propose having an up-to-date style guide which defines the best practices 
> we expect for new code. That can be stricter than the existing codebase: we 
> want things to improve.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12143) Add a style guide to the Hadoop documentation

2017-08-07 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117267#comment-16117267
 ] 

Arpit Agarwal commented on HADOOP-12143:


Someone posted this link on common-dev recently:
http://cr.openjdk.java.net/~alundblad/styleguide/index-v6.html

Looks like a much-needed update to the antique Java style guide.

> Add a style guide to the Hadoop documentation
> -
>
> Key: HADOOP-12143
> URL: https://issues.apache.org/jira/browse/HADOOP-12143
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> We don't have a documented style guide for the Hadoop source or its tests 
> other than "use the Java rules with two spaces". 
> That doesn't cover policy like
> # exception handling
> # logging
> # metrics
> # what makes a good test
> # why features that have O\(n\) or worse complexity or extra memory load on 
> the NN & RM are "unwelcome",
> # ... etc
> We have those in our heads, and we reject patches for not following them; but 
> as they aren't written down, how can we expect new submitters to follow them, 
> or back up our vetoes with a policy to point at?
> I propose having an up-to-date style guide which defines the best practices 
> we expect for new code. That can be stricter than the existing codebase: we 
> want things to improve.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-07 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117323#comment-16117323
 ] 

Chris Douglas commented on HADOOP-14730:


bq. Verified v004 did work in this use case: 3.0.0b1 hadoop-azure-datalake jar 
dropped into a 2.9.0 cluster. However, is the use case supported? Is the 
maintenance cost over benefit?
Thanks for testing this, John. I'll fix the other issues and upload a new patch.

bq. The inheritance of AdlFileStatus from FileStatus introduces tight coupling 
between these 2 classes, although AdlFileStatus is pretty simple at this point.
It's hard to do better, unfortunately, particularly since we deprecated, but 
still support, the {{Writable}} behavior in 3.x.

bq. However, if it's easy to support and useful, we might as well.
If it's linked against a 2.x client, this will drop the {{hasAcl}} field using 
{{Writable}} serialization. That... may be fixable, but 
{{FsPermissionExtension}} looks like it discards it, anyway. I'd ignore this 
case.

Maintaining this kind of backward-compatibility is ultimately not viable, but 
we can sustain the complexity added by v004. I'll fix the issues [~jzhuge] 
pointed out and we can wrap this up.

> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch
>
>
> 2 Unit Test cases are failing  [Azure-data-lake Module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
>  caused by the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-07 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14730:
---
Attachment: HADOOP-14730.005.patch

> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch, HADOOP-14730.005.patch
>
>
> 2 Unit Test cases are failing  [Azure-data-lake Module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
>  caused by the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10724) `hadoop fs -du -h` incorrectly formatted

2017-08-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117372#comment-16117372
 ] 

Andrew Wang commented on HADOOP-10724:
--

Is this still targeted at beta1?

> `hadoop fs -du -h` incorrectly formatted
> 
>
> Key: HADOOP-10724
> URL: https://issues.apache.org/jira/browse/HADOOP-10724
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Sam Steingold
>Assignee: Sam Steingold
> Attachments: 
> 0001-HADOOP-10724-do-not-insert-a-space-between-number-an.patch
>
>
> {{hadoop fs -du -h}} prints sizes with a space between the number and the 
> unit:
> {code}
> $ hadoop fs -du -h . 
> 91.7 G   
> 583.1 M  
> 97.6 K   .
> {code}
> The standard unix {{du -h}} does not:
> {code}
> $ du -h
> 400K...
> 404K
> 480K.
> {code}
> the result is that the output of {{du -h}} is properly sorted by {{sort -h}} 
> while the output of {{hadoop fs -du -h}} is *not* properly sorted by it.
> Please see 
> * [sort|http://linux.die.net/man/1/sort]: "-h --human-numeric-sort
> compare human readable numbers (e.g., 2K 1G) "
> * [du|http://linux.die.net/man/1/du]: "-h, --human-readable
> print sizes in human readable format (e.g., 1K 234M 2G) "
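
Purely for illustration (this is not the attached patch): a self-contained 
formatter that omits the space between the number and the unit, so {{sort -h}} 
orders the output correctly:

{code:java}
// Illustrative only: format byte counts with no space before the unit.
public class HumanReadableSketch {
  static String human(long bytes) {
    final String[] units = {"", "K", "M", "G", "T", "P"};
    double v = bytes;
    int i = 0;
    while (v >= 1024 && i < units.length - 1) {
      v /= 1024;
      i++;
    }
    return i == 0 ? Long.toString(bytes)
                  : String.format("%.1f%s", v, units[i]);
  }

  public static void main(String[] args) {
    System.out.println(human(98458306150L)); // prints 91.7G, no space
  }
}
{code}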



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117376#comment-16117376
 ] 

Hadoop QA commented on HADOOP-14730:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 14s{color} 
| {color:red} hadoop-tools_hadoop-azure-datalake generated 4 new + 5 unchanged 
- 6 fixed = 9 total (was 11) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} hadoop-tools/hadoop-azure-datalake: The patch 
generated 0 new + 0 unchanged - 20 fixed = 0 total (was 20) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
32s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14730 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880712/HADOOP-14730.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 57ecbf3ac5ae 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / adb84f3 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12969/artifact/patchprocess/diff-compile-javac-hadoop-tools_hadoop-azure-datalake.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12969/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12969/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug

[jira] [Updated] (HADOOP-11790) leveldb usage should be disabled by default or smarter about platforms

2017-08-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11790:
-
Priority: Major  (was: Blocker)

I'm dropping this to a major for now.

> leveldb usage should be disabled by default or smarter about platforms
> --
>
> Key: HADOOP-11790
> URL: https://issues.apache.org/jira/browse/HADOOP-11790
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0, 3.0.0-alpha3
> Environment: * any non-x86
> * any OS that isn't Linux, OSX, Windows
>Reporter: Ayappan
>
> The leveldbjni artifact in the maven repository has been built for only the x86 
> architecture, due to which some of the testcases are failing on PowerPC. The 
> leveldbjni community has no plans to support other platforms [ 
> https://github.com/fusesource/leveldbjni/issues/54 ]. Right now, the 
> approach is that we need to build leveldbjni locally prior to running the hadoop 
> testcases. Pushing a PowerPC-specific leveldbjni artifact to the central maven 
> repository and making pom.xml pick it up when running on PowerPC is 
> another option, but I don't know whether this is a suitable one. Is there any 
> other alternative/solution?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10724) `hadoop fs -du -h` incorrectly formatted

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117395#comment-16117395
 ] 

Hadoop QA commented on HADOOP-10724:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-10724 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-10724 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12728878/0001-HADOOP-10724-do-not-insert-a-space-between-number-an.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12970/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> `hadoop fs -du -h` incorrectly formatted
> 
>
> Key: HADOOP-10724
> URL: https://issues.apache.org/jira/browse/HADOOP-10724
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Sam Steingold
>Assignee: Sam Steingold
> Attachments: 
> 0001-HADOOP-10724-do-not-insert-a-space-between-number-an.patch
>
>
> {{hadoop fs -du -h}} prints sizes with a space between the number and the 
> unit:
> {code}
> $ hadoop fs -du -h . 
> 91.7 G   
> 583.1 M  
> 97.6 K   .
> {code}
> The standard unix {{du -h}} does not:
> {code}
> $ du -h
> 400K...
> 404K
> 480K.
> {code}
> the result is that the output of {{du -h}} is properly sorted by {{sort -h}} 
> while the output of {{hadoop fs -du -h}} is *not* properly sorted by it.
> Please see 
> * [sort|http://linux.die.net/man/1/sort]: "-h --human-numeric-sort
> compare human readable numbers (e.g., 2K 1G) "
> * [du|http://linux.die.net/man/1/du]: "-h, --human-readable
> print sizes in human readable format (e.g., 1K 234M 2G) "



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-07 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-14741:
-
Attachment: HADOOP-14741-000.patch

Proposal for refactor.

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Inigo Goiri
> Attachments: HADOOP-14741-000.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-07 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117406#comment-16117406
 ] 

Inigo Goiri commented on HADOOP-14741:
--

The main problem with unifying is that, with the current proposal, all the 
curators are configured with the same parameters (i.e., same ZK address). 
Personally, we are OK with that, but I understand there might be other 
deployments with finer-grained requirements.

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Inigo Goiri
> Attachments: HADOOP-14741-000.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-07 Thread Esfandiar Manii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-14715:
-
Attachment: HADOOP-14715-003.patch

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-07 Thread Esfandiar Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117441#comment-16117441
 ] 

Esfandiar Manii edited comment on HADOOP-14715 at 8/7/17 10:26 PM:
---

Updated wrt comments from Thomas


was (Author: esmanii):
Updated comments from Thomas

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-07 Thread Esfandiar Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117441#comment-16117441
 ] 

Esfandiar Manii commented on HADOOP-14715:
--

Updated comments from Thomas

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10724) `hadoop fs -du -h` incorrectly formatted

2017-08-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10724:
-
Status: Open  (was: Patch Available)

> `hadoop fs -du -h` incorrectly formatted
> 
>
> Key: HADOOP-10724
> URL: https://issues.apache.org/jira/browse/HADOOP-10724
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Sam Steingold
>Assignee: Sam Steingold
> Attachments: 
> 0001-HADOOP-10724-do-not-insert-a-space-between-number-an.patch
>
>
> {{hadoop fs -du -h}} prints sizes with a space between the number and the 
> unit:
> {code}
> $ hadoop fs -du -h . 
> 91.7 G   
> 583.1 M  
> 97.6 K   .
> {code}
> The standard unix {{du -h}} does not:
> {code}
> $ du -h
> 400K...
> 404K
> 480K.
> {code}
> the result is that the output of {{du -h}} is properly sorted by {{sort -h}} 
> while the output of {{hadoop fs -du -h}} is *not* properly sorted by it.
> Please see 
> * [sort|http://linux.die.net/man/1/sort]: "-h --human-numeric-sort
> compare human readable numbers (e.g., 2K 1G) "
> * [du|http://linux.die.net/man/1/du]: "-h, --human-readable
> print sizes in human readable format (e.g., 1K 234M 2G) "



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14738) Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117455#comment-16117455
 ] 

Allen Wittenauer commented on HADOOP-14738:
---

Why not just remove it in 3.0?

> Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8.
> It's now time to kill S3N off, remove the source, the tests, the transitive 
> dependencies.
> I propose that in Hadoop 3.0 beta we tell people off from using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config ops).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-07 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117486#comment-16117486
 ] 

Thomas Marquardt commented on HADOOP-14715:
---

Looks good. +1

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-07 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117497#comment-16117497
 ] 

John Zhuge commented on HADOOP-14730:
-

Thanks [~chris.douglas]. I am ok if both [~vishwajeet.dusane] and you found the 
2.x linkage useful.

AdlFileStatus#AdlFileStatus() is not used.
AdlFileStatus#hashCode is probably ok not to involve {{hasAcl}}, but don't you 
think {{equals}} should return false when {{hasAcl}} differs between 2 objects?

A typical implementation of these 2 methods may look like this:
{noformat}
  @Override
  public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
if (!super.equals(o)) return false;
AdlFileStatus that = (AdlFileStatus) o;
return hasAcl == that.hasAcl;
  }

  @Override
  public int hashCode() {
return Objects.hash(super.hashCode(), hasAcl);
  }
{noformat}

See 

> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch, HADOOP-14730.005.patch
>
>
> 2 Unit Test cases are failing  [Azure-data-lake Module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
>  caused by the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-07 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117497#comment-16117497
 ] 

John Zhuge edited comment on HADOOP-14730 at 8/7/17 10:53 PM:
--

Thanks [~chris.douglas]. I am ok if both [~vishwajeet.dusane] and you found the 
2.x linkage useful.

AdlFileStatus#AdlFileStatus() is not used.
AdlFileStatus#hashCode is probably ok not to involve {{hasAcl}}, but don't you 
think {{equals}} should return false when {{hasAcl}} differs between 2 objects?

A typical implementation of these 2 methods may look like this:
{noformat}
  @Override
  public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
if (!super.equals(o)) return false;
AdlFileStatus that = (AdlFileStatus) o;
return hasAcl == that.hasAcl;
  }

  @Override
  public int hashCode() {
return Objects.hash(super.hashCode(), hasAcl);
  }
{noformat}


was (Author: jzhuge):
Thanks [~chris.douglas]. I am ok if both [~vishwajeet.dusane] and you found the 
2.x linkage useful.

AdlFileStatus#AdlFileStatus() is not used.
AdlFileStatus#hashCode is probably ok not to involve {{hasAcl}}, but don't you 
think {{equals}} should return false when {{hasAcl}} differs between 2 objects?

A typical implementation of these 2 methods may look like this:
{noformat}
  @Override
  public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
if (!super.equals(o)) return false;
AdlFileStatus that = (AdlFileStatus) o;
return hasAcl == that.hasAcl;
  }

  @Override
  public int hashCode() {
return Objects.hash(super.hashCode(), hasAcl);
  }
{noformat}

See 

> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch, HADOOP-14730.005.patch
>
>
> 2 Unit Test cases are failing  [Azure-data-lake Module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
>  caused by the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117517#comment-16117517
 ] 

Xiao Chen commented on HADOOP-14705:


Thanks for reviewing [~shahrs87]. Will address the other comments, but want to 
discuss these 2:
bq. 1. It would be better if we don't fail if one EDEK fails to process.
Also thought about this, but didn't do it for these reasons:
- It's hard to throw a clear exception that tells what went wrong on the 
failures (i.e. throw the first, the last, or some aggregated exception that 
contains all of them?).
- I'm not sure a partially re-encrypted batch is useful to the clients. For the 
namenode, it'll make tracking harder, so the NN will call again.
- Complexity on both the server (need to catch exceptions in the middle and 
continue with the rest of the keys, potentially keeping some of the exceptions 
for the final throw) and the client (need to look up each key and handle it 
differently depending on whether the return is null).

bq. 4. KMSUtil.java Why do we need to use LinkedHashMap?
I'm also not 100% sure why LinkedHashMap is required; probably it isn't.
This patch just moves the existing util methods, so I'd like to leave this 
change to a new jira for cleanliness.

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-07 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117527#comment-16117527
 ] 

Subru Krishnan commented on HADOOP-14741:
-

Thanks [~goiri] for working on this. I am fine with merging the configurations 
because:
* I feel *strongly* that we already have way too many config knobs.
* From what I have observed in production deployments, RMs and NNs are deployed 
on different physical machines, so they could still have different values (i.e. 
role-based configuration). Additionally, even on the same machine you can set 
the hadoop confs per process.

Regarding the patch itself, I just have a few minor comments:
* {{CuratorManager}} --> {{ZKCuratorManager}} (or {{CuratorZKManager}}); a rough 
sketch follows below.
* The keys should be defined in {{CommonConfigurationKeys}} so that they can be 
overridden via _core-site.xml_.
* Can you add an *exists* check before and after we do an op in the tests?
* Nit: some public methods are missing Javadocs.
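
To illustrate the direction, a rough sketch of such a class (the name follows 
the first comment above; the config keys and method set are assumptions, not 
the committed code):

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryNTimes;
import org.apache.hadoop.conf.Configuration;

public class ZKCuratorManager {
  // keys like these would live in CommonConfigurationKeys so that
  // core-site.xml can override them (names assumed)
  static final String ZK_ADDRESS = "hadoop.zk.address";
  static final String ZK_NUM_RETRIES = "hadoop.zk.num-retries";
  static final String ZK_RETRY_INTERVAL_MS = "hadoop.zk.retry-interval-ms";

  private final CuratorFramework curator;

  public ZKCuratorManager(Configuration conf) {
    curator = CuratorFrameworkFactory.newClient(
        conf.get(ZK_ADDRESS),
        new RetryNTimes(conf.getInt(ZK_NUM_RETRIES, 1000),
            conf.getInt(ZK_RETRY_INTERVAL_MS, 1000)));
    curator.start();
  }

  public boolean exists(String path) throws Exception {
    return curator.checkExists().forPath(path) != null;
  }

  public void create(String path, byte[] data) throws Exception {
    curator.create().creatingParentsIfNeeded().forPath(path, data);
  }

  public void close() {
    curator.close();
  }
}
{code}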

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14741-000.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-07 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14730:
---
Attachment: HADOOP-14730.006.patch

bq. AdlFileStatus#AdlFileStatus() is not used.
Updated patch.

bq. AdlFileStatus#hashCode is probably ok not to involve hasAcl, but don't you 
think equals should return false when hasAcl differs between 2 objects?
{{FileStatus::equals}} only checks that the {{Path}} matches, so ignoring the 
{{hasAcl}} field is consistent.

> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch, HADOOP-14730.005.patch, 
> HADOOP-14730.006.patch
>
>
> 2 Unit Test cases are failing  [Azure-data-lake Module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
>  caused by the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-14741:
-
Status: Patch Available  (was: Open)

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14741-000.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-07 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-12077:
---
Attachment: HADOOP-12077.008.patch

> Provide a multi-URI replication Inode for ViewFs
> 
>
> Key: HADOOP-12077
> URL: https://issues.apache.org/jira/browse/HADOOP-12077
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, 
> HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, 
> HADOOP-12077.006.patch, HADOOP-12077.007.patch, HADOOP-12077.008.patch
>
>
> This JIRA is to provide simple "replication" capabilities for applications 
> that maintain logically equivalent paths in multiple locations for caching or 
> failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern 
> in our applications. They host their data on some logical cluster C. There 
> are corresponding HDFS clusters in multiple datacenters. When the application 
> runs in DC1, it prefers to read from C in DC1, and the applications prefers 
> to failover to C in DC2 if the application is migrated to DC2 or when C in 
> DC1 is unavailable. New application data versions are created 
> periodically/relatively infrequently. 
> In order to address many common scenarios in a general fashion, and to avoid 
> unnecessary code duplication, we implement this functionality in ViewFs (our 
> default FileSystem spanning all clusters in all datacenters) in a project 
> code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points 
> to a single URI via ChRootedFileSystem. Consequently, we introduce a new type 
> of links that points to a list of URIs that are each going to be wrapped in 
> ChRootedFileSystem. A typical usage: 
> /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of 
> ChRootedFileSystem instances is fronted by the Nfly filesystem object that is 
> actually used for the mount point/Inode. The Nfly filesystem backs a single 
> logical path /nfly/C/user//path with multiple physical paths.
> The Nfly filesystem supports setting minReplication. As long as the number of 
> URIs on which an update has succeeded is greater than or equal to 
> minReplication, exceptions are only logged but not thrown. Each update 
> operation is currently executed serially (client-bandwidth driven parallelism 
> will be added later). 
> A file create/write: 
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted 
> filesystem. 
> # Returns an FSDataOutputStream that wraps the output streams returned by step 1.
> # All writes are forwarded to each output stream.
> # On close of the stream created in step 2, all n streams are closed, and the 
> files are renamed from _nfly_tmp_file to file. All files receive the same 
> mtime, corresponding to the client system time as of the beginning of this step. 
> # If at least minReplication destinations have gone through steps 1-4 without 
> failures, the transaction is considered logically committed; otherwise a 
> best-effort attempt is made to clean up the temporary files.
> As for reads, we support a notion of locality similar to HDFS  /DC/rack/node. 
> We sort Inode URIs using NetworkTopology by their authorities. These are 
> typically host names in simple HDFS URIs. If the authority is missing, as is 
> the case with the local file:///, the local host name 
> (InetAddress.getLocalHost()) is assumed. This makes sure that the local file 
> system is always the closest one to the reader in this approach. For our 
> Hadoop 2 hdfs URIs, which are based on nameservice ids instead of hostnames, 
> it is very easy to adjust the topology script since our nameservice ids 
> already contain the 
> datacenter. As for rack and node we can simply output any string such as 
> /DC/rack-nsid/node-nsid, since we only care about datacenter-locality for 
> such filesystem clients.
> There are 2 policies/additions to the read call path that make it more 
> expensive but improve the user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks mtime for 
> the path under all URIs, sorts them from most recent to least recent. Nfly 
> then sorts the set of most recent URIs topologically in the same manner as 
> described above.
> - repairOnRead - when readMostRecent is enabled Nfly already has to RPC all 
> underlying destinations. With repairOnRead, Nfly filesystem would 
> additionally attempt to refresh destinations with the path missing or a stale 
> version of the path using the nearest available most recent destination. 
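
A minimal configuration sketch of such an Nfly mount point follows. The 
property keys ({{linkNfly}}, {{minReplication}}) and the host names are 
assumptions for illustration only; the exact names are defined by the 
HADOOP-12077 patch.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NflyMountSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical key format: one logical path backed by two datacenters.
    conf.set("fs.viewfs.mounttable.C.linkNfly./user",
        "hdfs://dc1-nn/C/user,hdfs://dc2-nn/C/user");
    // Hypothetical option: a write commits once one destination succeeds.
    conf.set("fs.viewfs.mounttable.C.nfly.minReplication", "1");

    // Writes fan out to every destination; reads go to the topologically
    // closest copy, as described in the steps above.
    FileSystem viewFs = FileSystem.get(URI.create("viewfs://C/"), conf);
    viewFs.listStatus(new Path("/user"));
  }
}
{code}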



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache

[jira] [Commented] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117566#comment-16117566
 ] 

Hadoop QA commented on HADOOP-14730:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 13s{color} 
| {color:red} hadoop-tools_hadoop-azure-datalake generated 4 new + 5 unchanged 
- 6 fixed = 9 total (was 11) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} hadoop-tools/hadoop-azure-datalake: The patch 
generated 0 new + 0 unchanged - 20 fixed = 0 total (was 20) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
29s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14730 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880731/HADOOP-14730.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2dc8cc7f48d8 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bc20680 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12971/artifact/patchprocess/diff-compile-javac-hadoop-tools_hadoop-azure-datalake.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12971/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12971/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type:

[jira] [Commented] (HADOOP-14418) Confusing failure stack trace when codec fallback happens

2017-08-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117571#comment-16117571
 ] 

Wei-Chiu Chuang commented on HADOOP-14418:
--

I believe this one is a dup of HDFS-12094.

Now that HDFS-12094 was committed, I'm going to resolve this one as a dup.

> Confusing failure stack trace when codec fallback happens
> 
>
> Key: HADOOP-14418
> URL: https://issues.apache.org/jira/browse/HADOOP-14418
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HADOOP-14418.01.patch, HADOOP-14418.02.patch
>
>
> When the erasure codec falls back, the full stack trace is shown to the client.
> {code}
> root@990705591ccc:/usr/local/hadoop# bin/hadoop fs -put README.txt /ec
> 17/05/13 08:23:46 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/05/13 08:23:47 WARN erasurecode.CodecUtil: Failed to create raw erasure 
> encoder rs_native, fallback to next codec if possible
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawErasureCoderFactory.createEncoder(NativeRSRawErasureCoderFactory.java:35)
>   at 
> org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoderWithFallback(CodecUtil.java:173)
>   at 
> org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoder(CodecUtil.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.<init>(DFSStripedOutputStream.java:302)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:309)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1214)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1193)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1131)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:449)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:446)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:460)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.create(FilterFileSystem.java:181)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1074)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1054)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:943)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.create(CommandWithDestination.java:509)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:484)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:407)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:342)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:277)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:262)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:257)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:228)
>   at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:286)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.RuntimeException: hadoop native library cannot be loaded.
>   at 
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.checkNativeCodeLoaded(ErasureCodeNative.java:69)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawEncoder.<init>(NativeRSRawEncoder.

[jira] [Resolved] (HADOOP-14418) Confusing failure stack trace when codec fallback happens

2017-08-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-14418.
--
Resolution: Duplicate

Thanks [~lewuathe] for the work here!

> Confusing failure stack trace when codec fallback happens
> 
>
> Key: HADOOP-14418
> URL: https://issues.apache.org/jira/browse/HADOOP-14418
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HADOOP-14418.01.patch, HADOOP-14418.02.patch
>
>
> When the erasure codec falls back, the full stack trace is shown to the client.
> {code}
> root@990705591ccc:/usr/local/hadoop# bin/hadoop fs -put README.txt /ec
> 17/05/13 08:23:46 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/05/13 08:23:47 WARN erasurecode.CodecUtil: Failed to create raw erasure 
> encoder rs_native, fallback to next codec if possible
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawErasureCoderFactory.createEncoder(NativeRSRawErasureCoderFactory.java:35)
>   at 
> org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoderWithFallback(CodecUtil.java:173)
>   at 
> org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoder(CodecUtil.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.<init>(DFSStripedOutputStream.java:302)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:309)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1214)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1193)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1131)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:449)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:446)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:460)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.create(FilterFileSystem.java:181)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1074)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1054)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:943)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.create(CommandWithDestination.java:509)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:484)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:407)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:342)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:277)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:262)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:257)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:228)
>   at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:286)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.RuntimeException: hadoop native library cannot be loaded.
>   at 
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.checkNativeCodeLoaded(ErasureCodeNative.java:69)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawEncoder.<init>(NativeRSRawEncoder.java:33)
>   ... 36 more
> root@990705591ccc:/usr/local/hadoop#
> {code}
> This message is confusing to us
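
For digest readers, here is a sketch of the quieter pattern a fix along the 
lines of HDFS-12094 implies: a one-line WARN when an encoder cannot be 
created, with the full stack trace demoted to DEBUG. The {{EncoderFactory}} 
interface and all names below are illustrative, not Hadoop's actual CodecUtil 
API.

{code:java}
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class CodecFallbackSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(CodecFallbackSketch.class);

  /** Illustrative factory abstraction; not Hadoop's real API. */
  interface EncoderFactory<T> {
    T create();
  }

  /**
   * Try each factory in order, logging failures briefly instead of
   * printing a full stack trace to the client.
   */
  static <T> T createWithFallback(List<EncoderFactory<T>> factories,
      String codecName) {
    for (EncoderFactory<T> factory : factories) {
      try {
        return factory.create();
      } catch (LinkageError | RuntimeException e) {
        // One readable WARN line for clients...
        LOG.warn("Failed to create raw erasure encoder {}, fallback to next "
            + "codec if possible: {}", codecName, e.toString());
        // ...full details preserved for debugging.
        LOG.debug("Encoder creation failure", e);
      }
    }
    throw new IllegalStateException("No usable raw encoder for " + codecName);
  }
}
{code}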

[jira] [Created] (HADOOP-14742) Document multi-URI replication Inode for ViewFS

2017-08-07 Thread Chris Douglas (JIRA)
Chris Douglas created HADOOP-14742:
--

 Summary: Document multi-URI replication Inode for ViewFS
 Key: HADOOP-14742
 URL: https://issues.apache.org/jira/browse/HADOOP-14742
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation, viewfs
Affects Versions: 3.0.0-beta1
Reporter: Chris Douglas


HADOOP-12077 added client-side "replication" capabilities to ViewFS. Its 
semantics and configuration should be documented.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-07 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117575#comment-16117575
 ] 

Chris Douglas commented on HADOOP-12077:


bq. this patch as it is has covered the main functionality. We can commit this 
patch after fixing checkstyle etc and then use separate jiras for follow-up for 
major issues. Thoughts?
The unit tests around ViewFS are pretty thorough, so I'm reasonably sure this 
won't cause a regression. We can handle the other issues as followup. Would it 
be difficult to verify that basic operations work as expected in your 
environment?

Filed HADOOP-14742 to update the ViewFS 
[docs|http://hadoop.apache.org/docs/r3.0.0-alpha4/hadoop-project-dist/hadoop-hdfs/ViewFs.html].

> Provide a multi-URI replication Inode for ViewFs
> 
>
> Key: HADOOP-12077
> URL: https://issues.apache.org/jira/browse/HADOOP-12077
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, 
> HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, 
> HADOOP-12077.006.patch, HADOOP-12077.007.patch, HADOOP-12077.008.patch
>
>
> This JIRA is to provide simple "replication" capabilities for applications 
> that maintain logically equivalent paths in multiple locations for caching or 
> failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern 
> in our applications. They host their data on some logical cluster C. There 
> are corresponding HDFS clusters in multiple datacenters. When the application 
> runs in DC1, it prefers to read from C in DC1, and it prefers to fail over 
> to C in DC2 if the application is migrated to DC2 or when C in DC1 is 
> unavailable. New application data versions are created periodically and 
> relatively infrequently. 
> In order to address many common scenarios in a general fashion, and to avoid 
> unnecessary code duplication, we implement this functionality in ViewFs (our 
> default FileSystem spanning all clusters in all datacenters) in a project 
> code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points 
> to a single URI via ChRootedFileSystem. Consequently, we introduce a new type 
> of link that points to a list of URIs, each of which is wrapped in a 
> ChRootedFileSystem. A typical usage: 
> /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of 
> ChRootedFileSystem instances is fronted by the Nfly filesystem object that is 
> actually used for the mount point/Inode. The Nfly filesystem backs a single 
> logical path /nfly/C/user/<path> with multiple physical paths.
> The Nfly filesystem supports setting minReplication. As long as the number 
> of URIs on which an update has succeeded is greater than or equal to 
> minReplication, exceptions are only logged, not thrown. Each update 
> operation is currently executed serially (client-bandwidth-driven 
> parallelism will be added later). 
> A file create/write: 
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted 
> filesystem. 
> # Returns an FSDataOutputStream that wraps the output streams returned by 
> step 1.
> # All writes are forwarded to each output stream.
> # On close of the stream created in step 2, all n streams are closed, and 
> the files are renamed from _nfly_tmp_file to file. All files receive the 
> same mtime, corresponding to the client system time at the beginning of this 
> step. 
> # If at least minReplication destinations have gone through steps 1-4 
> without failure, the transaction is considered logically committed; 
> otherwise a best-effort attempt is made to clean up the temporary files.
> As for reads, we support a notion of locality similar to HDFS /DC/rack/node. 
> We sort Inode URIs by their authorities using NetworkTopology. These are 
> typically host names in simple HDFS URIs. If the authority is missing, as is 
> the case with the local file:/// URI, the local host name from 
> InetAddress.getLocalHost() is assumed. This ensures that the local file 
> system is always the closest one to the reader in this approach. For our 
> Hadoop 2 hdfs URIs that are based on nameservice ids instead of hostnames, 
> it is very easy to adjust the topology script, since our nameservice ids 
> already contain the datacenter. As for rack and node, we can simply output 
> any string, such as /DC/rack-nsid/node-nsid, since we only care about 
> datacenter locality for such filesystem clients.
> There are 2 policies/additions to the read call path that make it more 
> expensive but improve the user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks the mtime 
> of the path under all URIs and sorts them from most recent to least recent. 
> Nfly then sorts 

[jira] [Created] (HADOOP-14743) CompositeGroupsMapping should not swallow exceptions

2017-08-07 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-14743:


 Summary: CompositeGroupsMapping should not swallow exceptions
 Key: HADOOP-14743
 URL: https://issues.apache.org/jira/browse/HADOOP-14743
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


{code:title=CompositeGroupsMapping#getGroups}
   for (GroupMappingServiceProvider provider : providersList) {
  try {
groups = provider.getGroups(user);
  } catch (Exception e) {
//LOG.warn("Exception trying to get groups for user " + user, e);  
  }
  if (groups != null && ! groups.isEmpty()) {
groupSet.addAll(groups);
if (!combined) break;
  }
}
{code}
If anything fails inside the underlying groups mapping service provider, 
there's no way to tell what went wrong.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14743) CompositeGroupsMapping should not swallow exceptions

2017-08-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14743:
-
Attachment: HADOOP-14743.001.patch

> CompositeGroupsMapping should not swallow exceptions
> 
>
> Key: HADOOP-14743
> URL: https://issues.apache.org/jira/browse/HADOOP-14743
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-14743.001.patch
>
>
> {code:title=CompositeGroupsMapping#getGroups}
>for (GroupMappingServiceProvider provider : providersList) {
>   try {
> groups = provider.getGroups(user);
>   } catch (Exception e) {
> //LOG.warn("Exception trying to get groups for user " + user, e); 
>  
>   }
>   if (groups != null && ! groups.isEmpty()) {
> groupSet.addAll(groups);
> if (!combined) break;
>   }
> }
> {code}
> If anything fails inside the underlying groups mapping service provider, 
> there's no way to tell what went wrong.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14743) CompositeGroupsMapping should not swallow exceptions

2017-08-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117597#comment-16117597
 ] 

Wei-Chiu Chuang commented on HADOOP-14743:
--

Attaching patch rev 001. The patch prints a warning message, and a more 
detailed stack trace is printed at the debug log level.

After this patch, CompositeGroupsMapping records the following log:
{noformat}
2017-08-07 17:05:31,696 WARN  security.CompositeGroupsMapping 
(CompositeGroupsMapping.java:getGroups(77)) - Unable to get groups for user 
Jack via UserProvider because: java.io.IOException: foo
2017-08-07 17:05:31,708 DEBUG security.CompositeGroupsMapping 
(CompositeGroupsMapping.java:getGroups(79)) - Stacktrace: 
java.io.IOException: foo
at 
org.apache.hadoop.security.TestCompositeGroupMapping$UserProvider.getGroups(TestCompositeGroupMapping.java:120)
at 
org.apache.hadoop.security.CompositeGroupsMapping.getGroups(CompositeGroupsMapping.java:75)
at 
org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:384)
at 
org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:319)
at 
org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:269)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:227)

{noformat}
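
Inferred from the sample log above (the message format and the 
{{getGroups(77)}}/{{getGroups(79)}} line numbers), the patched loop plausibly 
becomes something like the following; treat it as a sketch rather than the 
attached diff:

{code:java}
for (GroupMappingServiceProvider provider : providersList) {
  try {
    groups = provider.getGroups(user);
  } catch (Exception e) {
    // One-line WARN naming the failing provider and the cause...
    LOG.warn("Unable to get groups for user {} via {} because: {}",
        user, provider.getClass().getSimpleName(), e.toString());
    // ...with the full stack trace kept at DEBUG level.
    LOG.debug("Stacktrace: ", e);
  }
  if (groups != null && !groups.isEmpty()) {
    groupSet.addAll(groups);
    if (!combined) break;
  }
}
{code}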

> CompositeGroupsMapping should not swallow exceptions
> 
>
> Key: HADOOP-14743
> URL: https://issues.apache.org/jira/browse/HADOOP-14743
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-14743.001.patch
>
>
> {code:title=CompositeGroupsMapping#getGroups}
>for (GroupMappingServiceProvider provider : providersList) {
>   try {
> groups = provider.getGroups(user);
>   } catch (Exception e) {
> //LOG.warn("Exception trying to get groups for user " + user, e); 
>  
>   }
>   if (groups != null && ! groups.isEmpty()) {
> groupSet.addAll(groups);
> if (!combined) break;
>   }
> }
> {code}
> If anything fails inside the underlying groups mapping service provider, 
> there's no way to tell what went wrong.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14743) CompositeGroupsMapping should not swallow exceptions

2017-08-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14743:
-
Status: Patch Available  (was: Open)

> CompositeGroupsMapping should not swallow exceptions
> 
>
> Key: HADOOP-14743
> URL: https://issues.apache.org/jira/browse/HADOOP-14743
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-14743.001.patch, HADOOP-14743.002.patch
>
>
> {code:title=CompositeGroupsMapping#getGroups}
>for (GroupMappingServiceProvider provider : providersList) {
>   try {
> groups = provider.getGroups(user);
>   } catch (Exception e) {
> //LOG.warn("Exception trying to get groups for user " + user, e); 
>  
>   }
>   if (groups != null && ! groups.isEmpty()) {
> groupSet.addAll(groups);
> if (!combined) break;
>   }
> }
> {code}
> If anything fails inside the underlying groups mapping service provider, 
> there's no way to tell what went wrong.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14743) CompositeGroupsMapping should not swallow exceptions

2017-08-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14743:
-
Attachment: HADOOP-14743.002.patch

Submitting patch rev 002 to use slf4j's curly-brace style parameters.

> CompositeGroupsMapping should not swallow exceptions
> 
>
> Key: HADOOP-14743
> URL: https://issues.apache.org/jira/browse/HADOOP-14743
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-14743.001.patch, HADOOP-14743.002.patch
>
>
> {code:title=CompositeGroupsMapping#getGroups}
>for (GroupMappingServiceProvider provider : providersList) {
>   try {
> groups = provider.getGroups(user);
>   } catch (Exception e) {
> //LOG.warn("Exception trying to get groups for user " + user, e); 
>  
>   }
>   if (groups != null && ! groups.isEmpty()) {
> groupSet.addAll(groups);
> if (!combined) break;
>   }
> }
> {code}
> If anything fails inside the underlying groups mapping service provider, 
> there's no way to tell what went wrong.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14743) CompositeGroupsMapping should not swallow exceptions

2017-08-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14743:
-
Affects Version/s: 2.5.0

> CompositeGroupsMapping should not swallow exceptions
> 
>
> Key: HADOOP-14743
> URL: https://issues.apache.org/jira/browse/HADOOP-14743
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-14743.001.patch, HADOOP-14743.002.patch
>
>
> {code:title=CompositeGroupsMapping#getGroups}
>for (GroupMappingServiceProvider provider : providersList) {
>   try {
> groups = provider.getGroups(user);
>   } catch (Exception e) {
> //LOG.warn("Exception trying to get groups for user " + user, e); 
>  
>   }
>   if (groups != null && ! groups.isEmpty()) {
> groupSet.addAll(groups);
> if (!combined) break;
>   }
> }
> {code}
> If anything fails inside the underlying groups mapping service provider, 
> there's no way to tell what went wrong.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14743) CompositeGroupsMapping should not swallow exceptions

2017-08-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14743:
-
Target Version/s: 2.9.0, 3.0.0-beta1, 2.8.2, 2.7.5
 Component/s: security

> CompositeGroupsMapping should not swallow exceptions
> 
>
> Key: HADOOP-14743
> URL: https://issues.apache.org/jira/browse/HADOOP-14743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-14743.001.patch, HADOOP-14743.002.patch
>
>
> {code:title=CompositeGroupsMapping#getGroups}
>for (GroupMappingServiceProvider provider : providersList) {
>   try {
> groups = provider.getGroups(user);
>   } catch (Exception e) {
> //LOG.warn("Exception trying to get groups for user " + user, e); 
>  
>   }
>   if (groups != null && ! groups.isEmpty()) {
> groupSet.addAll(groups);
> if (!combined) break;
>   }
> }
> {code}
> If anything fails inside the underlying groups mapping service provider, 
> there's no way to tell what went wrong.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-14741:
-
Attachment: HADOOP-14741-001.patch

Tackling [~subru]'s comments.

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14741-000.patch, HADOOP-14741-001.patch
>
>
> Currently we have ZooKeeper-based store implementations for multiple state 
> stores, such as the RM, YARN Federation, HDFS router-based federation, and 
> RM queue configs. This JIRA proposes to unify the Curator-based ZK 
> communication to eliminate redundancy.
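
A minimal sketch of the kind of shared Curator wrapper being proposed, 
assuming a hypothetical {{SharedZKClient}} class (the committed API may 
differ):

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryNTimes;

public class SharedZKClient implements AutoCloseable {
  private final CuratorFramework curator;

  public SharedZKClient(String connectString, int numRetries, int sleepMs) {
    // One place for connection and retry policy instead of one per store.
    this.curator = CuratorFrameworkFactory.newClient(
        connectString, new RetryNTimes(numRetries, sleepMs));
    this.curator.start();
  }

  /** Create a znode with data, or overwrite it if it already exists. */
  public void setData(String path, byte[] data) throws Exception {
    if (curator.checkExists().forPath(path) == null) {
      curator.create().creatingParentsIfNeeded().forPath(path, data);
    } else {
      curator.setData().forPath(path, data);
    }
  }

  public byte[] getData(String path) throws Exception {
    return curator.getData().forPath(path);
  }

  @Override
  public void close() {
    curator.close();
  }
}
{code}

Each state store would then hold one such client instead of duplicating 
connection, retry, and znode-manipulation code.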



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-07 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
Attachment: HADOOP-14398.02.patch

Thanks for the reviews, [~andrew.wang]

bq. be invoked . 
bq. there's a typo in the builder doc: "recurisve"
bq. Would be better to use an fake "FooFileSystem"

Fixed. 

bq. a behavior change compared to the current create APIs,
bq. Should we also call out the change in default behavior compared to the 
existing create call?

{{overwrite}} is the same as in {{FS#create}}, while {{recursive()}} is 
changed. Modified in the doc. I think those are the only changes. 
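
For context, a minimal sketch of the two defaults discussed above, 
illustrative only:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class BuilderDefaultsSketch {
  // fs is any configured FileSystem instance.
  static void write(FileSystem fs) throws IOException {
    FSDataOutputStream out = fs.createFile(new Path("/tmp/example"))
        .overwrite(true)  // same default semantics as FS#create
        .recursive()      // parent creation is now opt-in, per the doc change
        .build();
    out.writeUTF("hello");
    out.close();
  }
}
{code}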

bq. Are there provisions for probing FS capabilities without must 

It does not have this capability now. We can discuss it in follow-on JIRAs.

bq. move the HDFS-specific builder parameters to an HDFS-specific page

Nice suggestion. There are a few places in {{filesystem.md}} that mention HDFS 
special cases, and I did not find a good existing place to put this into the 
HDFS section of the doc. Besides, it offers a single place for users to look 
up what the Builder is capable of. Shall we put it here?

Thanks!



> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch
>
>
> After the API is finished, we should update the documentation to describe 
> the interface, capabilities, and contract that the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117684#comment-16117684
 ] 

Hadoop QA commented on HADOOP-14398:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14398 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880756/HADOOP-14398.02.patch 
|
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 68e93ab5051b 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c61f2c4 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12976/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch
>
>
> After the API is finished, we should update the documentation to describe 
> the interface, capabilities, and contract that the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117683#comment-16117683
 ] 

Hadoop QA commented on HADOOP-14741:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  4m  
9s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  4m  9s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 57s{color} | {color:orange} root: The patch generated 8 new + 246 unchanged 
- 3 fixed = 254 total (was 249) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 348 unchanged - 4 fixed = 348 total (was 352) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m  9s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
|   | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880717/HADOOP-14741-000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Lin

[jira] [Commented] (HADOOP-14743) CompositeGroupsMapping should not swallow exceptions

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117693#comment-16117693
 ] 

Hadoop QA commented on HADOOP-14743:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14743 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880745/HADOOP-14743.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e440ffadc2a7 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c61f2c4 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12974/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12974/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> CompositeGroupsMapping should not swallow exceptions
> 
>
> Key: HADOOP-14743
> URL: https://issues.apache.org/jira/browse/HADOOP-14743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-14743.001.patch, HADOOP-14743.002.patch
>
>
> {code:title=CompositeGroupsMappi

[jira] [Commented] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-07 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117697#comment-16117697
 ] 

John Zhuge commented on HADOOP-14730:
-

+1 for 006. Thanks for the patience and the great work.

javac warnings are expected.

> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch, HADOOP-14730.005.patch, 
> HADOOP-14730.006.patch
>
>
> Two unit test cases are failing in the [Azure-data-lake module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44]
>  after the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}
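
A plausible direction for the fix (illustrative; the attached patches may 
differ) is to derive the flags from the {{FsPermission}} bits instead of 
hardcoding them:

{code:java}
// Sketch only: populate the attribute flags from the permission bits
// rather than hardcoding false, false, false.
this(length, isdir, block_replication, blocksize, modification_time,
    access_time, permission, owner, group, symlink, path,
    permission.getAclBit(),           // hasAcl tracks the permission bit
    permission.getEncryptedBit(),     // isEncrypted likewise
    permission.getErasureCodedBit()); // isErasureCoded likewise
{code}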



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14730) Support protobuf FileStatus in ADLS

2017-08-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14730:

Summary: Support protobuf FileStatus in ADLS  (was: hasAcl property always 
set to false, regardless of FsPermission higher bit order )

> Support protobuf FileStatus in ADLS
> ---
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch, HADOOP-14730.005.patch, 
> HADOOP-14730.006.patch
>
>
> Two unit test cases are failing in the [Azure-data-lake module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44]
>  after the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14730) Support protobuf FileStatus in ADLS

2017-08-07 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117704#comment-16117704
 ] 

John Zhuge commented on HADOOP-14730:
-

Took a crack at the jira summary.

> Support protobuf FileStatus in ADLS
> ---
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch, HADOOP-14730.005.patch, 
> HADOOP-14730.006.patch
>
>
> Two unit test cases are failing in the [Azure-data-lake module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44]
>  after the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14730) Support protobuf FileStatus in AdlFileSystem

2017-08-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14730:

Summary: Support protobuf FileStatus in AdlFileSystem  (was: Support 
protobuf FileStatus in ADLS)

> Support protobuf FileStatus in AdlFileSystem
> 
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch, HADOOP-14730.005.patch, 
> HADOOP-14730.006.patch
>
>
> Two unit test cases are failing in the [Azure-data-lake module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44]
>  after the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117733#comment-16117733
 ] 

Hadoop QA commented on HADOOP-14741:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m 
55s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 55s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
23s{color} | {color:green} root: The patch generated 0 new + 320 unchanged - 3 
fixed = 320 total (was 323) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 348 unchanged - 4 fixed = 348 total (was 352) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880749/HADOOP-14741-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 62c8d2fa5479 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 U

[jira] [Updated] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-07 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-14741:
-
Attachment: HADOOP-14741-002.patch

Fixing compilation issues.

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14741-000.patch, HADOOP-14741-001.patch, 
> HADOOP-14741-002.patch
>
>
> Currently we have ZooKeeper-based store implementations for multiple state 
> stores, such as the RM, YARN Federation, HDFS router-based federation, and 
> RM queue configs. This jira proposes to unify the Curator-based ZK 
> communication to eliminate redundancy.
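
As a rough illustration of the proposed unification, a minimal sketch of a shared Curator wrapper that the stores could reuse (the class and method names here are hypothetical, not the committed API):
{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

/** Hypothetical shared helper wrapping the Curator boilerplate that is
 *  currently duplicated across the ZK-backed state stores. */
public class ZKClientManager implements AutoCloseable {
  private final CuratorFramework curator;

  public ZKClientManager(String connectString, int sessionTimeoutMs) {
    // Exponential backoff: 1s base sleep, at most 3 retries.
    this.curator = CuratorFrameworkFactory.newClient(connectString,
        sessionTimeoutMs, sessionTimeoutMs,
        new ExponentialBackoffRetry(1000, 3));
    this.curator.start();
  }

  public byte[] getData(String path) throws Exception {
    return curator.getData().forPath(path);
  }

  public void setOrCreate(String path, byte[] data) throws Exception {
    if (curator.checkExists().forPath(path) == null) {
      curator.create().creatingParentsIfNeeded().forPath(path, data);
    } else {
      curator.setData().forPath(path, data);
    }
  }

  @Override
  public void close() {
    curator.close();
  }
}
{code}
Each store would then hold one of these instead of constructing and retrying its own {{CuratorFramework}}.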



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117760#comment-16117760
 ] 

Hadoop QA commented on HADOOP-12077:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
16s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 39s{color} 
| {color:red} root generated 1 new + 1379 unchanged - 0 fixed = 1380 total (was 
1379) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  1s{color} | {color:orange} root: The patch generated 1 new + 154 unchanged 
- 9 fixed = 155 total (was 163) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  9s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-12077 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880734/HADOOP-12077.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 12dfaeb63f74 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bc20680 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12973/artifact/patchprocess

[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Attachment: HADOOP-14705.03.patch

Patch 3 addresses all other comments from [~shahrs87].
It also moves more static functions to {{KMSUtil}} and adds null checks on the 
params - I think it makes sense to do the additional checks now that the code 
is moved to a util class. Existing behaviors of these methods are not changed.
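
For illustration, a minimal sketch of the kind of parameter check described above (a simplified stand-in, not the actual {{KMSUtil}} code):
{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Simplified stand-in: reject null params up front so callers get a clear
// error instead of a NullPointerException deep inside the helper.
public static Map<String, Object> toJSON(String keyName, String material) {
  Objects.requireNonNull(keyName, "keyName == null");
  Objects.requireNonNull(material, "material == null");
  Map<String, Object> json = new HashMap<>();
  json.put("name", keyName);
  json.put("material", material);
  return json;
}
{code}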

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with 
> the KMS accounts for the majority of the time. This jira therefore proposes 
> to add a batched interface to re-encrypt multiple EDEKs in one call.
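
As a hedged illustration of what such a batched call could look like, a sketch of the interface method (the signature below is illustrative, not necessarily the committed API):
{code:java}
/**
 * Sketch only: re-encrypt a batch of EDEKs against the latest key version in
 * a single KMS round trip, amortizing the per-call communication overhead.
 */
void reencryptEncryptedKeys(List<EncryptedKeyVersion> ekvs)
    throws IOException, GeneralSecurityException;
{code}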



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Status: Patch Available  (was: Open)

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with 
> the KMS accounts for the majority of the time. This jira therefore proposes 
> to add a batched interface to re-encrypt multiple EDEKs in one call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117786#comment-16117786
 ] 

Xiao Chen edited comment on HADOOP-14705 at 8/8/17 3:29 AM:


Patch 3 addresses all other comments from [~shahrs87].
It also moves more static functions to {{KMSUtil}} and adds null checks on the 
params - I think it makes sense to do the additional checks now that the code 
is moved to a util class. Existing behaviors of these methods are otherwise 
not changed.


was (Author: xiaochen):
Patch 3 addresses all other comments from [~shahrs87].
It also moves more static functions to {{KMSUtil}} and adds null checks on the 
params - I think it makes sense to do the additional checks now that the code 
is moved to a util class. Existing behaviors of these methods are not changed.

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with 
> the KMS accounts for the majority of the time. This jira therefore proposes 
> to add a batched interface to re-encrypt multiple EDEKs in one call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14730) Support protobuf FileStatus in AdlFileSystem

2017-08-07 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14730:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the review, [~jzhuge]. I committed this.

> Support protobuf FileStatus in AdlFileSystem
> 
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch, HADOOP-14730.005.patch, 
> HADOOP-14730.006.patch
>
>
> Two unit test cases are failing in the [Azure-data-lake module
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
> caused by the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}:
> {code:java}
> public FileStatus(long length, boolean isdir,
>     int block_replication,
>     long blocksize, long modification_time, long access_time,
>     FsPermission permission, String owner, String group,
>     Path symlink,
>     Path path) {
>   // The trailing booleans (hasAcl, isEncrypted, isErasureCoded) are
>   // unconditionally false here.
>   this(length, isdir, block_replication, blocksize, modification_time,
>       access_time, permission, owner, group, symlink, path,
>       false, false, false);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117844#comment-16117844
 ] 

Hadoop QA commented on HADOOP-14705:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-common-project: The patch generated 6 new 
+ 151 unchanged - 2 fixed = 157 total (was 153) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
36s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Nullcheck of keyName at line 118 of value previously dereferenced in 
org.apache.hadoop.util.KMSUtil.toJSON(KeyProviderCryptoExtension$EncryptedKeyVersion,
 String)  At KMSUtil.java:118 of value previously dereferenced in 
org.apache.hadoop.util.KMSUtil.toJSON(KeyProviderCryptoExtension$EncryptedKeyVersion,
 String)  At KMSUtil.java:[line 116] |
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880779/HADOOP-14705.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 51848e681b8a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
|

[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117856#comment-16117856
 ] 

Hadoop QA commented on HADOOP-14741:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
14s{color} | {color:green} root: The patch generated 0 new + 321 unchanged - 3 
fixed = 321 total (was 324) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 348 unchanged - 4 fixed = 348 total (was 352) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  9s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.TestKDiag |
|   | hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
|   | hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels |
|   | hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA |
|   | hadoop.yarn.server.resourcemanager.TestRMStoreCommands |
|   | hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore |
|   | hadoop.yarn.server.resourcemanager.TestReservationSystemW

[jira] [Created] (HADOOP-14744) RM and NM killed automatically

2017-08-07 Thread Manish (JIRA)
Manish created HADOOP-14744:
---

 Summary: RM and NM killed automatically
 Key: HADOOP-14744
 URL: https://issues.apache.org/jira/browse/HADOOP-14744
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 2.6.0
 Environment: We have an Apache Hadoop setup with ZooKeeper enabled.
Reporter: Manish


We have an Apache Hadoop setup with ZooKeeper enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Attachment: HADOOP-14705.03.patch

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with 
> the KMS accounts for the majority of the time. This jira therefore proposes 
> to add a batched interface to re-encrypt multiple EDEKs in one call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Attachment: (was: HADOOP-14705.03.patch)

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with 
> the KMS accounts for the majority of the time. This jira therefore proposes 
> to add a batched interface to re-encrypt multiple EDEKs in one call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14726) Remove FileStatus#isDir

2017-08-07 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14726:
---
Attachment: HADOOP-14726.002.patch

OK. v002 makes {{isDir}} final.
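
Presumably something along these lines - a sketch, assuming the deprecated method simply delegates to {{isDirectory()}}:
{code:java}
/** @deprecated Use {@link #isDirectory()} instead. */
@Deprecated
public final boolean isDir() {
  // final prevents subclasses from overriding the deprecated method.
  return isDirectory();
}
{code}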

> Remove FileStatus#isDir
> ---
>
> Key: HADOOP-14726
> URL: https://issues.apache.org/jira/browse/HADOOP-14726
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-14726.000.patch, HADOOP-14726.001.patch, 
> HADOOP-14726.002.patch
>
>
> FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14744) RM and NM killed automatically

2017-08-07 Thread Manish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manish updated HADOOP-14744:

Attachment: RM.txt

Attaching the Resource Manager log.

> RM and NM killed automatically
> --
>
> Key: HADOOP-14744
> URL: https://issues.apache.org/jira/browse/HADOOP-14744
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.6.0
> Environment: We have an Apache Hadoop setup with ZooKeeper enabled.
>Reporter: Manish
> Attachments: RM.txt
>
>
> We have an Apache Hadoop setup with ZooKeeper enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117919#comment-16117919
 ] 

Hadoop QA commented on HADOOP-14705:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 
151 unchanged - 2 fixed = 151 total (was 153) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
5s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880786/HADOOP-14705.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 800ff8c0560a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 55a181f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12979/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12979/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/Pr