[jira] [Commented] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835311#comment-16835311
 ] 

Akira Ajisaka commented on HADOOP-16299:


I could successfully run the LdapGroupsMapping* unit tests in the following 3 
environments.
* OpenJDK 8
* OpenJDK 11.0.3
* OpenJDK 11.0.3 + {{-Djavac.version=11}} option

> [JDK 11] Build fails without specifying -Djavac.version=11
> --
>
> Key: HADOOP-16299
> URL: https://issues.apache.org/jira/browse/HADOOP-16299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16299.001.patch, HADOOP-16299.002.patch
>
>
> {{mvn install -DskipTests}} fails on Java 11 without specifying 
> {{-Djavac.version=11}}.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Fatal error compiling: error: option 
> --add-exports not allowed with target 1.8 -> [Help 1]
> {noformat}
> HADOOP-15941 added the {{--add-exports}} option when the Java version is 11, 
> but the option is not allowed when the javac target version is 1.8.
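
For illustration, the same restriction can be reproduced with javac directly 
(a minimal sketch; the file name and the exported package are placeholders, 
and other warnings are elided):
{noformat}
$ javac --add-exports java.base/jdk.internal.misc=ALL-UNNAMED \
    -source 1.8 -target 1.8 Hello.java
error: option --add-exports not allowed with target 1.8
{noformat}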



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16302) Fix typo on Hadoop Site Help dropdown menu

2019-05-07 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835299#comment-16835299
 ] 

Dinesh Chitlangia commented on HADOOP-16302:


[~bharatviswa] - Thanks for the quick review/commit.

> Fix typo on Hadoop Site Help dropdown menu
> --
>
> Key: HADOOP-16302
> URL: https://issues.apache.org/jira/browse/HADOOP-16302
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: asf-site
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: Screen Shot 2019-05-07 at 11.57.01 PM.png
>
>
> On hadoop.apache.org, the Help tab on the top menu bar has Sponsorship spelt 
> as Sponsorshop.
> This jira aims to fix this typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16302) Fix typo on Hadoop Site Help dropdown menu

2019-05-07 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HADOOP-16302.
-
Resolution: Fixed

> Fix typo on Hadoop Site Help dropdown menu
> --
>
> Key: HADOOP-16302
> URL: https://issues.apache.org/jira/browse/HADOOP-16302
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: asf-site
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: Screen Shot 2019-05-07 at 11.57.01 PM.png
>
>
> On hadoop.apache.org, the Help tab on the top menu bar has Sponsorship spelt 
> as Sponsorshop.
> This jira aims to fix this typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16302) Fix typo on Hadoop Site Help dropdown menu

2019-05-07 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835297#comment-16835297
 ] 

Bharat Viswanadham commented on HADOOP-16302:
-

Thank you [~dineshchitlangia] for the fix.

I have committed this.

> Fix typo on Hadoop Site Help dropdown menu
> --
>
> Key: HADOOP-16302
> URL: https://issues.apache.org/jira/browse/HADOOP-16302
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: asf-site
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: Screen Shot 2019-05-07 at 11.57.01 PM.png
>
>
> On hadoop.apache.org, the Help tab on the top menu bar has Sponsorship spelt 
> as Sponsorshop.
> This jira aims to fix this typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16293:
---
Status: Patch Available  (was: Open)

> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-16293-001.patch
>
>
> AuthenticationFilterInitializer doc has speudo instead of pseudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16293:
---
Attachment: HADOOP-16293-001.patch

> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-16293-001.patch
>
>
> AuthenticationFilterInitializer doc has speudo instead of pseudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned HADOOP-16293:
--

Assignee: Prabhu Joseph

> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
>
> AuthenticationFilterInitializer doc has speudo instead of pseudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16302) Fix typo on Hadoop Site Help dropdown menu

2019-05-07 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-16302:
---
Priority: Minor  (was: Major)

> Fix typo on Hadoop Site Help dropdown menu
> --
>
> Key: HADOOP-16302
> URL: https://issues.apache.org/jira/browse/HADOOP-16302
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: asf-site
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: Screen Shot 2019-05-07 at 11.57.01 PM.png
>
>
> On hadoop.apache.org, the Help tab on the top menu bar has Sponsorship spelt 
> as Sponsorshop.
> This jira aims to fix this typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16302) Fix typo on Hadoop Site Help dropdown menu

2019-05-07 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-16302:
---
Summary: Fix typo on Hadoop Site Help dropdown menu  (was: Fix typo on 
Hadoop Site > Help dropdown menu)

> Fix typo on Hadoop Site Help dropdown menu
> --
>
> Key: HADOOP-16302
> URL: https://issues.apache.org/jira/browse/HADOOP-16302
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: asf-site
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: Screen Shot 2019-05-07 at 11.57.01 PM.png
>
>
> On hadoop.apache.org, the Help tab on the top menu bar has Sponsorship spelt 
> as Sponsorshop.
> This jira aims to fix this typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16302) Fix typo on Hadoop Site > Help dropdown menu

2019-05-07 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-16302:
---
Attachment: Screen Shot 2019-05-07 at 11.57.01 PM.png

> Fix typo on Hadoop Site > Help dropdown menu
> 
>
> Key: HADOOP-16302
> URL: https://issues.apache.org/jira/browse/HADOOP-16302
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: asf-site
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: Screen Shot 2019-05-07 at 11.57.01 PM.png
>
>
> On hadoop.apache.org, the Help tab on the top menu bar has Sponsorship spelt 
> as Sponsorshop.
> This jira aims to fix this typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16302) Fix typo on Hadoop Site > Help dropdown menu

2019-05-07 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HADOOP-16302:
--

 Summary: Fix typo on Hadoop Site > Help dropdown menu
 Key: HADOOP-16302
 URL: https://issues.apache.org/jira/browse/HADOOP-16302
 Project: Hadoop Common
  Issue Type: Bug
  Components: site
Affects Versions: asf-site
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


On hadoop.apache.org, the Help tab on the top menu bar has Sponsorship spelt 
as Sponsorshop.

This jira aims to fix this typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #795: HDDS-1491. Ozone KeyInputStream seek() should not read the chunk file.

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #795: HDDS-1491. Ozone KeyInputStream seek() 
should not read the chunk file.
URL: https://github.com/apache/hadoop/pull/795#issuecomment-490319561
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for branch |
   | +1 | mvninstall | 413 | trunk passed |
   | +1 | compile | 192 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 895 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 120 | trunk passed |
   | 0 | spotbugs | 239 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 423 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 396 | the patch passed |
   | +1 | compile | 224 | the patch passed |
   | +1 | javac | 224 | the patch passed |
   | +1 | checkstyle | 62 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 727 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 121 | the patch passed |
   | +1 | findbugs | 450 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 150 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1352 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 34 | The patch generated 1 ASF License warnings. |
   | | | 5870 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/795 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1d638efcf0bf 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c4be3ea |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/3/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/3/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 5417 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-ozone/client 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16270) [JDK 11] Remove unintentional override of the version of Maven Dependency Plugin

2019-05-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835251#comment-16835251
 ] 

Hudson commented on HADOOP-16270:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16521 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16521/])
HADOOP-16270. [JDK 11] Remove unintentional override of the version of Maven 
Dependency Plugin (aajisaka: rev 66c2a4ef8959daa2936a2c102f86157805634f19)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker/pom.xml
* (edit) hadoop-project/pom.xml


> [JDK 11] Remove unintentional override of the version of Maven Dependency 
> Plugin
> 
>
> Key: HADOOP-16270
> URL: https://issues.apache.org/jira/browse/HADOOP-16270
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-16270-001.patch
>
>
> HADOOP-14979 upgraded the Maven Dependency Plugin to 3.0.2, but the version 
> was overridden to 3.0.1 by YARN-7129, and the following error occurred again.
> {noformat}
> [INFO] --- maven-dependency-plugin:3.0.1:list (deplist) @ hadoop-streaming ---
> java.lang.NoSuchMethodException: 
> jdk.internal.module.ModuleReferenceImpl.descriptor()
> at java.base/java.lang.Class.getDeclaredMethod(Class.java:2476)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getModuleDescriptor(DependencyStatusSets.java:272)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.buildArtifactListOutput(DependencyStatusSets.java:227)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getOutput(DependencyStatusSets.java:165)
> at 
> org.apache.maven.plugins.dependency.resolvers.ResolveDependenciesMojo.doExecute(ResolveDependenciesMojo.java:90)
> at 
> org.apache.maven.plugins.dependency.AbstractDependencyMojo.execute(AbstractDependencyMojo.java:143)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> {noformat}
> Let's upgrade the plugin version to fix the build failure in Java 11.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16284) KMS Cache Miss Storm

2019-05-07 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835245#comment-16835245
 ] 

Wei-Chiu Chuang edited comment on HADOOP-16284 at 5/8/19 2:00 AM:
--

{quote}Do you know why the number of keys is relevant? Is the key cache 
evicting them due to size or the accesses for a particular key are more 
distributed over time vs a few highly contended keys?
{quote}
I don't manage the KMS key provider backend (CKTS), so I am afraid I can't offer 
the implementation details. IIRC, the minimum latency we observed was around 
100 ms (each KMS-to-CKTS connection involves PGP computation and other work, so 
it tends to be slow). I am not sure whether the latency is proportional to the 
number of encryption keys we have, but it is proportional to the number of 
KMSes, because the backend has a global write-lock design and only one request 
is allowed at a time.

We saw key provider latency going as high as 20 seconds each during testing 
when there were 4 KMSes. Consider the extreme case where you start the KMS cold 
and have many encryption zones/keys: it is likely to trigger multiple cache 
misses consecutively immediately after restart. In this case, we observed a KMS 
outage for several minutes after a KMS restart. After the KMS stabilizes, some 
encryption keys are rarely used, and when they are used they trigger cache 
misses from time to time.

!4 kms, no KTS patch.png|width=512!

Additionally, there is already a production workload on the KMS, and the KMS 
runs out of threads easily. We actually saw the "No content to map" exception 
despite very low CPU utilization, and we were puzzled at first.


was (Author: jojochuang):
{quote}Do you know why the number of keys is relevant? Is the key cache 
evicting them due to size or the accesses for a particular key are more 
distributed over time vs a few highly contended keys?
{quote}
I don't manage the KMS key provider backend (CKTS) so I am afraid I can't offer 
the implementation details. IIRC, the minimum latency we observed was around 
100 ms (each KMS to CKTS connection involves PGP computation and other stuff so 
tend to be slow). I am not very sure if the latency is proportional to the 
number of encryption keys we have, but it's proportional to the number of KMS, 
because the backend has a global write lock design, and only one request is 
allowed at a time.

We saw key provider latency going as high as 20 seconds each during test when 
there are 4 KMSes. Consider an extreme case when you start KMS cold and that 
you have many encryption zone/keys, it is likely to trigger multiple cache 
misses consecutively immediately after restart. In this case, we observed KMS 
outage for several minutes after a KMS restart. After the KMS stabilizes, some 
encryption keys are rarely used and when they are used, they trigger cache miss 
from time to time.

!4 kms, no KTS patch.png!

Additionally, there's already a production workload for KMS, and KMS runs out 
of threads easily. We actually saw "No content to map" exception despite very 
low CPU utilization, and we were puzzled at first.

> KMS Cache Miss Storm
> 
>
> Key: HADOOP-16284
> URL: https://issues.apache.org/jira/browse/HADOOP-16284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
> Environment: CDH 5.13.1, Kerberized, Cloudera Keytrustee Server
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: 4 kms, no KTS patch.png
>
>
> We recently stumbled upon a performance issue with KMS, where it occasionally 
> exhibited a "No content to map" error (this cluster ran an old version that 
> doesn't have HADOOP-14841) and jobs crashed. *We bumped the number of KMSes 
> from 2 to 4, and the situation got even worse.*
> Later, we realized this cluster had a few hundred encryption zones and a few 
> hundred encryption keys. This is pretty unusual because most of the 
> deployments known to us have at most a dozen keys. So in terms of number of 
> keys, this cluster is 1-2 orders of magnitude higher than anyone else.
> The high number of encryption keys increases the likelihood of key cache 
> misses in KMS. In Cloudera's setup, each cache miss forces KMS to sync with 
> its backend, the Cloudera Keytrustee Server. Plus, the high number of KMSes 
> amplifies the latency, effectively causing a [cache miss 
> storm|https://en.wikipedia.org/wiki/Cache_stampede].
> We were able to reproduce this issue with KMS-o-meter (HDFS-14312) - I will 
> surely come up with a better name later - and discovered a scalability bug in 
> CKTS. The fix was verified again with the tool.
> Filing this bug so the community is aware of this issue. I don't have a 
> solution for now in KMS, but we want to address this scalability problem in 
> the near future because we are seeing use cases that require thousands of 
> encryption keys.

[jira] [Commented] (HADOOP-16284) KMS Cache Miss Storm

2019-05-07 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835245#comment-16835245
 ] 

Wei-Chiu Chuang commented on HADOOP-16284:
--

{quote}Do you know why the number of keys is relevant? Is the key cache 
evicting them due to size or the accesses for a particular key are more 
distributed over time vs a few highly contended keys?
{quote}
I don't manage the KMS key provider backend (CKTS), so I am afraid I can't offer 
the implementation details. IIRC, the minimum latency we observed was around 
100 ms (each KMS-to-CKTS connection involves PGP computation and other work, so 
it tends to be slow). I am not sure whether the latency is proportional to the 
number of encryption keys we have, but it is proportional to the number of 
KMSes, because the backend has a global write-lock design and only one request 
is allowed at a time.

We saw key provider latency going as high as 20 seconds each during testing 
when there were 4 KMSes. Consider the extreme case where you start the KMS cold 
and have many encryption zones/keys: it is likely to trigger multiple cache 
misses consecutively immediately after restart. In this case, we observed a KMS 
outage for several minutes after a KMS restart. After the KMS stabilizes, some 
encryption keys are rarely used, and when they are used they trigger cache 
misses from time to time.

!4 kms, no KTS patch.png!

Additionally, there is already a production workload on the KMS, and the KMS 
runs out of threads easily. We actually saw the "No content to map" exception 
despite very low CPU utilization, and we were puzzled at first.
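
One common mitigation for this kind of stampede is to coalesce concurrent 
lookups of the same key so that only one backend fetch per key is in flight at 
a time. A minimal sketch using Guava's {{LoadingCache}} (illustrative only; the 
{{Backend}} interface and all names here are assumptions, not the actual KMS 
code):
{code:java}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

public class CoalescingKeyCache {
  /** Hypothetical stand-in for the real key provider backend call. */
  public interface Backend {
    byte[] fetchKey(String keyName) throws Exception;
  }

  private final LoadingCache<String, byte[]> cache;

  public CoalescingKeyCache(final Backend backend) {
    this.cache = CacheBuilder.newBuilder()
        .maximumSize(10_000)                     // bound memory use
        .expireAfterWrite(10, TimeUnit.MINUTES)  // drop stale entries
        .build(new CacheLoader<String, byte[]>() {
          @Override
          public byte[] load(String keyName) throws Exception {
            // Guava runs at most one load per key at a time; concurrent
            // callers of get(key) block on this single result instead of
            // each issuing their own backend request.
            return backend.fetchKey(keyName);
          }
        });
  }

  public byte[] getKey(String keyName) throws Exception {
    return cache.get(keyName);
  }
}
{code}
This does not remove the cold-start burst entirely (distinct keys still miss 
independently), but it caps the backend load at one request per key rather than 
one per caller.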

> KMS Cache Miss Storm
> 
>
> Key: HADOOP-16284
> URL: https://issues.apache.org/jira/browse/HADOOP-16284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
> Environment: CDH 5.13.1, Kerberized, Cloudera Keytrustee Server
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: 4 kms, no KTS patch.png
>
>
> We recently stumbled upon a performance issue with KMS, where it occasionally 
> exhibited a "No content to map" error (this cluster ran an old version that 
> doesn't have HADOOP-14841) and jobs crashed. *We bumped the number of KMSes 
> from 2 to 4, and the situation got even worse.*
> Later, we realized this cluster had a few hundred encryption zones and a few 
> hundred encryption keys. This is pretty unusual because most of the 
> deployments known to us have at most a dozen keys. So in terms of number of 
> keys, this cluster is 1-2 orders of magnitude higher than anyone else.
> The high number of encryption keys increases the likelihood of key cache 
> misses in KMS. In Cloudera's setup, each cache miss forces KMS to sync with 
> its backend, the Cloudera Keytrustee Server. Plus, the high number of KMSes 
> amplifies the latency, effectively causing a [cache miss 
> storm|https://en.wikipedia.org/wiki/Cache_stampede].
> We were able to reproduce this issue with KMS-o-meter (HDFS-14312) - I will 
> surely come up with a better name later - and discovered a scalability bug in 
> CKTS. The fix was verified again with the tool.
> Filing this bug so the community is aware of this issue. I don't have a 
> solution for now in KMS, but we want to address this scalability problem in 
> the near future because we are seeing use cases that require thousands of 
> encryption keys.
> 
> On a side note, 4 KMSes don't work well without HADOOP-14445 (and subsequent 
> fixes). A MapReduce job acquires at most 3 KMS delegation tokens, and so for 
> cases such as distcp it would fail to reach the 4th KMS on the remote 
> cluster. I imagine similar issues exist for other execution engines, but I 
> didn't test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16115) [JDK 11] TestHttpServer#testJersey fails

2019-05-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835244#comment-16835244
 ] 

Akira Ajisaka commented on HADOOP-16115:


bq. do you mean you've run javac -source 11 -target 11?
Yes. Apache Hadoop can now be compiled on JDK 11. The following command runs 
successfully on trunk:
{noformat}
$ mvn install -DskipTests -Djavac.version=11
{noformat}
{{-Djavac.version=11}} sets both the source and the target version to 11.
https://github.com/apache/hadoop/blob/release-3.2.0-RC1/hadoop-project/pom.xml#L1511

> [JDK 11] TestHttpServer#testJersey fails
> 
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.http.TestHttpServer
> [ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 5.954 s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
> [ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 
> 0.128 s  <<< ERROR!
> java.io.IOException: Server returned HTTP response code: 500 for URL: 
> http://localhost:40339/jersey/foo?op=bar
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
>   at 
> org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
>   at 
> org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16284) KMS Cache Miss Storm

2019-05-07 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16284:
-
Attachment: 4 kms, no KTS patch.png

> KMS Cache Miss Storm
> 
>
> Key: HADOOP-16284
> URL: https://issues.apache.org/jira/browse/HADOOP-16284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
> Environment: CDH 5.13.1, Kerberized, Cloudera Keytrustee Server
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: 4 kms, no KTS patch.png
>
>
> We recently stumbled upon a performance issue with KMS, where it occasionally 
> exhibited a "No content to map" error (this cluster ran an old version that 
> doesn't have HADOOP-14841) and jobs crashed. *We bumped the number of KMSes 
> from 2 to 4, and the situation got even worse.*
> Later, we realized this cluster had a few hundred encryption zones and a few 
> hundred encryption keys. This is pretty unusual because most of the 
> deployments known to us have at most a dozen keys. So in terms of number of 
> keys, this cluster is 1-2 orders of magnitude higher than anyone else.
> The high number of encryption keys increases the likelihood of key cache 
> misses in KMS. In Cloudera's setup, each cache miss forces KMS to sync with 
> its backend, the Cloudera Keytrustee Server. Plus, the high number of KMSes 
> amplifies the latency, effectively causing a [cache miss 
> storm|https://en.wikipedia.org/wiki/Cache_stampede].
> We were able to reproduce this issue with KMS-o-meter (HDFS-14312) - I will 
> surely come up with a better name later - and discovered a scalability bug in 
> CKTS. The fix was verified again with the tool.
> Filing this bug so the community is aware of this issue. I don't have a 
> solution for now in KMS, but we want to address this scalability problem in 
> the near future because we are seeing use cases that require thousands of 
> encryption keys.
> 
> On a side note, 4 KMSes don't work well without HADOOP-14445 (and subsequent 
> fixes). A MapReduce job acquires at most 3 KMS delegation tokens, and so for 
> cases such as distcp it would fail to reach the 4th KMS on the remote 
> cluster. I imagine similar issues exist for other execution engines, but I 
> didn't test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16270) [JDK 11] Remove unintentional override of the version of Maven Dependency Plugin

2019-05-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835238#comment-16835238
 ] 

Akira Ajisaka commented on HADOOP-16270:


Committed this to trunk. Thanks [~risyomei] for the contribution!

> [JDK 11] Remove unintentional override of the version of Maven Dependency 
> Plugin
> 
>
> Key: HADOOP-16270
> URL: https://issues.apache.org/jira/browse/HADOOP-16270
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-16270-001.patch
>
>
> HADOOP-14979 upgraded the Maven Dependency Plugin to 3.0.2, but the version 
> was overridden to 3.0.1 by YARN-7129, and the following error occurred again.
> {noformat}
> [INFO] --- maven-dependency-plugin:3.0.1:list (deplist) @ hadoop-streaming ---
> java.lang.NoSuchMethodException: 
> jdk.internal.module.ModuleReferenceImpl.descriptor()
> at java.base/java.lang.Class.getDeclaredMethod(Class.java:2476)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getModuleDescriptor(DependencyStatusSets.java:272)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.buildArtifactListOutput(DependencyStatusSets.java:227)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getOutput(DependencyStatusSets.java:165)
> at 
> org.apache.maven.plugins.dependency.resolvers.ResolveDependenciesMojo.doExecute(ResolveDependenciesMojo.java:90)
> at 
> org.apache.maven.plugins.dependency.AbstractDependencyMojo.execute(AbstractDependencyMojo.java:143)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> {noformat}
> Let's upgrade the plugin version to fix the build failure in Java 11.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16270) [JDK 11] Remove unintentional override of the version of Maven Dependency Plugin

2019-05-07 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16270:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> [JDK 11] Remove unintentional override of the version of Maven Dependency 
> Plugin
> 
>
> Key: HADOOP-16270
> URL: https://issues.apache.org/jira/browse/HADOOP-16270
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-16270-001.patch
>
>
> HADOOP-14979 upgraded the Maven Dependency Plugin to 3.0.2, but the version 
> was overridden to 3.0.1 by YARN-7129, and the following error occurred again.
> {noformat}
> [INFO] --- maven-dependency-plugin:3.0.1:list (deplist) @ hadoop-streaming ---
> java.lang.NoSuchMethodException: 
> jdk.internal.module.ModuleReferenceImpl.descriptor()
> at java.base/java.lang.Class.getDeclaredMethod(Class.java:2476)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getModuleDescriptor(DependencyStatusSets.java:272)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.buildArtifactListOutput(DependencyStatusSets.java:227)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getOutput(DependencyStatusSets.java:165)
> at 
> org.apache.maven.plugins.dependency.resolvers.ResolveDependenciesMojo.doExecute(ResolveDependenciesMojo.java:90)
> at 
> org.apache.maven.plugins.dependency.AbstractDependencyMojo.execute(AbstractDependencyMojo.java:143)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> {noformat}
> Let's upgrade the plugin version to fix the build failure in Java 11.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16270) [JDK 11] Remove unintentional override of the version of Maven Dependency Plugin

2019-05-07 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16270:
---
Summary: [JDK 11] Remove unintentional override of the version of Maven 
Dependency Plugin  (was: [JDK11] Upgrade Maven Dependency Plugin to the latest 
version)

> [JDK 11] Remove unintentional override of the version of Maven Dependency 
> Plugin
> 
>
> Key: HADOOP-16270
> URL: https://issues.apache.org/jira/browse/HADOOP-16270
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-16270-001.patch
>
>
> HADOOP-14979 upgraded the Maven Dependency Plugin to 3.0.2, but the version 
> was overridden to 3.0.1 by YARN-7129, and the following error occurred again.
> {noformat}
> [INFO] --- maven-dependency-plugin:3.0.1:list (deplist) @ hadoop-streaming ---
> java.lang.NoSuchMethodException: 
> jdk.internal.module.ModuleReferenceImpl.descriptor()
> at java.base/java.lang.Class.getDeclaredMethod(Class.java:2476)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getModuleDescriptor(DependencyStatusSets.java:272)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.buildArtifactListOutput(DependencyStatusSets.java:227)
> at 
> org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getOutput(DependencyStatusSets.java:165)
> at 
> org.apache.maven.plugins.dependency.resolvers.ResolveDependenciesMojo.doExecute(ResolveDependenciesMojo.java:90)
> at 
> org.apache.maven.plugins.dependency.AbstractDependencyMojo.execute(AbstractDependencyMojo.java:143)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> {noformat}
> Let's upgrade the plugin version to fix the build failure in Java 11.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16238) Add the possibility to set SO_REUSEADDR in IPC Server Listener

2019-05-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835225#comment-16835225
 ] 

Hudson commented on HADOOP-16238:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16520 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16520/])
HADOOP-16238. Add the possibility to set SO_REUSEADDR in IPC Server (weichiu: 
rev 713e8a27aea03f302b7a7d58769c967958f6e46a)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Add the possibility to set SO_REUSEADDR in IPC Server Listener
> -
>
> Key: HADOOP-16238
> URL: https://issues.apache.org/jira/browse/HADOOP-16238
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16238-001.patch, HADOOP-16238-002.patch, 
> HADOOP-16238-003.patch, HADOOP-16238-004.patch, HADOOP-16238-005.patch
>
>
> Currently we can't enable SO_REUSEADDR in the IPC Server. In some 
> circumstances this would be desirable; see the explanation here:
> [https://developer.ibm.com/tutorials/l-sockpit/#pitfall-3-address-in-use-error-eaddrinuse-]
> It also occasionally causes problems in the test case 
> {{TestMiniMRClientCluster.testRestart}}:
> {noformat}
> 2019-04-04 11:21:31,896 INFO [main] service.AbstractService 
> (AbstractService.java:noteFailure(273)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService failed in state 
> STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
>  at 
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:138)
>  at 
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
>  at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.startServer(AdminService.java:178)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceStart(AdminService.java:165)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1244)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:355)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:127)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:493)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:312)
>  at 
> org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster.serviceStart(MiniMRYarnCluster.java:210)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.mapred.MiniMRYarnClusterAdapter.restart(MiniMRYarnClusterAdapter.java:73)
>  at 
> org.apache.hadoop.mapred.TestMiniMRClientCluster.testRestart(TestMiniMRClientCluster.java:114)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){noformat}
>  
> At least for testing, having this socket option enabled is beneficial. We 
> could enable this with a new property like {{ipc.server.reuseaddr}}.
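
A minimal sketch of what the opt-in could look like on the listener socket 
(illustrative only; the flag wiring and the names are assumptions, not the 
committed change):
{code:java}
import java.net.InetSocketAddress;
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;

public class ReuseAddrSketch {
  public static void main(String[] args) throws Exception {
    // Would be read from a config key such as ipc.server.reuseaddr.
    boolean reuseAddr = true;
    ServerSocketChannel channel = ServerSocketChannel.open();
    if (reuseAddr) {
      // Must be set before bind(): allows rebinding the port while old
      // connections from a previous run still linger in TIME_WAIT.
      channel.setOption(StandardSocketOptions.SO_REUSEADDR, true);
    }
    channel.bind(new InetSocketAddress(0));
    System.out.println("Bound to " + channel.getLocalAddress());
    channel.close();
  }
}
{code}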



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #792: HDDS-1474. ozone.scm.datanode.id config should take path for a dir

2019-05-07 Thread GitBox
hanishakoneru commented on a change in pull request #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#discussion_r281880291
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##
 @@ -296,11 +296,10 @@
 
   public static final int OZONE_SCM_DEFAULT_PORT =
   OZONE_SCM_DATANODE_PORT_DEFAULT;
-  // File Name and path where datanode ID is to written to.
+  // The path where datanode ID is to be written to.
   // if this value is not set then container startup will fail.
-  public static final String OZONE_SCM_DATANODE_ID = "ozone.scm.datanode.id";
-
-  public static final String OZONE_SCM_DATANODE_ID_PATH_DEFAULT = 
"datanode.id";
+  public static final String OZONE_SCM_DATANODE_ID_DIR =
+  "ozone.scm.datanode.id";
 
 Review comment:
   Can we add the "dir" part to the key string also. This is what would be used 
to configure this property in clusters.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16238) Add the possibility to set SO_REUSEADDR in IPC Server Listener

2019-05-07 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16238:
-
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. If we want, it can be cherry-picked into lower branches.

> Add the possibility to set SO_REUSEADDR in IPC Server Listener
> -
>
> Key: HADOOP-16238
> URL: https://issues.apache.org/jira/browse/HADOOP-16238
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16238-001.patch, HADOOP-16238-002.patch, 
> HADOOP-16238-003.patch, HADOOP-16238-004.patch, HADOOP-16238-005.patch
>
>
> Currently we can't enable SO_REUSEADDR in the IPC Server. In some 
> circumstances this would be desirable; see the explanation here:
> [https://developer.ibm.com/tutorials/l-sockpit/#pitfall-3-address-in-use-error-eaddrinuse-]
> It also occasionally causes problems in the test case 
> {{TestMiniMRClientCluster.testRestart}}:
> {noformat}
> 2019-04-04 11:21:31,896 INFO [main] service.AbstractService 
> (AbstractService.java:noteFailure(273)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService failed in state 
> STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
>  at 
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:138)
>  at 
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
>  at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.startServer(AdminService.java:178)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceStart(AdminService.java:165)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1244)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:355)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:127)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:493)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:312)
>  at 
> org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster.serviceStart(MiniMRYarnCluster.java:210)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.mapred.MiniMRYarnClusterAdapter.restart(MiniMRYarnClusterAdapter.java:73)
>  at 
> org.apache.hadoop.mapred.TestMiniMRClientCluster.testRestart(TestMiniMRClientCluster.java:114)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){noformat}
>  
> At least for testing, having this socket option enabled is beneficial. We 
> could enable this with a new property like {{ipc.server.reuseaddr}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #797: HDDS-1489. Unnecessary log messages on console with Ozone shell.

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #797: HDDS-1489. Unnecessary log messages on 
console with Ozone shell.
URL: https://github.com/apache/hadoop/pull/797#issuecomment-490280292
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 54 | Maven dependency ordering for branch |
   | +1 | mvninstall | 412 | trunk passed |
   | +1 | compile | 231 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 814 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 403 | the patch passed |
   | +1 | compile | 221 | the patch passed |
   | +1 | javac | 221 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 706 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 127 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 151 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1126 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 38 | The patch generated 1 ASF License warnings. |
   | | | 5185 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/797 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient shellcheck shelldocs |
   | uname | Linux 59a3b19cf00a 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eb9c890 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/3/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/3/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 4123 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist hadoop-ozone/ozonefs U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #799: HDDS-1451 : SCMBlockManager findPipeline and createPipeline are not lock protected.

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #799: HDDS-1451 : SCMBlockManager findPipeline 
and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799#issuecomment-490276136
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 415 | trunk passed |
   | +1 | compile | 197 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 883 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 123 | trunk passed |
   | 0 | spotbugs | 242 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 425 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 400 | the patch passed |
   | +1 | compile | 202 | the patch passed |
   | +1 | javac | 202 | the patch passed |
   | +1 | checkstyle | 55 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 798 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 135 | the patch passed |
   | +1 | findbugs | 472 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 173 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1435 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 41 | The patch generated 1 ASF License warnings. |
   | | | 5991 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/799 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 79fd163f71d1 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7f0e2c6 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/1/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 5076 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16115) [JDK 11] TestHttpServer#testJersey fails

2019-05-07 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835160#comment-16835160
 ] 

Siyao Meng commented on HADOOP-16115:
-

[~ajisakaa] Thanks for verifying this.

Regarding the javac target version, do you mean you've run *javac -source 11 
-target 11*? I thought Hadoop still doesn't compile on JDK 11.

> [JDK 11] TestHttpServer#testJersey fails
> 
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.http.TestHttpServer
> [ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 5.954 s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
> [ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 
> 0.128 s  <<< ERROR!
> java.io.IOException: Server returned HTTP response code: 500 for URL: 
> http://localhost:40339/jersey/foo?op=bar
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
>   at 
> org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
>   at 
> org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #798: HDDS-1499. OzoneManager Cache.

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #798: HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-490270992
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 398 | trunk passed |
   | +1 | compile | 201 | trunk passed |
   | +1 | checkstyle | 56 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 970 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 129 | trunk passed |
   | 0 | spotbugs | 241 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 432 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 402 | the patch passed |
   | +1 | compile | 227 | the patch passed |
   | +1 | javac | 227 | the patch passed |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 778 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 131 | the patch passed |
   | +1 | findbugs | 478 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 167 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1605 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 35 | The patch generated 1 ASF License warnings. |
   | | | 6340 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/798 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c340489b69ee 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7f0e2c6 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/3/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/3/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 4533 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 merged pull request #797: HDDS-1489. Unnecessary log messages on console with Ozone shell.

2019-05-07 Thread GitBox
arp7 merged pull request #797: HDDS-1489. Unnecessary log messages on console 
with Ozone shell.
URL: https://github.com/apache/hadoop/pull/797
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #788: HDDS-1475 : Fix OzoneContainer start method.

2019-05-07 Thread GitBox
bharatviswa504 commented on issue #788: HDDS-1475 : Fix OzoneContainer start 
method.
URL: https://github.com/apache/hadoop/pull/788#issuecomment-490256355
 
 
   Thank you @avijayanhwx for the contribution.
   I will commit this shortly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #788: HDDS-1475 : Fix OzoneContainer start method.

2019-05-07 Thread GitBox
bharatviswa504 merged pull request #788: HDDS-1475 : Fix OzoneContainer start 
method.
URL: https://github.com/apache/hadoop/pull/788
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #788: HDDS-1475 : Fix OzoneContainer start method.

2019-05-07 Thread GitBox
bharatviswa504 commented on issue #788: HDDS-1475 : Fix OzoneContainer start 
method.
URL: https://github.com/apache/hadoop/pull/788#issuecomment-490255947
 
 
   Test failures are not related to this patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16295) FileUtil.replaceFile() throws an IOException when it is interrupted, which leads to an unnecessary disk check

2019-05-07 Thread eBugs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eBugs updated HADOOP-16295:
---
Summary: FileUtil.replaceFile() throws an IOException when it is 
interrupted, which leads to an unnecessary disk check  (was: 
FileUtil.replaceFile() throws an IOException when it is interrupted)

> FileUtil.replaceFile() throws an IOException when it is interrupted, which 
> leads to an unnecessary disk check
> -
>
> Key: HADOOP-16295
> URL: https://issues.apache.org/jira/browse/HADOOP-16295
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: eBugs
>Priority: Minor
>
> Dear Hadoop developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted the following {{throw}} statement 
> whose exception class and error message indicate different error conditions.
>  
> Version: Hadoop-3.1.2
> File: 
> HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
> Line: 1387
> {code:java}
> throw new IOException("replaceFile interrupted.");{code}
>  
> An {{IOException}} can mean many different errors, while the error message 
> indicates that {{replaceFile()}} is interrupted. This mismatch could be a 
> problem. For example, callers trying to handle other {{IOException}}s may 
> accidentally (and incorrectly) handle the interrupt. An 
> {{InterruptedIOException}} may be better here.
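
A minimal sketch of the suggested tightening, assuming the surrounding 
replaceFile() logic is otherwise unchanged (an illustration, not a committed 
fix):
{code:java}
// Requires: import java.io.InterruptedIOException;
// Restore the interrupt flag, then throw the more specific exception so
// callers can tell an interrupt apart from an ordinary I/O failure.
Thread.currentThread().interrupt();
throw new InterruptedIOException("replaceFile interrupted.");
{code}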



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx opened a new pull request #799: HDDS-1451 : SCMBlockManager findPipeline and createPipeline are not lock protected.

2019-05-07 Thread GitBox
avijayanhwx opened a new pull request #799: HDDS-1451 : SCMBlockManager 
findPipeline and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799
 
 
   The getPipelines() and createPipeline() calls already seem to hold a lock in 
their implementations. However, the problem described here involves a race 
condition between the calls to getPipelines() and createPipeline() in 
BlockManagerImpl#allocateBlock. The fix is to add another getPipelines() check 
after a failed createPipeline() call to pick up any newly created pipelines; a 
minimal sketch of the pattern follows.
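
A minimal sketch of that re-check pattern, using simplified, hypothetical 
signatures (the real BlockManagerImpl#allocateBlock and pipeline manager 
methods take more parameters):

```java
// Hypothetical, simplified sketch; real signatures and types differ.
List<Pipeline> pipelines = manager.getPipelines(type, factor);
if (pipelines.isEmpty()) {
  try {
    manager.createPipeline(type, factor);
    pipelines = manager.getPipelines(type, factor);
  } catch (IOException e) {
    // createPipeline() can fail because a concurrent caller created a
    // pipeline between the two calls; re-check before giving up.
    pipelines = manager.getPipelines(type, factor);
    if (pipelines.isEmpty()) {
      throw e;
    }
  }
}
```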


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-05-07 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835084#comment-16835084
 ] 

CR Hota commented on HADOOP-16268:
--

The license warning issue seems unrelated.

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Priority: Major
> Attachments: HADOOP-16268.001.patch
>
>
> In the current implementation of the call queue manager, 
> "CallQueueOverflowException" exceptions always wrap 
> "RetriableException". Servers should be allowed, through configuration, to 
> throw custom exceptions for new use cases.
> In CallQueueManager.java, backoff is currently done as follows:
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException, clients 
> would end up hitting the same server for retries. In the use cases that the 
> router supports, these overflowed requests could be handled by another 
> router that shares the same state, thus distributing load across a cluster 
> of routers better. In the absence of any custom exception, the current 
> behavior should be supported.
> In the CallQueueOverflowException class, a new StandbyException wrapper 
> should be created, something like the below:
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  
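
A minimal sketch of how throwBackoff() could consult configuration, assuming a 
Configuration object is wired into CallQueueManager and using an illustrative 
(non-existent) property name:
{code:java}
// Hypothetical sketch: "ipc.callqueue.overflow.trigger.failover" is an
// illustrative key, not an existing Hadoop property.
private final boolean failoverOnOverflow =
    conf.getBoolean("ipc.callqueue.overflow.trigger.failover", false);

private void throwBackoff() throws IllegalStateException {
  throw failoverOnOverflow
      ? CallQueueOverflowException.DISCONNECT2  // wraps StandbyException
      : CallQueueOverflowException.DISCONNECT;  // wraps RetriableException
}
{code}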



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15566) Support Opentracing

2019-05-07 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835080#comment-16835080
 ] 

stack commented on HADOOP-15566:


Thanks for the pointer [~bogdandrutu]

> Support Opentracing
> ---
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: security
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#issuecomment-490242490
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1137 | trunk passed |
   | +1 | compile | 1293 | trunk passed |
   | +1 | checkstyle | 148 | trunk passed |
   | +1 | mvnsite | 127 | trunk passed |
   | +1 | shadedclient | 1060 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 94 | trunk passed |
   | 0 | spotbugs | 65 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 184 | trunk passed |
   | -0 | patch | 100 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 76 | the patch passed |
   | +1 | compile | 1010 | the patch passed |
   | +1 | javac | 1010 | the patch passed |
   | -0 | checkstyle | 151 | root: The patch generated 51 new + 64 unchanged - 
0 fixed = 115 total (was 64) |
   | +1 | mvnsite | 119 | the patch passed |
   | -1 | whitespace | 0 | The patch has 2 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 32 | hadoop-tools_hadoop-aws generated 2 new + 1 unchanged 
- 0 fixed = 3 total (was 1) |
   | -1 | findbugs | 73 | hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 556 | hadoop-common in the patch passed. |
   | +1 | unit | 301 | hadoop-aws in the patch passed. |
   | -1 | asflicense | 56 | The patch generated 1 ASF License warnings. |
   | | | 7451 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  org.apache.hadoop.fs.s3a.s3guard.PathOrderComparators$TopmostFirst 
implements Comparator but not Serializable  At 
PathOrderComparators.java:Serializable  At PathOrderComparators.java:[lines 
69-89] |
   |  |  org.apache.hadoop.fs.s3a.s3guard.PathOrderComparators$TopmostLast 
implements Comparator but not Serializable  At 
PathOrderComparators.java:Serializable  At PathOrderComparators.java:[lines 
98-109] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/22/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/654 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux d6fa208c0967 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8ecbf61 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/22/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/22/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/22/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/22/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/22/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/22/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 1446 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/22/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git 

[jira] [Commented] (HADOOP-15566) Support Opentracing

2019-05-07 Thread Bogdan Drutu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835083#comment-16835083
 ] 

Bogdan Drutu commented on HADOOP-15566:
---

[~smeng] the biggest difference will be how we do in-process context 
propagation. That may be a bit more work.

> Support Opentracing
> ---
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: security
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-07 Thread GitBox
noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r281808353
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java
 ##
 @@ -0,0 +1,327 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.util.Optional;
+import java.util.function.Function;
+
+import com.google.common.util.concurrent.ListeningExecutorService;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.Invoker;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3AInstrumentation;
+import org.apache.hadoop.fs.s3a.S3AStorageStatistics;
+import org.apache.hadoop.fs.s3a.Statistic;
+import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.SemaphoredDelegatingExecutor;
+
+/**
+ * This class provides the core context of the S3A filesystem to subsidiary
+ * components, without exposing the entire parent class.
+ * This is to eliminate explicit recursive coupling.
+ *
+ * Where methods on the FS are to be invoked, they are all passed in
+ * via functional interfaces, so test setups can pass in mock callbacks
+ * instead.
+ *
+ * Warning: this really is private and unstable. Do not use
+ * outside the org.apache.hadoop.fs.s3a package.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public class StoreContext {
+
+  /** Filesystem URI. */
+  private final URI fsURI;
+
+  /** Bucket name. */
+  private final String bucket;
+
+  /** FS configuration after all per-bucket overrides applied. */
+  private final Configuration configuration;
+
+  /** Username. */
+  private final String username;
+
+  /** Principal who created the FS. */
+  private final UserGroupInformation owner;
+
+  /**
+   * Location of a bucket.
+   * Optional as the AWS call to evaluate this may fail from a permissions
+   * or other IOE.
+   */
+  public final Optional<String> bucketLocation;
+
+  /**
+   * Bounded thread pool for async operations.
+   */
+  private final ListeningExecutorService executor;
+
+  /**
+   * Capacity of new executors created.
+   */
+  private final int executorCapacity;
+
+  /** Invoker of operations. */
+  private final Invoker invoker;
+
+  /* Instrumentation and statistics. */
+  private final S3AInstrumentation instrumentation;
+  private final S3AStorageStatistics storageStatistics;
+
+  /** Seek policy. */
+  private final S3AInputPolicy inputPolicy;
+
+  /** How to react to changes in etags and versions. */
+  private final ChangeDetectionPolicy changeDetectionPolicy;
+
+  /** Evaluated options. */
+  private final boolean multiObjectDeleteEnabled;
+
+  /** List algorithm. */
+  private final boolean useListV1;
+
+  /** Is the store versioned? */
+  private final boolean versioned;
+
+  /**
+   * To allow this context to be passed down to the metastore, this field
+   * will be null until initialized.
+   */
+  private final MetadataStore metadataStore;
+
+  /** Function to take a key and return a path. */
+  private final Function<String, Path> keyToPathQualifier;
+
+  /** Factory for temporary files. */
+  private final TempFileFactory tempFileFactory;
+
+  /**
+   * Instantiate.
+   * No attempt to use a builder here as outside tests
+   * this should only be created in the S3AFileSystem.
+   */
+  public StoreContext(final URI fsURI,
+  final String bucket,
+  final Configuration configuration,
+  final String username,
+  final UserGroupInformation owner,
+  final ListeningExecutorService executor,
+  final int executorCapacity,
+  final Invoker invoker,
+  final S3AInstrumentation instrumentation,
+  final S3AStorageStatistics storageStatistics,
+  final S3AInputPolicy inputPolicy,
+  final ChangeDetectionPolicy 

[jira] [Commented] (HADOOP-16300) ebugs automated bug checker is reporting exception issues

2019-05-07 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835067#comment-16835067
 ] 

Steve Loughran commented on HADOOP-16300:
-

[~ebugs-in-cloud-systems] - can you subscribe to the hadoop common dev list and 
talk about your work before doing the submissions? We are worried that we'll 
get overloaded with JIRAs. A single bulk "Hadoop common has issues" JIRA is 
often a better way to handle this.

> ebugs automated bug checker is reporting exception issues
> -
>
> Key: HADOOP-16300
> URL: https://issues.apache.org/jira/browse/HADOOP-16300
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: eBugs
>Priority: Major
>
> "ebugs-in-cloud-systems" is reporting issues related to exception handling
> Some of them may be be correct, some of them appear to have misunderstood 
> issues.
> For reference
> * we aren't going to change the signature of public APIs, so new exception 
> classes will not be added. anything which suggests that will be WONTFIX.
> * we probably aren't going to change code which throws unchecked exceptions 
> to checked ones unless there is a good reason.
> * we can and do tighten the exceptions thrown in failures (e.g. replace an 
> IOException with an InterruptedIOException. Patches welcome there, with tests.
> * making sure we don't lose the stack traces of inner causes would be nice 
> too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #788: HDDS-1475 : Fix OzoneContainer start method.

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #788: HDDS-1475 : Fix OzoneContainer start 
method.
URL: https://github.com/apache/hadoop/pull/788#issuecomment-490234750
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 101 | Maven dependency ordering for branch |
   | +1 | mvninstall | 800 | trunk passed |
   | +1 | compile | 537 | trunk passed |
   | +1 | checkstyle | 142 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1053 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 122 | trunk passed |
   | 0 | spotbugs | 239 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 419 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 394 | the patch passed |
   | +1 | compile | 192 | the patch passed |
   | +1 | javac | 192 | the patch passed |
   | +1 | checkstyle | 59 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 660 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 125 | the patch passed |
   | +1 | findbugs | 426 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 142 | hadoop-hdds in the patch failed. |
   | -1 | unit | 820 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 39 | The patch generated 1 ASF License warnings. |
   | | | 6216 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/788 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f4f8d91f6f78 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8ecbf61 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/5/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/5/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 4688 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-05-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835069#comment-16835069
 ] 

Hadoop QA commented on HADOOP-16268:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
3m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
41s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968086/HADOOP-16268.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 76f8c82e0ed7 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8ecbf61 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16234/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16234/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1346 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16234/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Allow custom wrapped exception to be 

[jira] [Commented] (HADOOP-16300) ebugs automated bug checker is reporting exception issues

2019-05-07 Thread eBugs (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835070#comment-16835070
 ] 

eBugs commented on HADOOP-16300:


Definitely. Actually, we've already uploaded all the suspicious code we've 
found so far, but we will do as you suggest in the future. We apologize again 
for the inconvenience.

> ebugs automated bug checker is reporting exception issues
> -
>
> Key: HADOOP-16300
> URL: https://issues.apache.org/jira/browse/HADOOP-16300
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: eBugs
>Priority: Major
>
> "ebugs-in-cloud-systems" is reporting issues related to exception handling
> Some of them may be be correct, some of them appear to have misunderstood 
> issues.
> For reference
> * we aren't going to change the signature of public APIs, so new exception 
> classes will not be added. anything which suggests that will be WONTFIX.
> * we probably aren't going to change code which throws unchecked exceptions 
> to checked ones unless there is a good reason.
> * we can and do tighten the exceptions thrown in failures (e.g. replace an 
> IOException with an InterruptedIOException. Patches welcome there, with tests.
> * making sure we don't lose the stack traces of inner causes would be nice 
> too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-07 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835068#comment-16835068
 ] 

Steve Loughran commented on HADOOP-16293:
-

thanks for spotting this. submit a PR and I'll merge it in

> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
>
> AuthenticationFilterInitializer doc has speudo instead of pseudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16280) S3Guard: Retry failed read with backoff in Authoritative mode when file can be opened

2019-05-07 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835062#comment-16835062
 ] 

Steve Loughran commented on HADOOP-16280:
-

We should be retrying here, as in.read() will call S3AInputStream.reopen(), 
which will spin when S3Guard is in use.

If you are seeing this, what are your retry attempt/delay options?

> S3Guard: Retry failed read with backoff in Authoritative mode when file can 
> be opened
> -
>
> Key: HADOOP-16280
> URL: https://issues.apache.org/jira/browse/HADOOP-16280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Priority: Major
>
> When using S3Guard in authoritative mode, AWS S3 can report a file as 
> missing, as described in the following exception:
> {noformat}
> java.io.FileNotFoundException: re-open 
> s3a://cloudera-dev-gabor-ireland/test/TMCDOR-021df1ad-633f-47b8-97f5-6cd93f0b82d0
>  at 0 on 
> s3a://cloudera-dev-gabor-ireland/test/TMCDOR-021df1ad-633f-47b8-97f5-6cd93f0b82d0:
>  
> com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not 
> exist. (Service: Amazon S3; Status Code: 404; Error 
> Code: NoSuchKey; Request ID: E1FF9EA9B5DBBD7E; S3 Extended Request ID: 
> NzNIL4+dyA89WTnfbcwuYQK+hCfx51TfavwgC3oEvQI0IQ9M/zAspbXOfBIis8/nTolc4tRB9ik=),
>  S3 Extended Request ID: 
> NzNIL4+dyA89WTnfbcwuYQK+hCfx51TfavwgC3oEvQI0IQ9M/zAspbXOfBIis8/nTolc4tRB9ik=:NoSuchKey
> {noformat}
> But the metadata in S3Guard (e.g. DynamoDB) is there, so the file can be 
> opened. The open itself will not fail; the failure happens when we try to 
> read, so the call
> {noformat}
> FSDataInputStream is = guardedFs.open(testFilePath);{noformat}
> won't fail, but the next call
> {noformat}
> byte[] firstRead = new byte[text.length()];
> is.read(firstRead, 0, firstRead.length);
> {noformat}
> will fail with an exception message like the one above.
> Once authoritative mode is on, we assume that there are no out-of-band 
> operations, so the file will appear eventually. We should retry in this case.
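
A minimal sketch of the proposed retry, with an illustrative attempt count and 
backoff (a real fix would go through the S3A retry/Invoker machinery instead):
{code:java}
// Illustrative only: the attempt count and backoff interval are placeholders.
byte[] firstRead = new byte[text.length()];
int attempts = 0;
while (true) {
  try {
    is.read(firstRead, 0, firstRead.length);
    break;
  } catch (FileNotFoundException e) {
    // In authoritative mode the entry is known to S3Guard, so the object
    // should appear in S3 eventually; back off and retry a few times.
    if (++attempts >= 5) {
      throw e;
    }
    try {
      Thread.sleep(1000L * attempts);
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
      throw e;
    }
  }
}
{code}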



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-07 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835063#comment-16835063
 ] 

Steve Loughran commented on HADOOP-16279:
-

Gabor, I'm happy to get rid of LocalMetadataStore. It was useful for a while, 
but with DDB offering on-demand capacity, I don't see why a test-only feature 
needs to be retained any more.

> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * This is not the same as, and does not use, [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behaviour than what DDB promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.
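
A minimal sketch of a shared expiry check, assuming a single TTL applied to 
both entries and tombstones (the method and accessor names are placeholders, 
not the final API):
{code:java}
// Placeholder names: getLastUpdated() stands in for whatever timestamp
// accessor ExpirableMetadata ends up exposing. One TTL value covers both
// regular entries and tombstones, per the single-config proposal above.
boolean isExpired(ExpirableMetadata entry, long ttlMillis, long nowMillis) {
  long lastUpdated = entry.getLastUpdated();
  // Entries that were never timestamped are treated as expired so they
  // get refreshed from S3 on the next lookup.
  return lastUpdated == 0 || lastUpdated + ttlMillis <= nowMillis;
}
{code}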



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-07 Thread GitBox
noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r281806151
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/PathToBucketKeys.java
 ##
 @@ -0,0 +1,22 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+public class PathToBucketKeys {
 
 Review comment:
   Is this used anywhere?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-07 Thread GitBox
noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r281802726
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -1199,115 +1229,248 @@ private boolean innerRename(Path source, Path dest)
   }
 }
 
-// If we have a MetadataStore, track deletions/creations.
-Collection<Path> srcPaths = null;
-List<PathMetadata> dstMetas = null;
-if (hasMetadataStore()) {
-  srcPaths = new HashSet<>(); // srcPaths need fast look up before put
-  dstMetas = new ArrayList<>();
-}
-// TODO S3Guard HADOOP-13761: retries when source paths are not visible yet
-// TODO S3Guard: performance: mark destination dirs as authoritative
-
-// Ok! Time to start
-if (srcStatus.isFile()) {
-  LOG.debug("rename: renaming file {} to {}", src, dst);
-  long length = srcStatus.getLen();
-  if (dstStatus != null && dstStatus.isDirectory()) {
-String newDstKey = maybeAddTrailingSlash(dstKey);
-String filename =
-srcKey.substring(pathToKey(src.getParent()).length()+1);
-newDstKey = newDstKey + filename;
-copyFile(srcKey, newDstKey, length);
-S3Guard.addMoveFile(metadataStore, srcPaths, dstMetas, src,
-keyToQualifiedPath(newDstKey), length, getDefaultBlockSize(dst),
-username);
-  } else {
-copyFile(srcKey, dstKey, srcStatus.getLen());
-S3Guard.addMoveFile(metadataStore, srcPaths, dstMetas, src, dst,
-length, getDefaultBlockSize(dst), username);
-  }
-  innerDelete(srcStatus, false);
-} else {
-  LOG.debug("rename: renaming directory {} to {}", src, dst);
+// Validation completed: time to begin the operation.
+// The store-specific rename operation is used to keep the store
+// to date with the in-progress operation.
+// for the null store, these are all no-ops.
+final RenameTracker renameTracker =
+metadataStore.initiateRenameOperation(
+createStoreContext(),
+src, srcStatus, dest);
+final AtomicLong bytesCopied = new AtomicLong();
+int renameParallelLimit = 10;
 
 Review comment:
   Maybe add a comment about how you determined 10 to be optimal, and why it 
doesn't need to be configurable.
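
   If it does become configurable, one possible shape, sketched with a 
hypothetical key name ("fs.s3a.rename.parallel.limit" is not an existing 
property):
```java
// Illustrative only: derive the limit from configuration rather than
// hard-coding it; the key and default here are assumptions.
int renameParallelLimit = getConf().getInt(
    "fs.s3a.rename.parallel.limit", 10);
```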


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-07 Thread GitBox
noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r281801558
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -138,9 +138,15 @@ private Constants() {
   public static final String ASSUMED_ROLE_CREDENTIALS_DEFAULT =
   SimpleAWSCredentialsProvider.NAME;
 
+
+  // the maximum number of tasks cached if all threads are already uploading
+  public static final String MAX_TOTAL_TASKS = "fs.s3a.max.total.tasks";
+
+  public static final int DEFAULT_MAX_TOTAL_TASKS = 5;
+
   // number of simultaneous connections to s3
   public static final String MAXIMUM_CONNECTIONS = "fs.s3a.connection.maximum";
-  public static final int DEFAULT_MAXIMUM_CONNECTIONS = 15;
+  public static final int DEFAULT_MAXIMUM_CONNECTIONS = 
DEFAULT_MAX_TOTAL_TASKS * 2;
 
 Review comment:
   Should this match core-default.xml? It's 48 there.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-07 Thread GitBox
noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r281801365
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -138,9 +138,15 @@ private Constants() {
   public static final String ASSUMED_ROLE_CREDENTIALS_DEFAULT =
   SimpleAWSCredentialsProvider.NAME;
 
+
+  // the maximum number of tasks cached if all threads are already uploading
+  public static final String MAX_TOTAL_TASKS = "fs.s3a.max.total.tasks";
+
+  public static final int DEFAULT_MAX_TOTAL_TASKS = 5;
 
 Review comment:
   Should this match core-default.xml? It's 32 there.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-07 Thread GitBox
noslowerdna commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r281801148
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -283,6 +285,22 @@ private Constants() {
   @InterfaceStability.Unstable
   public static final int DEFAULT_FAST_UPLOAD_ACTIVE_BLOCKS = 4;
 
+  /**
+   * The capacity of executor queues for operations other than block
+   * upload, where {@link #FAST_UPLOAD_ACTIVE_BLOCKS} is used instead.
+   * This should be less than {@link #MAX_THREADS} for fair
+   * submission.
+   * Value: {@value}.
+   */
+  public static final String EXECUTOR_CAPACITY = "fs.s3a.executor.capacity";
+
+  /**
+   * The capacity of executor queues for operations other than block
+   * upload, where {@link #FAST_UPLOAD_ACTIVE_BLOCKS} is used instead.
+   * Value: {@value}
+   */
+  public static final int DEFAULT_EXECUTOR_CAPACITY = 10;
 
 Review comment:
   Should this match core-default.xml? It's 16 there.
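
   For context, the Java-side constant only applies when the key is absent 
from the loaded resources, so whatever core-default.xml ships is what wins. A 
minimal illustration, assuming the patch puts fs.s3a.executor.capacity = 16 in 
core-default.xml:
```java
// Configuration loads core-default.xml by default, so the XML value,
// not DEFAULT_EXECUTOR_CAPACITY, is what takes effect here.
Configuration conf = new Configuration();
int capacity = conf.getInt(EXECUTOR_CAPACITY, DEFAULT_EXECUTOR_CAPACITY);
// capacity == 16 under the assumption above, not 10
```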


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on issue #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-05-07 Thread GitBox
bshashikant commented on issue #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-490224568
 
 
   The 1 test failure reported is not related to the patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #798: HDDS-1499. OzoneManager Cache.

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #798: HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-49070
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 398 | trunk passed |
   | +1 | compile | 209 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 863 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 120 | trunk passed |
   | 0 | spotbugs | 240 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 420 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 406 | the patch passed |
   | +1 | compile | 205 | the patch passed |
   | +1 | javac | 205 | the patch passed |
   | -0 | checkstyle | 29 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 725 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 126 | the patch passed |
   | +1 | findbugs | 451 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 155 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1197 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 5651 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/798 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e291fdb1560b 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1a696cc |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/2/testReport/ |
   | Max. process+thread count | 4612 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #797: HDDS-1489. Unnecessary log messages on console with Ozone shell.

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #797: HDDS-1489. Unnecessary log messages on 
console with Ozone shell.
URL: https://github.com/apache/hadoop/pull/797#issuecomment-490220836
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 37 | Maven dependency ordering for branch |
   | +1 | mvninstall | 390 | trunk passed |
   | +1 | compile | 210 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 774 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 428 | the patch passed |
   | +1 | compile | 210 | the patch passed |
   | +1 | javac | 210 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 656 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 134 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 139 | hadoop-hdds in the patch failed. |
   | -1 | unit | 845 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 43 | The patch generated 1 ASF License warnings. |
   | | | 4873 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/797 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient shellcheck shelldocs |
   | uname | Linux 27c7ba8ec768 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8ecbf61 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/2/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/2/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 4282 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist hadoop-ozone/ozonefs U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-07 Thread GitBox
steveloughran commented on issue #654: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#issuecomment-490219003
 
 
   +testing. S3 Ireland (versioned store), with/without auth. Also: local. I've 
expressed interest in removing the local mode as it's a distraction in tests (it 
doesn't match production, so what does it prove?): the changes here have the 
potential to amplify that mismatch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant merged pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-05-07 Thread GitBox
bshashikant merged pull request #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-07 Thread GitBox
steveloughran commented on issue #654: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#issuecomment-490218551
 
 
   This patch is now in a state where it is ready for review
   
   * it's going to have to be changed to keep up with the S3Guard versioning 
patches so I'm hoping to nurture those in, but the incompatibilities are 
related to the type of FileStatus passed around & general git merge problems, 
rather than functional conflict.
   
   There's one production side improvement I'd like to add.
   
   This new patch does the move incrementally: whenever you add a file we call 
s3guard.move(null, dest-file-status) to add the destination (and ancestors); on 
a bulk delete we update the deletes.
   
   But: that move(List, List) call creates all the parent paths, relying on a 
hash table to avoid duplicates. Once you move to single-file additions then 
both that and metastore.put() are creating too many entries due to their need 
to meet the goal of "no duplicates". I want to restore the original behavior by 
passing the map being built up in the rename tracker in to the metastore, so it 
knows what already exists. (Note: this all needs to be done thread-safely, so 
that when > 1 copy completes... I don't want the locks for that to also block 
other updates to the metastore.)
   
   This isn't a functionality change, it's a performance and cost improvement, 
one designed to keep those DDB write IOPs down.
   
   ## Please take a look at the code as it stands.
   
   The architecture is based on my [refactoring 
S3A](https://github.com/steveloughran/engineering-proposals/blob/master/refactoring-s3a.md)
 doc - the new classes are designed to work with the new `StoreContext` class; 
the metastore moves with this.
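
   As a sketch of the duplicate-suppression idea above (all names here are 
hypothetical, not the actual RenameTracker API):
```java
// Hypothetical sketch: a concurrent set shared by the copy threads records
// which ancestor directories have already been written to the metastore,
// so completing copies neither duplicate DDB writes nor block each other.
private final Set<Path> ancestorsCreated = ConcurrentHashMap.newKeySet();

private void recordAncestors(MetadataStore store, Path destFile,
    String username) throws IOException {
  for (Path p = destFile.getParent(); p != null && !p.isRoot();
      p = p.getParent()) {
    // Set.add() is atomic: only the first thread to claim a parent writes it.
    if (!ancestorsCreated.add(p)) {
      break; // this ancestor and everything above it is already recorded
    }
    // makeDirStatus() is a hypothetical helper building a directory status.
    store.put(new PathMetadata(makeDirStatus(p, username)));
  }
}
```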


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-490217435
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 15 | https://github.com/apache/hadoop/pull/749 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/749 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/14/console |
   | versions | git=1.9.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #798: HDDS-1499. OzoneManager Cache.

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #798: HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-490215309
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 405 | trunk passed |
   | +1 | compile | 195 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 815 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | trunk passed |
   | 0 | spotbugs | 235 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 411 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 404 | the patch passed |
   | +1 | compile | 210 | the patch passed |
   | +1 | javac | 210 | the patch passed |
   | -0 | checkstyle | 31 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 654 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 117 | the patch passed |
   | +1 | findbugs | 433 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 136 | hadoop-hdds in the patch failed. |
   | -1 | unit | 867 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 5197 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/798 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e994bc971635 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1a696cc |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/1/testReport/ |
   | Max. process+thread count | 4725 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2019-05-07 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835029#comment-16835029
 ] 

Steve Loughran commented on HADOOP-15183:
-

For people watching this JIRA, the PR #654 is ready for people to look at: 
https://github.com/apache/hadoop/pull/654

There's still a little bit more I'd like to do, but the core code is there and 
functional.

> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10 and read only to 
> caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15566) Support Opentracing

2019-05-07 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835016#comment-16835016
 ] 

Siyao Meng commented on HADOOP-15566:
-

[~bogdandrutu] Thanks. We'll look into that a bit.
As long as OpenTelemetry has similar APIs to OpenTracing, it shouldn't be hard 
to migrate.
And it doesn't conflict with the current goal of minimizing changes to the 
existing Hadoop codebase.

> Support Opentracing
> ---
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: security
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15566) Support Opentracing

2019-05-07 Thread Bogdan Drutu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835006#comment-16835006
 ] 

Bogdan Drutu commented on HADOOP-15566:
---

[~jojochuang] you should probably consider using the new merged project 
between OpenCensus and OpenTracing :) 
[https://github.com/cncf/toc/pull/233/files] - it will probably be available in 
the next couple of months (the deadline is the 6th of July)

> Support Opentracing
> ---
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: security
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-05-07 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HADOOP-16268:
-
Attachment: HADOOP-16268.001.patch
Status: Patch Available  (was: Open)

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Priority: Major
> Attachments: HADOOP-16268.001.patch
>
>
> In the current implementation of callqueue manager, 
> "CallQueueOverflowException" exceptions are always wrapping 
> "RetriableException". Through configs servers should be allowed to throw 
> custom exceptions based on new use cases.
> In CallQueueManager.java for backoff the below is done 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException clients would 
> end up hitting the same server for retries. In use cases that router supports 
> these overflowed requests could be handled by another router that shares the 
> same state thus distributing load across a cluster of routers better. In the 
> absence of any custom exception, current behavior should be supported.
> In CallQueueOverflowException class a new Standby exception wrap should be 
> created. Something like the below
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-05-07 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834997#comment-16834997
 ] 

CR Hota commented on HADOOP-16268:
--

[~xkrogen] Could you also help take a look at this change and share your 
thoughts?
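
For context, a rough sketch of the config-driven behaviour the description asks 
for (the key name and wiring are hypothetical, not part of the patch):
{code:java}
// Hypothetical: let configuration choose which wrapped exception the server
// throws on call-queue overflow; the default keeps today's behaviour.
private void throwBackoff() throws IllegalStateException {
  if ("standby".equals(conf.get("ipc.callqueue.overflow.exception"))) {
    throw CallQueueOverflowException.DISCONNECT2; // wraps StandbyException
  }
  throw CallQueueOverflowException.DISCONNECT;    // wraps RetriableException
}
{code}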

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Priority: Major
> Attachments: HADOOP-16268.001.patch
>
>
> In the current implementation of callqueue manager, 
> "CallQueueOverflowException" exceptions are always wrapping 
> "RetriableException". Through configs servers should be allowed to throw 
> custom exceptions based on new use cases.
> In CallQueueManager.java for backoff the below is done 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException clients would 
> end up hitting the same server for retries. In use cases that router supports 
> these overflowed requests could be handled by another router that shares the 
> same state thus distributing load across a cluster of routers better. In the 
> absence of any custom exception, current behavior should be supported.
> In CallQueueOverflowException class a new Standby exception wrap should be 
> created. Something like the below
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #788: HDDS-1475 : Fix OzoneContainer start method.

2019-05-07 Thread GitBox
bharatviswa504 commented on a change in pull request #788: HDDS-1475 : Fix 
OzoneContainer start method.
URL: https://github.com/apache/hadoop/pull/788#discussion_r281770846
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
 ##
 @@ -160,8 +160,12 @@ private void startContainerScrub() {
   LOG.info("Background container scrubber has been disabled by {}",
   HddsConfigKeys.HDDS_CONTAINERSCRUB_ENABLED);
 } else {
-  this.scrubber = new ContainerScrubber(containerSet, config);
-  scrubber.up();
+  if (this.scrubber == null) {
+this.scrubber = new ContainerScrubber(containerSet, config);
+  }
+  if (this.scrubber.isHalted()) {
 
 Review comment:
   I think we don't need this check here, as scrubber up() is already taking 
care of multiple starts:

    public void up() {
      this.halt = false;
      if (this.scrubThread == null) {
        this.scrubThread = new Thread(this);
        scrubThread.start();
      } else {
        LOG.info("Scrubber up called multiple times. Scrub thread already up.");
      }
    }


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #784: HADOOP-16050: s3a SSL connections should use OpenSSL

2019-05-07 Thread GitBox
steveloughran commented on issue #784: HADOOP-16050: s3a SSL connections should 
use OpenSSL
URL: https://github.com/apache/hadoop/pull/784#issuecomment-490186629
 
 
   Patch LGTM. We're changing the scope of the wildfly lib from compile to 
runtime in ABFS - is everyone happy with this? I am. 
   
   Test results seem good too.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16301) No enum constant Operation.GET_BLOCK_LOCATIONS

2019-05-07 Thread Roksolana Diachuk (JIRA)
Roksolana Diachuk created HADOOP-16301:
--

 Summary: No enum constant Operation.GET_BLOCK_LOCATIONS 
 Key: HADOOP-16301
 URL: https://issues.apache.org/jira/browse/HADOOP-16301
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, fs
Affects Versions: 2.8.5, 2.7.7, 2.9.2, 2.7.6, 2.8.4, 2.9.1, 3.0.0, 2.7.5, 
2.8.3, 2.8.2, 2.8.1, 2.7.4, 2.9.0, 2.7.3, 2.7.2, 2.7.1, 2.8.0, 2.7.0, 2.7.8, 
2.8.6
 Environment: Running on Ubuntu 16.04

Hadoop v2.7.4

Minikube v1.0.1

Scala v2.11

Spark v2.4.2

 
Reporter: Roksolana Diachuk


I was trying to read Avro file contents from HDFS using a Spark application and 
Httpfs configured in minikube (for running Kubernetes locally). Each time I try 
to read the files I get this exception:
{code:java}
Exception in thread "main" 
org.apache.hadoop.ipc.RemoteException(com.sun.jersey.api.ParamException$QueryParamException):
 java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.GET_BLOCK_LOCATIONS
 at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:118)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:367)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:98)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:625)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:472)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:502)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:498)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileBlockLocations(WebHdfsFileSystem.java:1420)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileBlockLocations(WebHdfsFileSystem.java:1404)
 at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:343)
 at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:204)
 at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
 at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
 at scala.Option.getOrElse(Option.scala:121)
 at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
 at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
 at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
 at scala.Option.getOrElse(Option.scala:121)
 at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
 at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
 at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
 at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
 at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
 at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
 at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
 at spark_test.TestSparkJob$.main(TestSparkJob.scala:48)
 at spark_test.TestSparkJob.main(TestSparkJob.scala){code}
 

I access HDFS using Httpfs set up in Kubernetes. My Spark application runs 
outside of the K8s cluster; therefore, all the services are accessed using 
NodePorts. When I launch the Spark app inside the K8s cluster and use only the 
HDFS client or WebHDFS, I can get all the file contents. The error occurs only 
when I execute an app outside of the cluster, which is when I access HDFS 
using Httpfs.

So I checked the Hadoop sources and found that there is no such enum constant 
as GET_BLOCK_LOCATIONS. It is named GETFILEBLOCKLOCATIONS in the Operation enum, 
per [this 
link|https://github.com/apache/hadoop/blob/release-2.7.4-RC0/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java].
 The same applies to all the Hadoop versions I have checked (2.7.4 and higher).



The conclusion would be that HDFS and HttpFs are not compatible in their 
operation names, and the same may be true for other operations. So it is not 
yet possible to read the data from HDFS using Httpfs.
Is it possible to fix this error somehow?
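
For illustration, the mismatch reduces to an Enum.valueOf failure (a minimal 
reproduction, assuming the HttpFS server resolves the operation name this way):
{code:java}
// Illustrative only: valueOf() throws for any name not in the enum,
// which is the error surfaced through the QueryParamException above.
HttpFSFileSystem.Operation op =
    HttpFSFileSystem.Operation.valueOf("GET_BLOCK_LOCATIONS");
// -> java.lang.IllegalArgumentException: No enum constant
//    org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.GET_BLOCK_LOCATIONS
{code}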



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 opened a new pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-07 Thread GitBox
bharatviswa504 opened a new pull request #798: HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16298) Manage/Renew delegation tokens for externally scheduled jobs

2019-05-07 Thread Clay B. (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834961#comment-16834961
 ] 

Clay B. commented on HADOOP-16298:
--

Pinging [~owen.omalley], [~bosco], [~lmc...@apache.org] from our discussion on 
this at DataWorks Summit.

> Manage/Renew delegation tokens for externally scheduled jobs
> 
>
> Key: HADOOP-16298
> URL: https://issues.apache.org/jira/browse/HADOOP-16298
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.3, 2.9.0, 3.2.0, 3.3.0
>Reporter: Pankaj Deshpande
>Priority: Major
> Attachments: Proposal for changes to UGI for managing_renewing 
> externally managed delegation tokens.pdf
>
>
> * Presently when jobs are run in the Hadoop ecosystem, the implicit 
> assumption is that YARN will be used as a scheduling agent with access to 
> appropriate keytabs for renewal of kerberos tickets and delegation tokens. 
>  * Jobs that interact with kerberized hadoop services such as hbase/hive/hdfs 
> and use an external scheduler such as Kubernetes, typically do not have 
> access to keytabs. In such cases, delegation tokens are a logical choice for 
> interacting with a kerberized cluster. These tokens are issued based on some 
> external auth mechanism (such as Kube LDAP authentication).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-07 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834960#comment-16834960
 ] 

Gabor Bota commented on HADOOP-16279:
-

I'm using a separate value in my solution.
I will provide a PR tomorrow - LocalMetadataStore's prune is more complicated 
than I expected.
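
As a sketch, the single-TTL expiry check could look something like this 
(assuming a last-updated timestamp on {{ExpirableMetadata}}; the config key, 
default, and time source are placeholders):
{code:java}
// Illustrative: one TTL covers both metadata entries and tombstones.
long ttl = conf.getLong("fs.s3a.metadatastore.metadata.ttl", DEFAULT_TTL);
long now = timeProvider.getNow();              // assumed time source
boolean expired = meta.getLastUpdated() + ttl <= now;
if (expired) {
  // treat the entry (or tombstone) as missing and fall back to S3
}
{code}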

> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * This is not the same, and not using the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behaviour than what ddb promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834908#comment-16834908
 ] 

Eric Yang commented on HADOOP-16287:


[~Prabhu Joseph] thank you for the patch.  Patch 003 has two problems.  First, 
it adds a request attribute for doAsUser.  If downstream logic does not look at 
the request attribute for doAsUser, the request has the full privileges of the 
authenticated user; and if downstream logic sets another request attribute 
doAsUser, the request can switch to any user.  This is not secure.  Second, 
when catching AuthorizationException, it does not return immediately after 
producing a FORBIDDEN response.  The request will continue through the filter 
chain.  This will leak more data to the caller if the caller keeps listening 
for packets after getting the FORBIDDEN message instead of disconnecting.
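
A minimal sketch of the second point, in generic servlet-filter form 
(illustrative shape only, not the actual handler code; checkProxyUser is a 
hypothetical authorization check):
{code:java}
// Stop processing as soon as authorization fails; the early return is
// what prevents the rest of the filter chain from running.
try {
  checkProxyUser(request, doAsUser);
} catch (AuthorizationException e) {
  httpResponse.sendError(HttpServletResponse.SC_FORBIDDEN, e.getMessage());
  return;  // without this, processing continues and can leak data
}
filterChain.doFilter(request, response);
{code}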

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16827-003.patch
>
>
> Knox passes doAs with end user while accessing RM, WebHdfs Rest Api. 
> Currently KerberosAuthenticationHandler sets the remote user to Knox. Need 
> Trusted Proxy Support by reading doAs query parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek closed pull request #726: HDDS-1424. Support multi-container robot test execution

2019-05-07 Thread GitBox
elek closed pull request #726: HDDS-1424. Support multi-container robot test 
execution
URL: https://github.com/apache/hadoop/pull/726
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #726: HDDS-1424. Support multi-container robot test execution

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #726: HDDS-1424. Support multi-container robot 
test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-490140291
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   | -1 | patch | 14 | https://github.com/apache/hadoop/pull/726 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/726 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/7/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16295) FileUtil.replaceFile() throws an IOException when it is interrupted

2019-05-07 Thread eBugs (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834899#comment-16834899
 ] 

eBugs commented on HADOOP-16295:


After looking at the related code, I found that one of FileUtil.replaceFile()'s 
callers, FileIoProvider.replaceFile(), catches all {{Exception}}s and checks for 
disk errors:
{code:java}
public void replaceFile(...) throws IOException {
  ...
  try {
...
FileUtil.replaceFile(src, target);
...
  } catch(Exception e) {
onFailure(volume, begin); // This calls DataNode.checkDiskErrorAsync()
throw e;
  }
}{code}
 

If the exception is thrown because of an interrupt, maybe the disk check can be 
skipped? If so, throwing an {{InterruptedIOException}} makes it easier to 
differentiate interrupts from actual file system errors, which also throw 
{{IOException}}s, e.g., FileUtil.replaceFile()#line1390-1394:
{code:java}
if (!src.renameTo(target)) {
  throw new IOException("Unable to rename " + src +
" to " + target);
}
{code}
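
One way the interrupt case could be separated, sketched here (not a committed 
fix; the sleep is a stand-in for whatever interruptible wait replaceFile() 
performs):
{code:java}
// Sketch: restore the thread's interrupt status and rethrow a type that
// callers can tell apart from genuine file system failures.
try {
  Thread.sleep(retryInterval);
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
  throw new InterruptedIOException("replaceFile interrupted.");
}
{code}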

> FileUtil.replaceFile() throws an IOException when it is interrupted
> ---
>
> Key: HADOOP-16295
> URL: https://issues.apache.org/jira/browse/HADOOP-16295
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: eBugs
>Priority: Minor
>
> Dear Hadoop developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted the following {{throw}} statement 
> whose exception class and error message indicate different error conditions.
>  
> Version: Hadoop-3.1.2
> File: 
> HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
> Line: 1387
> {code:java}
> throw new IOException("replaceFile interrupted.");{code}
>  
> An {{IOException}} can mean many different errors, while the error message 
> indicates that {{replaceFile()}} is interrupted. This mismatch could be a 
> problem. For example, the callers trying to handle other {{IOException}} may 
> accidentally (and incorrectly) handle the interrupt. An 
> {{InterruptedIOException}} may be better here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16294) Enable access to input options by DistCp subclasses

2019-05-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834883#comment-16834883
 ] 

Hadoop QA commented on HADOOP-16294:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
1s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
58s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/3/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/796 |
| JIRA Issue | HADOOP-16294 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 78ad66f77a26 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 49e1292 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/3/testReport/ |
| Max. process+thread count | 312 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| 

[GitHub] [hadoop] hadoop-yetus commented on issue #796: HADOOP-16294: Enable access to input options by DistCp subclasses

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #796: HADOOP-16294: Enable access to input 
options by DistCp subclasses
URL: https://github.com/apache/hadoop/pull/796#issuecomment-490130377
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :----: | ----: | :---- | :---- |
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1184 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 884 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | +1 | checkstyle | 18 | the patch passed |
   | +1 | mvnsite | 31 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 937 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   | +1 | findbugs | 55 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 898 | hadoop-distcp in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 4436 | |
   
   
   | Subsystem | Report/Notes |
   | ----: | :---- |
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/796 |
   | JIRA Issue | HADOOP-16294 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 78ad66f77a26 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 49e1292 |
   | Default Java | 1.8.0_191 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/3/testReport/ |
   | Max. process+thread count | 312 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on a change in pull request #726: HDDS-1424. Support multi-container robot test execution

2019-05-07 Thread GitBox
elek commented on a change in pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r281687356
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozones3/test.sh
 ##
 @@ -0,0 +1,32 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm basic/basic.robot
 
 Review comment:
   Ok, after some thinking I understand. It may not be required all the time 
once we have more advanced tests. For example, if the test plan contains a longer 
freon run, the basic test can be removed. 
   
   But it's fast and adds an additional safety level (we don't start any other 
test if the basic freon run doesn't work), so it's not a big problem and we can 
improve it later.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to validate the response from server in the Read path.

2019-05-07 Thread GitBox
jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to 
validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793#discussion_r281688178
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##
 @@ -121,12 +121,12 @@
   TimeDuration.valueOf(3000, TimeUnit.MILLISECONDS);
   public static final String DFS_RATIS_CLIENT_REQUEST_MAX_RETRIES_KEY =
   "dfs.ratis.client.request.max.retries";
-  public static final int DFS_RATIS_CLIENT_REQUEST_MAX_RETRIES_DEFAULT = 20;
+  public static final int DFS_RATIS_CLIENT_REQUEST_MAX_RETRIES_DEFAULT = 180;
 
 Review comment:
   What's the purpose of modifying it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to validate the response from server in the Read path.

2019-05-07 Thread GitBox
jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to 
validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793#discussion_r281677759
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
 ##
 @@ -0,0 +1,344 @@
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.ozone.client.io;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.scm.storage.BufferPool;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.ListIterator;
+
+/**
+ * This class manages the stream entries list and handles block allocation
+ * from OzoneManager.
+ */
+public class BlockOutputStreamEntryPool {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(BlockOutputStreamEntryPool.class);
+
+  private final List<BlockOutputStreamEntry> streamEntries;
+  private int currentStreamIndex;
+  private final OzoneManagerProtocol omClient;
+  private final OmKeyArgs keyArgs;
+  private final XceiverClientManager xceiverClientManager;
+  private final int chunkSize;
+  private final String requestID;
+  private final long streamBufferFlushSize;
+  private final long streamBufferMaxSize;
+  private final long watchTimeout;
+  private final long blockSize;
+  private final int bytesPerChecksum;
+  private final ContainerProtos.ChecksumType checksumType;
+  private final BufferPool bufferPool;
+  private OmMultipartCommitUploadPartInfo commitUploadPartInfo;
+  private final long openID;
+  private ExcludeList excludeList;
+
+  @SuppressWarnings("parameternumber")
+  public BlockOutputStreamEntryPool(OzoneManagerProtocol omClient,
+  int chunkSize, String requestId, HddsProtos.ReplicationFactor factor,
+  HddsProtos.ReplicationType type, long bufferFlushSize, long 
bufferMaxSize,
+  long size, long watchTimeout, ContainerProtos.ChecksumType checksumType,
+  int bytesPerChecksum, String uploadID, int partNumber,
+  boolean isMultipart, OmKeyInfo info,
+  XceiverClientManager xceiverClientManager, long openID) {
+streamEntries = new ArrayList<>();
+currentStreamIndex = 0;
+this.omClient = omClient;
+this.keyArgs = new OmKeyArgs.Builder().setVolumeName(info.getVolumeName())
+.setBucketName(info.getBucketName()).setKeyName(info.getKeyName())
+.setType(type).setFactor(factor).setDataSize(info.getDataSize())
+.setIsMultipartKey(isMultipart).setMultipartUploadID(uploadID)
+.setMultipartUploadPartNumber(partNumber).build();
+this.xceiverClientManager = xceiverClientManager;
+this.chunkSize = chunkSize;
+this.requestID = requestId;
+this.streamBufferFlushSize = bufferFlushSize;
+this.streamBufferMaxSize = bufferMaxSize;
+this.blockSize = size;
+this.watchTimeout = watchTimeout;
+this.bytesPerChecksum = bytesPerChecksum;
+this.checksumType = checksumType;
+this.openID = openID;
+this.excludeList = new ExcludeList();
+
+Preconditions.checkState(chunkSize > 0);
+Preconditions.checkState(streamBufferFlushSize > 0);
+Preconditions.checkState(streamBufferMaxSize > 0);
+Preconditions.checkState(blockSize > 0);
+Preconditions.checkState(streamBufferFlushSize % chunkSize == 0);
+Preconditions.checkState(streamBufferMaxSize % streamBufferFlushSize == 0);
+Preconditions.checkState(blockSize % streamBufferMaxSize == 0);
+this.bufferPool =
+new BufferPool(chunkSize, (int) streamBufferMaxSize / chunkSize);
+  }
+
+  public 

[GitHub] [hadoop] jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to validate the response from server in the Read path.

2019-05-07 Thread GitBox
jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to 
validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793#discussion_r281678783
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
 ##
 @@ -69,7 +71,8 @@
  * The underlying RPC mechanism can be chosen via the constructor.
  */
 public final class XceiverClientRatis extends XceiverClientSpi {
-  static final Logger LOG = LoggerFactory.getLogger(XceiverClientRatis.class);
+  public static final Logger LOG =
 
 Review comment:
   ```suggestion
 private static final Logger LOG =
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to validate the response from server in the Read path.

2019-05-07 Thread GitBox
jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to 
validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793#discussion_r281678035
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
 ##
 @@ -0,0 +1,344 @@
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.ozone.client.io;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.scm.storage.BufferPool;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.ListIterator;
+
+/**
+ * This class manages the stream entries list and handles block allocation
+ * from OzoneManager.
+ */
+public class BlockOutputStreamEntryPool {
+
+  public static final Logger LOG =
 
 Review comment:
   ```suggestion
 private static final Logger LOG =
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to validate the response from server in the Read path.

2019-05-07 Thread GitBox
jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to 
validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793#discussion_r281674267
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -101,9 +103,8 @@ public XceiverClientGrpc(Pipeline pipeline, Configuration 
config) {
 
   /**
* To be used when grpc token is not enabled.
-   * */
-  @Override
-  public void connect() throws Exception {
+   */
+  @Override public void connect() throws Exception {
 
 Review comment:
   What's the purpose of modifying it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to validate the response from server in the Read path.

2019-05-07 Thread GitBox
jiwq commented on a change in pull request #793: HDDS-1224. Restructure code to 
validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793#discussion_r281678483
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -83,15 +85,15 @@
* data nodes.
*
* @param pipeline - Pipeline that defines the machines.
-   * @param config -- Ozone Config
+   * @param config   -- Ozone Config
 
 Review comment:
   ```suggestion
  * @param config - Ozone Config
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on issue #726: HDDS-1424. Support multi-container robot test execution

2019-05-07 Thread GitBox
elek commented on issue #726: HDDS-1424. Support multi-container robot test 
execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-490125301
 
 
   Thanks @arp7 and @xiaoyuyao for the review. I will merge it with the typo 
fixed. 
   
   And this is just the improvement to the framework. As next steps, I would 
like to:
   
 1. Remove the intermittency from the acceptance test runs (it's easier now, 
as it's very easy to find the report for a specific test). 
 
 2. Fix ozonefs with the `hdfs dfs` command and enable the unit test.
   
 3. Enable tests for ozone + mapreduce (this should now be easy based on the 
README).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16238) Add the possibility to set SO_REUSEADDR in IPC Server Listener

2019-05-07 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834859#comment-16834859
 ] 

Wei-Chiu Chuang commented on HADOOP-16238:
--

+1 from me. Thanks Daryn for that piece of information.
Checking our internal bug report database, this message has come up many, many times.

> Add the possibility to set SO_REUSEADDR in IPC Server Listener
> -
>
> Key: HADOOP-16238
> URL: https://issues.apache.org/jira/browse/HADOOP-16238
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Minor
> Attachments: HADOOP-16238-001.patch, HADOOP-16238-002.patch, 
> HADOOP-16238-003.patch, HADOOP-16238-004.patch, HADOOP-16238-005.patch
>
>
> Currently we can't enable SO_REUSEADDR in the IPC Server. In some 
> circumstances this would be desirable; see the explanation here:
> [https://developer.ibm.com/tutorials/l-sockpit/#pitfall-3-address-in-use-error-eaddrinuse-]
> Occasionally it also causes problems in the test case 
> {{TestMiniMRClientCluster.testRestart}}:
> {noformat}
> 2019-04-04 11:21:31,896 INFO [main] service.AbstractService 
> (AbstractService.java:noteFailure(273)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService failed in state 
> STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
>  at 
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:138)
>  at 
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
>  at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.startServer(AdminService.java:178)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceStart(AdminService.java:165)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1244)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:355)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:127)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:493)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:312)
>  at 
> org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster.serviceStart(MiniMRYarnCluster.java:210)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.mapred.MiniMRYarnClusterAdapter.restart(MiniMRYarnClusterAdapter.java:73)
>  at 
> org.apache.hadoop.mapred.TestMiniMRClientCluster.testRestart(TestMiniMRClientCluster.java:114)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){noformat}
>  
> At least for testing, having this socket option enabled is beneficial. We 
> could enable this with a new property like {{ipc.server.reuseaddr}}.
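
A minimal sketch of how such a property could be wired in (the key name comes from the proposal above; the helper class, method, and parameters are illustrative assumptions, not the attached patch):
{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import org.apache.hadoop.conf.Configuration;

// Sketch only: gate SO_REUSEADDR on the IPC listener's accept socket behind
// a new configuration key, set before the socket is bound.
class ReuseAddrListenerSketch {
  ServerSocketChannel bind(Configuration conf, int port, int backlog)
      throws IOException {
    ServerSocketChannel acceptChannel = ServerSocketChannel.open();
    if (conf.getBoolean("ipc.server.reuseaddr", false)) {
      // Allows rebinding to a port whose previous socket is still in
      // TIME_WAIT, avoiding "Address already in use" on quick restarts.
      acceptChannel.socket().setReuseAddress(true);
    }
    acceptChannel.socket().bind(new InetSocketAddress(port), backlog);
    return acceptChannel;
  }
}
{code}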



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on a change in pull request #726: HDDS-1424. Support multi-container robot test execution

2019-05-07 Thread GitBox
elek commented on a change in pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r281683399
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozones3/test.sh
 ##
 @@ -0,0 +1,32 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm basic/basic.robot
 
 Review comment:
   Yes, it is. It is the most basic check of whether the compose folder is still 
usable. (The basic test checks only the availability of the web UI and does a 
freon test with 5*5*5 keys.) Maybe we can decrease the numbers to 1*1*5 (1 vol, 
1 bucket, 5 keys). If we can upload 5 keys, it should be fine.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834839#comment-16834839
 ] 

Hadoop QA commented on HADOOP-16299:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
35s{color} | {color:green} root generated 0 new + 1478 unchanged - 3 fixed = 
1478 total (was 1481) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 16s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestDiskCheckerWithDiskIo |
|   | hadoop.util.TestReadWriteDiskValidator |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16299 |
| JIRA Patch URL | 

[GitHub] [hadoop] hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-05-07 Thread GitBox
hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-490115698
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :----: | ----: | :---- | :---- |
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for branch |
   | +1 | mvninstall | 446 | trunk passed |
   | +1 | compile | 225 | trunk passed |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 899 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 124 | trunk passed |
   | 0 | spotbugs | 274 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 486 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 450 | the patch passed |
   | +1 | compile | 201 | the patch passed |
   | +1 | javac | 201 | the patch passed |
   | -0 | checkstyle | 29 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 733 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 130 | the patch passed |
   | +1 | findbugs | 468 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 157 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1291 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 6050 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.TestContainerReplication |
   
   
   | Subsystem | Report/Notes |
   | ----: | :---- |
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/749 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 0f9e9cedcd18 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 49e1292 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/13/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/13/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/13/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/13/testReport/ |
   | Max. process+thread count | 4858 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/client hadoop-hdds/common 
hadoop-ozone hadoop-ozone/client hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/13/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] noslowerdna edited a comment on issue #796: HADOOP-16294: Enable access to input options by DistCp subclasses

2019-05-07 Thread GitBox
noslowerdna edited a comment on issue #796: HADOOP-16294: Enable access to 
input options by DistCp subclasses
URL: https://github.com/apache/hadoop/pull/796#issuecomment-490095902
 
 
   > compile failing
   
   Fixed ( 
https://github.com/noslowerdna/hadoop/commit/0301c8300662a814002542e20912e01946e9307d
 ). I didn't realize that the trunk code had changed from the version I'd been 
working with. We would want subclasses to have access to the `DistCpContext` 
instead, since that's what `CopyListing#buildListing` uses now.
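
   For illustration, the kind of subclass access being requested might look like 
this (`getContext()` is a hypothetical protected accessor, not an existing 
DistCp method):
   ```java
   import java.io.IOException;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.mapreduce.Job;
   import org.apache.hadoop.tools.DistCp;
   import org.apache.hadoop.tools.DistCpContext;
   import org.apache.hadoop.tools.DistCpOptions;

   // Hypothetical subclass that inspects the resolved input options through
   // the DistCpContext before the copy listing is built.
   public class AuditingDistCp extends DistCp {
     public AuditingDistCp(Configuration conf, DistCpOptions options)
         throws Exception {
       super(conf, options);
     }

     @Override
     protected Path createInputFileListing(Job job) throws IOException {
       DistCpContext context = getContext(); // assumed protected accessor
       System.out.println("Copying source paths: " + context.getSourcePaths());
       return super.createInputFileListing(job);
     }
   }
   ```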


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] noslowerdna commented on issue #796: HADOOP-16294: Enable access to input options by DistCp subclasses

2019-05-07 Thread GitBox
noslowerdna commented on issue #796: HADOOP-16294: Enable access to input 
options by DistCp subclasses
URL: https://github.com/apache/hadoop/pull/796#issuecomment-490095902
 
 
   > compile failing
   
   Fixed ( 
https://github.com/noslowerdna/hadoop/commit/0301c8300662a814002542e20912e01946e9307d
 ). I didn't realize that the trunk code had changed from the version I'd been 
working with.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15989) Synchronized at CompositeService#removeService is not required

2019-05-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834793#comment-16834793
 ] 

Hadoop QA commented on HADOOP-15989:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-15989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968051/0002-HADOOP-15989.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a9b3cc8da738 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 49e1292 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16232/testReport/ |
| Max. process+thread count | 1388 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16232/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Synchronized at 

[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834787#comment-16834787
 ] 

Hadoop QA commented on HADOOP-16287:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16287 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968047/HADOOP-16827-003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 8ed5ff0e 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon Dec 
10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 49e1292 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16231/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16231/testReport/ |
| Max. process+thread count | 1497 (vs. ulimit of 1) |
| modules | C: 

[GitHub] [hadoop] bshashikant commented on issue #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-05-07 Thread GitBox
bshashikant commented on issue #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-490073641
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant opened a new pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-05-07 Thread GitBox
bshashikant opened a new pull request #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant closed pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-05-07 Thread GitBox
bshashikant closed pull request #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16270) [JDK11] Upgrade Maven Dependency Plugin to the latest version

2019-05-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834740#comment-16834740
 ] 

Hadoop QA commented on HADOOP-16270:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
49m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-applications-catalog-docker in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16270 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968045/HADOOP-16270-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux c7e89a7b5b2c 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 49e1292 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16230/testReport/ |
| Max. process+thread count | 316 (vs. ulimit of 1) |
| modules | C: hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker
 U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16230/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |

[jira] [Commented] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834735#comment-16834735
 ] 

Akira Ajisaka commented on HADOOP-16299:


002 patch: I found a way to drop the --add-exports settings by removing the 
direct usages of the com.sun.jndi.ldap package. This patch is based on 
https://issues.apache.org/jira/secure/attachment/12951455/HADOOP-15941.1.patch. 
Thanks [~tasanuma0829] for the initial work.
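
As a minimal sketch of the direction (illustrative only, not taken from the 
actual patch; class and method names are hypothetical): the internal factory 
can be named as a plain string through the public javax.naming API instead of 
being imported directly, so javac never needs --add-exports for the 
com.sun.jndi.ldap package.

{code:java}
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapCtxSketch {
  static DirContext connect(String ldapUrl) throws NamingException {
    Hashtable<String, String> env = new Hashtable<>();
    // The factory class is referenced only as a runtime string, not an
    // import, so there is no compile-time dependency on the JDK-internal
    // package and the code compiles with both 1.8 and 11 targets.
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, ldapUrl);
    return new InitialDirContext(env);
  }
}
{code}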

> [JDK 11] Build fails without specifying -Djavac.version=11
> --
>
> Key: HADOOP-16299
> URL: https://issues.apache.org/jira/browse/HADOOP-16299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16299.001.patch, HADOOP-16299.002.patch
>
>
> {{mvn install -DskipTests}} fails on Java 11 without specifying 
> {{-Djavac.version=11}}.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Fatal error compiling: error: option 
> --add-exports not allowed with target 1.8 -> [Help 1]
> {noformat}
> HADOOP-15941 added {{--add-exports}} option when the java version is 11 but 
> the option is not allowed when the javac target version is 1.8.
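
As background on why the flag and the 1.8 target clash, here is a hypothetical 
pom.xml profile sketch (not the project's actual configuration; the profile id 
and property values are assumptions) that gates the compiler argument behind 
the javac.version property, so --add-exports is only ever combined with a 
Java 11 target:

{code:xml}
<!-- Hypothetical sketch: activate --add-exports only when building with
     -Djavac.version=11, never with the default 1.8 target. -->
<profile>
  <id>jdk11-add-exports</id>
  <activation>
    <property>
      <name>javac.version</name>
      <value>11</value>
    </property>
  </activation>
  <properties>
    <maven.compiler.release>11</maven.compiler.release>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <compilerArgs>
            <arg>--add-exports=java.naming/com.sun.jndi.ldap=ALL-UNNAMED</arg>
          </compilerArgs>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
{code}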



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-07 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16299:
---
Attachment: HADOOP-16299.002.patch

> [JDK 11] Build fails without specifying -Djavac.version=11
> --
>
> Key: HADOOP-16299
> URL: https://issues.apache.org/jira/browse/HADOOP-16299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16299.001.patch, HADOOP-16299.002.patch
>
>
> {{mvn install -DskipTests}} fails on Java 11 without specifying 
> {{-Djavac.version=11}}.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Fatal error compiling: error: option 
> --add-exports not allowed with target 1.8 -> [Help 1]
> {noformat}
> HADOOP-15941 added {{--add-exports}} option when the java version is 11 but 
> the option is not allowed when the javac target version is 1.8.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16300) ebugs automated bug checker is reporting exception issues

2019-05-07 Thread eBugs (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834723#comment-16834723
 ] 

eBugs commented on HADOOP-16300:


Thanks for considering the bug reports! Sorry for filing so many of them at 
once (this is my first time submitting reports).

> ebugs automated bug checker is reporting exception issues
> -
>
> Key: HADOOP-16300
> URL: https://issues.apache.org/jira/browse/HADOOP-16300
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: eBugs
>Priority: Major
>
> "ebugs-in-cloud-systems" is reporting issues related to exception handling
> Some of them may be be correct, some of them appear to have misunderstood 
> issues.
> For reference
> * we aren't going to change the signature of public APIs, so new exception 
> classes will not be added. anything which suggests that will be WONTFIX.
> * we probably aren't going to change code which throws unchecked exceptions 
> to checked ones unless there is a good reason.
> * we can and do tighten the exceptions thrown in failures (e.g. replace an 
> IOException with an InterruptedIOException. Patches welcome there, with tests.
> * making sure we don't lose the stack traces of inner causes would be nice 
> too.
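
For the last two bullets, a minimal sketch (illustrative only, not taken from 
the Hadoop codebase; the class and method are hypothetical) of tightening a 
failure into the more specific InterruptedIOException without losing the inner 
cause:

{code:java}
import java.io.IOException;
import java.io.InterruptedIOException;

public class TightenExceptionSketch {
  public static void waitForResource(Object lock) throws IOException {
    try {
      synchronized (lock) {
        lock.wait(1000);
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // restore the interrupt status
      // Throw the tighter subtype of IOException so callers can distinguish
      // an interruption from a generic I/O failure.
      InterruptedIOException ioe =
          new InterruptedIOException("interrupted while waiting");
      ioe.initCause(e); // preserve the stack trace of the inner cause
      throw ioe;
    }
  }
}
{code}

Because InterruptedIOException is a subclass of IOException, callers that 
declare throws IOException keep compiling unchanged, which is why this kind 
of tightening does not break public API signatures.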



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


