[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-05 Thread GitBox
bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make 
ozone fs shell command work with OM HA service ids  
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321585352
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -137,12 +137,22 @@
   private Text dtService;
   private final boolean topologyAwareReadEnabled;
 
+  /**
+   * Creates RpcClient instance with the given configuration.
+   * @param conf Configuration
+   * @throws IOException
+   */
+  public RpcClient(Configuration conf) throws IOException {
 
 Review comment:
   Can we mark this @VisibleForTesting? Alternatively, we could remove this 
method and use the other overloaded constructor.
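
   For illustration, a minimal sketch of what this could look like (assuming 
Guava's `@VisibleForTesting`; the delegation to a two-argument overload is 
hypothetical):
   
   ```java
   import com.google.common.annotations.VisibleForTesting;
   
   /**
    * Creates RpcClient instance with the given configuration.
    * Visible only for tests; production callers should use the overloaded
    * constructor instead.
    */
   @VisibleForTesting
   public RpcClient(Configuration conf) throws IOException {
     this(conf, null); // hypothetical overload taking (conf, omServiceId)
   }
   ```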


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-05 Thread GitBox
bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make 
ozone fs shell command work with OM HA service ids  
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321585352
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -137,12 +137,22 @@
   private Text dtService;
   private final boolean topologyAwareReadEnabled;
 
+  /**
+   * Creates RpcClient instance with the given configuration.
+   * @param conf Configuration
+   * @throws IOException
+   */
+  public RpcClient(Configuration conf) throws IOException {
 
 Review comment:
   Can we mark this @VisibleForTesting?





[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-09-05 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923948#comment-16923948
 ] 

Hadoop QA commented on HADOOP-15565:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 57s{color} 
| {color:red} root generated 15 new + 1457 unchanged - 15 fixed = 1472 total 
(was 1472) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
19s{color} | {color:green} root: The patch generated 0 new + 346 unchanged - 3 
fixed = 346 total (was 349) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
7s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 
48s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}206m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-15565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979600/HADOOP-15565.0008.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 181e3c92e83a 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / acbea8d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| javac | 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-05 Thread GitBox
bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make 
ozone fs shell command work with OM HA service ids  
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321585214
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
 ##
 @@ -136,6 +136,31 @@ public static OzoneClient getRpcClient(String omHost, 
Integer omRpcPort,
 return getRpcClient(config);
   }
 
+  /**
+   * Returns an OzoneClient which will use RPC protocol.
+   *
+   * @param omServiceId
+   *Service ID of OzoneManager HA cluster.
+   *
+   * @param config
+   *Configuration to be used for OzoneClient creation
+   *
+   * @return OzoneClient
+   *
+   * @throws IOException
+   */
+  public static OzoneClient getRpcClient(String omServiceId,
 
 Review comment:
   One suggestion: could we have a single getRpcClient method instead of 
multiple methods? I think we can call this method in all cases, since in the 
non-HA case the configuration is already set with the OM address.
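
   A rough sketch of that single entry point (hypothetical; it reuses the 
existing `getRpcClient(Configuration)` overload and the `OZONE_OM_ADDRESS_KEY` 
constant that appear elsewhere in this PR):
   
   ```java
   // One getRpcClient serving both HA and non-HA callers: in the non-HA
   // case, omServiceIdOrHost is simply the OM host/address that is already
   // configured.
   public static OzoneClient getRpcClient(String omServiceIdOrHost,
       Configuration config) throws IOException {
     Preconditions.checkNotNull(omServiceIdOrHost);
     Preconditions.checkNotNull(config);
     OzoneConfiguration conf = new OzoneConfiguration(config);
     conf.set(OZONE_OM_ADDRESS_KEY, omServiceIdOrHost);
     return getRpcClient(conf);
   }
   ```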





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-05 Thread GitBox
bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make 
ozone fs shell command work with OM HA service ids  
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321583946
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -131,6 +142,13 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 // If port number is not specified, read it from config
 omPort = OmUtils.getOmRpcPort(conf);
   }
+} else if (OmUtils.isServiceIdsDefined(conf)) {
+  // When host name or service id is given, and ozone.om.service.ids is
 
 Review comment:
   Reaching this point means the hostname or service id was not provided in 
the URL, right?





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-05 Thread GitBox
bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make 
ozone fs shell command work with OM HA service ids  
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321583418
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -131,6 +142,13 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 // If port number is not specified, read it from config
 omPort = OmUtils.getOmRpcPort(conf);
   }
+} else if (OmUtils.isServiceIdsDefined(conf)) {
 
 Review comment:
   The conf passed in here might not be an OzoneConfiguration object. To pick 
up the config values from ozone-site.xml, we need to wrap conf in an 
OzoneConfiguration:
   
   `OmUtils.isServiceIdsDefined(new OzoneConfiguration(conf))`
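
   For illustration, a minimal sketch of that conversion inside initialize() 
(placement and variable names are illustrative):
   
   ```java
   // Wrap the incoming Configuration so the ozone-site.xml resources are
   // loaded; a plain Configuration would not see ozone.om.service.ids.
   OzoneConfiguration ozoneConf = new OzoneConfiguration(conf);
   if (OmUtils.isServiceIdsDefined(ozoneConf)) {
     // ... resolve the OM HA service id from ozoneConf ...
   }
   ```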





[jira] [Comment Edited] (HADOOP-16551) The changelog*.md seems not generated when create-release

2019-09-05 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923901#comment-16923901
 ] 

Zhankun Tang edited comment on HADOOP-16551 at 9/6/19 3:30 AM:
---

Am I missing something before running this script? It seems not. I'm following 
[https://cwiki.apache.org/confluence/display/HADOOP2/HowToRelease]

Could you please help with this? [~aw]


was (Author: tangzhankun):
Am I missing something before running this script? It seems not. I'm following 
[https://cwiki.apache.org/confluence/display/HADOOP2/HowToRelease]

Could you please help with this? @Allen Wittenauer

> The changelog*.md seems not generated when create-release
> -
>
> Key: HADOOP-16551
> URL: https://issues.apache.org/jira/browse/HADOOP-16551
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Zhankun Tang
>Priority: Blocker
>
> Hi,
> When creating the Hadoop 3.1.3 release with the "create-release" script, the 
> mvn site step succeeded, but the script then complained and failed:
> {code:java}
> dev-support/bin/create-release --asfrelease --docker --dockercache{code}
> {code:java}
> $ cd /build/source
> $ mv /build/source/target/hadoop-site-3.1.3.tar.gz 
> /build/source/target/artifacts/hadoop-3.1.3-site.tar.gz
> $ cp -p 
> /build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md
>  /build/source/target/artifacts/CHANGES.md
> cp: cannot stat 
> '/build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md':
>  No such file or directory
> {code}
> And there's no 3.1.3 release site markdown folder.
> {code:java}
> [ztang@release-vm hadoop]$ ls 
> hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3
> ls: cannot access 
> hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3: No such 
> file or directory
> {code}
> I've checked HADOOP-14671 but have no idea why this changelog is not 
> generated.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16551) The changelog*.md seems not generated when create-release

2019-09-05 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923901#comment-16923901
 ] 

Zhankun Tang commented on HADOOP-16551:
---

Am I missing something before running this script? I'm following 
[https://cwiki.apache.org/confluence/display/HADOOP2/HowToRelease]


Could you please help with this? @Allen Wittenauer

> The changelog*.md seems not generated when create-release
> -
>
> Key: HADOOP-16551
> URL: https://issues.apache.org/jira/browse/HADOOP-16551
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Zhankun Tang
>Priority: Blocker
>
> Hi,
> When creating the Hadoop 3.1.3 release with the "create-release" script, the 
> mvn site step succeeded, but the script then complained and failed:
> {code:java}
> dev-support/bin/create-release --asfrelease --docker --dockercache{code}
> {code:java}
> $ cd /build/source
> $ mv /build/source/target/hadoop-site-3.1.3.tar.gz 
> /build/source/target/artifacts/hadoop-3.1.3-site.tar.gz
> $ cp -p 
> /build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md
>  /build/source/target/artifacts/CHANGES.md
> cp: cannot stat 
> '/build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md':
>  No such file or directory
> {code}
> And there's no 3.1.3 release site markdown folder.
> {code:java}
> [ztang@release-vm hadoop]$ ls 
> hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3
> ls: cannot access 
> hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3: No such 
> file or directory
> {code}
> I've checked HADOOP-14671 but have no idea why this changelog is not 
> generated.






[jira] [Comment Edited] (HADOOP-16551) The changelog*.md seems not generated when create-release

2019-09-05 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923901#comment-16923901
 ] 

Zhankun Tang edited comment on HADOOP-16551 at 9/6/19 3:27 AM:
---

Am I missing something before running this script? It seems not. I'm following 
[https://cwiki.apache.org/confluence/display/HADOOP2/HowToRelease]

Could you please help with this? @Allen Wittenauer


was (Author: tangzhankun):
Am I missing something before running this script? I'm following 
[https://cwiki.apache.org/confluence/display/HADOOP2/HowToRelease]


Could you please help with this? @Allen Wittenauer

> The changelog*.md seems not generated when create-release
> -
>
> Key: HADOOP-16551
> URL: https://issues.apache.org/jira/browse/HADOOP-16551
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Zhankun Tang
>Priority: Blocker
>
> Hi,
> When creating the Hadoop 3.1.3 release with the "create-release" script, the 
> mvn site step succeeded, but the script then complained and failed:
> {code:java}
> dev-support/bin/create-release --asfrelease --docker --dockercache{code}
> {code:java}
> $ cd /build/source
> $ mv /build/source/target/hadoop-site-3.1.3.tar.gz 
> /build/source/target/artifacts/hadoop-3.1.3-site.tar.gz
> $ cp -p 
> /build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md
>  /build/source/target/artifacts/CHANGES.md
> cp: cannot stat 
> '/build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md':
>  No such file or directory
> {code}
> And there's no 3.1.3 release site markdown folder.
> {code:java}
> [ztang@release-vm hadoop]$ ls 
> hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3
> ls: cannot access 
> hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3: No such 
> file or directory
> {code}
> I've checked HADOOP-14671 but have no idea why this changelog is not 
> generated.






[jira] [Created] (HADOOP-16551) The changelog*.md seems not generated when create-release

2019-09-05 Thread Zhankun Tang (Jira)
Zhankun Tang created HADOOP-16551:
-

 Summary: The changelog*.md seems not generated when create-release
 Key: HADOOP-16551
 URL: https://issues.apache.org/jira/browse/HADOOP-16551
 Project: Hadoop Common
  Issue Type: Task
Reporter: Zhankun Tang


Hi,
When creating the Hadoop 3.1.3 release with the "create-release" script, the 
mvn site step succeeded, but the script then complained and failed:

{code:java}
dev-support/bin/create-release --asfrelease --docker --dockercache{code}
{code:java}
$ cd /build/source
$ mv /build/source/target/hadoop-site-3.1.3.tar.gz 
/build/source/target/artifacts/hadoop-3.1.3-site.tar.gz
$ cp -p 
/build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md
 /build/source/target/artifacts/CHANGES.md
cp: cannot stat 
'/build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md':
 No such file or directory
{code}

And there's no 3.1.3 release site markdown folder.
{code:java}
[ztang@release-vm hadoop]$ ls 
hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3
ls: cannot access 
hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3: No such 
file or directory

{code}
I've checked HADOOP-14671 but have no idea why this changelog is not 
generated.






[GitHub] [hadoop] dineshchitlangia commented on issue #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-05 Thread GitBox
dineshchitlangia commented on issue #1386: HDDS-2015. Encrypt/decrypt key using 
symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#issuecomment-528692206
 
 
   Finally, all checks are fine! The integration test failures are unrelated as 
mentioned previously.





[GitHub] [hadoop] hadoop-yetus commented on issue #1409: HDDS-2087. Remove the hard coded config key in ChunkManager

2019-09-05 Thread GitBox
hadoop-yetus commented on issue #1409: HDDS-2087. Remove the hard coded config 
key in ChunkManager
URL: https://github.com/apache/hadoop/pull/1409#issuecomment-528688453
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 574 | trunk passed |
   | +1 | compile | 380 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 860 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 417 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 610 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 541 | the patch passed |
   | +1 | compile | 395 | the patch passed |
   | +1 | javac | 395 | the patch passed |
   | +1 | checkstyle | 93 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 693 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 688 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 293 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1816 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 7753 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1409 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4316a09b4558 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / acbea8d |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/1/testReport/ |
   | Max. process+thread count | 5288 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-05 Thread Tsuyoshi Ozawa (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923879#comment-16923879
 ] 

Tsuyoshi Ozawa commented on HADOOP-13363:
-

[~aajisaka] If we upgrade the protobuf version, is any Yetus-side update 
needed to build Hadoop correctly?

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Comment Edited] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-05 Thread Tsuyoshi Ozawa (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923878#comment-16923878
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-13363 at 9/6/19 2:41 AM:
-

[~vinayakumarb]

> Have you verified this already?

No, I have not. I'm sorry.

[~anu] About the build failure: if I understand correctly, we need to update 
the Dockerfile (or the build environment) to include protobuf 3.2.0 too.


was (Author: ozawa):
[~vinayakumarb]

> Have you verified this already?

Sorry, I have no time to do so currently.

[~anu] About the build failure: if I understand correctly, we need to update 
the Dockerfile (or the build environment) to include protobuf 3.2.0 too.

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-05 Thread Tsuyoshi Ozawa (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923878#comment-16923878
 ] 

Tsuyoshi Ozawa commented on HADOOP-13363:
-

[~vinayakumarb]

> Have you verified this already?

Sorry, I have no time to do so currently.

[~anu] About the build failure: if I understand correctly, we need to update 
the Dockerfile (or the build environment) to include protobuf 3.2.0 too.

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-09-05 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HADOOP-15565:
-
Attachment: HADOOP-15565.0008.patch

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, 
> HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, 
> HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, HADOOP-15565.0007.patch, 
> HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem that caches all its child 
> filesystems; the child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem itself is still cached by FileSystem.CACHE, so 
> there won't be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, and the other 
> instances (the child filesystems) are cached in the inner cache.






[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-09-05 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923876#comment-16923876
 ] 

Jinglun commented on HADOOP-15565:
--

Hi [~xkrogen], I did a test on the diff 0 issue. When I wget the v006 link, the 
patch downloaded is actually v007. There must be a bug in Jira. I'll re-upload 
v006 as v006.bak.

Thanks for your advice, very helpful! Uploading patch 008, which changes the 
comparison to Objects.equals.
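
For reference, a generic example of the null-safe comparison being adopted 
(not the actual patch hunk):

{code:java}
import java.net.URI;
import java.util.Objects;

public class ObjectsEqualsExample {
  public static void main(String[] args) {
    URI a = URI.create("viewfs://cluster/");
    URI b = null;
    // Objects.equals is null-safe: it returns true if both sides are null
    // and false if exactly one is, with no NullPointerException thrown.
    System.out.println(Objects.equals(a, b)); // false, no NPE
  }
}
{code}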

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, 
> HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, 
> HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, HADOOP-15565.0007.patch, 
> HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem that caches all its child 
> filesystems; the child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem itself is still cached by FileSystem.CACHE, so 
> there won't be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, and the other 
> instances (the child filesystems) are cached in the inner cache.






[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-09-05 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HADOOP-15565:
-
Attachment: HADOOP-15565.0006.bak

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, 
> HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, 
> HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, HADOOP-15565.0007.patch, 
> HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem that caches all its child 
> filesystems; the child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem itself is still cached by FileSystem.CACHE, so 
> there won't be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, and the other 
> instances (the child filesystems) are cached in the inner cache.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-05 Thread Sunil Govindan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923856#comment-16923856
 ] 

Sunil Govindan commented on HADOOP-13363:
-

[https://github.com/apache/hadoop/pull/1408/files]

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[GitHub] [hadoop] vivekratnavel commented on issue #1409: HDDS-2087. Remove the hard coded config key in ChunkManager

2019-09-05 Thread GitBox
vivekratnavel commented on issue #1409: HDDS-2087. Remove the hard coded config 
key in ChunkManager
URL: https://github.com/apache/hadoop/pull/1409#issuecomment-528663174
 
 
   @anuengineer @bharatviswa504 @elek @swagle Please review





[GitHub] [hadoop] vivekratnavel commented on issue #1409: HDDS-2087. Remove the hard coded config key in ChunkManager

2019-09-05 Thread GitBox
vivekratnavel commented on issue #1409: HDDS-2087. Remove the hard coded config 
key in ChunkManager
URL: https://github.com/apache/hadoop/pull/1409#issuecomment-528663110
 
 
   /label ozone





[GitHub] [hadoop] vivekratnavel opened a new pull request #1409: HDDS-2087. Remove the hard coded config key in ChunkManager

2019-09-05 Thread GitBox
vivekratnavel opened a new pull request #1409: HDDS-2087. Remove the hard coded 
config key in ChunkManager
URL: https://github.com/apache/hadoop/pull/1409
 
 
   We have a hard-coded config key in ChunkManagerFactory.java:
   
   ```java
   boolean scrubber = config.getBoolean(
       "hdds.containerscrub.enabled",
       false);
   ```
   
   This patch fixes the hard-coded config key by referring to it via a 
constant.
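
   A sketch of the intended shape (the constant names follow the usual 
HddsConfigKeys convention but are assumptions; the patch's actual names may 
differ):
   
   ```java
   // In HddsConfigKeys (hypothetical constant names):
   public static final String HDDS_CONTAINER_SCRUB_ENABLED =
       "hdds.containerscrub.enabled";
   public static final boolean HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT = false;
   
   // In ChunkManagerFactory, replacing the string literal:
   boolean scrubber = config.getBoolean(
       HddsConfigKeys.HDDS_CONTAINER_SCRUB_ENABLED,
       HddsConfigKeys.HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT);
   ```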





[GitHub] [hadoop] hadoop-yetus commented on issue #1408: HADOOP-13363. Upgrade protobuf from 2.5.0 to something newer

2019-09-05 Thread GitBox
hadoop-yetus commented on issue #1408: HADOOP-13363. Upgrade protobuf from 
2.5.0 to something newer
URL: https://github.com/apache/hadoop/pull/1408#issuecomment-528660020
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 12 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 31 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1156 | trunk passed |
   | +1 | compile | 1013 | trunk passed |
   | +1 | checkstyle | 227 | trunk passed |
   | +1 | mvnsite | 942 | trunk passed |
   | +1 | shadedclient | 1936 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 818 | trunk passed |
   | 0 | spotbugs | 27 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 28 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   | 0 | findbugs | 30 | 
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file (findbugsXml.xml) |
   | 0 | findbugs | 27 | branch/hadoop-client-modules/hadoop-client-api no 
findbugs output file (findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for patch |
   | -1 | mvninstall | 30 | hadoop-common-project in the patch failed. |
   | -1 | mvninstall | 15 | hadoop-common in the patch failed. |
   | -1 | mvninstall | 15 | hadoop-hdfs in the patch failed. |
   | -1 | mvninstall | 12 | hadoop-hdfs-client in the patch failed. |
   | -1 | mvninstall | 14 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | mvninstall | 16 | hadoop-mapreduce-client-common in the patch failed. 
|
   | -1 | mvninstall | 22 | hadoop-mapreduce-client-hs in the patch failed. |
   | -1 | mvninstall | 12 | hadoop-mapreduce-client-shuffle in the patch 
failed. |
   | -1 | mvninstall | 20 | hadoop-fs2img in the patch failed. |
   | -1 | mvninstall | 12 | hadoop-yarn-api in the patch failed. |
   | -1 | mvninstall | 12 | hadoop-yarn-services-core in the patch failed. |
   | -1 | mvninstall | 13 | hadoop-yarn-client in the patch failed. |
   | -1 | mvninstall | 12 | hadoop-yarn-common in the patch failed. |
   | -1 | mvninstall | 12 | hadoop-yarn-server-applicationhistoryservice in the 
patch failed. |
   | -1 | mvninstall | 13 | hadoop-yarn-server-common in the patch failed. |
   | -1 | mvninstall | 12 | hadoop-yarn-server-nodemanager in the patch failed. 
|
   | -1 | mvninstall | 12 | hadoop-yarn-server-resourcemanager in the patch 
failed. |
   | -1 | mvninstall | 13 | hadoop-yarn-server-tests in the patch failed. |
   | -1 | compile | 33 | root in the patch failed. |
   | -1 | javac | 33 | root in the patch failed. |
   | -0 | checkstyle | 239 | root: The patch generated 10 new + 3592 unchanged 
- 2 fixed = 3602 total (was 3594) |
   | -1 | mvnsite | 26 | hadoop-common-project in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-common in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-hdfs in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-hdfs-client in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-mapreduce-client-common in the patch failed. |
   | -1 | mvnsite | 17 | hadoop-mapreduce-client-hs in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-mapreduce-client-shuffle in the patch failed. |
   | -1 | mvnsite | 16 | hadoop-fs2img in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-yarn-api in the patch failed. |
   | -1 | mvnsite | 15 | hadoop-yarn-services-core in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-yarn-client in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-yarn-common in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-yarn-server-applicationhistoryservice in the 
patch failed. |
   | -1 | mvnsite | 14 | hadoop-yarn-server-common in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-yarn-server-nodemanager in the patch failed. |
   | -1 | mvnsite | 14 | hadoop-yarn-server-resourcemanager in the patch 
failed. |
   | -1 | mvnsite | 14 | hadoop-yarn-server-tests in the patch failed. |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | xml | 23 | The patch has no ill-formed XML file. |
   | -1 | shadedclient | 55 | patch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 21 | hadoop-common-project in the patch failed. |
   | -1 | javadoc | 12 | hadoop-common in the patch failed. |
   | -1 | javadoc | 13 | hadoop-hdfs in the patch failed. |
   | -1 | javadoc | 13 | hadoop-hdfs-client in the patch failed. |
   | -1 | javadoc | 12 | hadoop-hdfs-rbf in the patch failed. |
   | 

[GitHub] [hadoop] shanyu commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-05 Thread GitBox
shanyu commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r321526544
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/ConfiguredRMFailoverProxyProvider.java
 ##
 @@ -71,15 +71,9 @@ public void init(Configuration configuration, RMProxy 
rmProxy,
 
YarnConfiguration.DEFAULT_CLIENT_FAILOVER_RETRIES_ON_SOCKET_TIMEOUTS));
   }
 
-  protected T getProxyInternal() {
 
 Review comment:
   If there is no actual change to this file, let's revert all the changes in 
it.





[GitHub] [hadoop] arp7 commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-05 Thread GitBox
arp7 commented on a change in pull request #1360: HDDS-2007. Make ozone fs 
shell command work with OM HA service ids
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321525072
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
 ##
 @@ -136,6 +136,31 @@ public static OzoneClient getRpcClient(String omHost, 
Integer omRpcPort,
 return getRpcClient(config);
   }
 
+  /**
+   * Returns an OzoneClient which will use RPC protocol.
+   *
+   * @param omServiceId
+   *Service ID of OzoneManager HA cluster.
+   *
+   * @param config
+   *Configuration to be used for OzoneClient creation
+   *
+   * @return OzoneClient
+   *
+   * @throws IOException
+   */
+  public static OzoneClient getRpcClient(String omServiceId,
+  Configuration config)
+  throws IOException {
+Preconditions.checkNotNull(omServiceId);
+Preconditions.checkNotNull(config);
+// Override ozone.om.address just in case it is used later.
+// Because if this is not overridden, the (incorrect) value from xml
+// will be used?
+config.set(OZONE_OM_ADDRESS_KEY, omServiceId);
 
 Review comment:
   I didn't understand why this is required. Let's discuss tomorrow.





[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-05 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923785#comment-16923785
 ] 

Anu Engineer commented on HADOOP-13363:
---

Rebase or incomplete patch?

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-05 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923780#comment-16923780
 ] 

Hadoop QA commented on HADOOP-13363:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-13363 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13363 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860660/HADOOP-13363.005.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16519/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Updated] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-05 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-13363:
---
Hadoop Flags:   (was: Incompatible change)
Assignee: Vinayakumar B
  Status: Patch Available  (was: Open)

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-05 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923768#comment-16923768
 ] 

Vinayakumar B commented on HADOOP-13363:


Created a PR using the approach suggested above by [~stack]: 
https://issues.apache.org/jira/browse/HADOOP-13363?focusedCommentId=15958253=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15958253
 # Created a separate module with a shaded dependency on the upgraded protobuf.
 # hadoop-common refers to this shaded dependency.
 # Updated the hadoop-maven-plugin "protoc" and "test-protoc" goals to change 
protobuf references in generated sources to the relocated classes.
 # Changed all other existing usages of protobuf to reference the relocated 
classes inside the shaded dependency.
 # Kept the existing protobuf 2.5.0 dependency, to avoid impact on downstream 
projects.

Verified that all modules compile.
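
For illustration, the relocation step in the shaded module's pom.xml would 
look roughly like this (a maven-shade-plugin sketch; the relocated package 
name here is an assumption):

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Relocate the upgraded protobuf so it cannot clash with the
                 protobuf 2.5.0 still exposed to downstream projects. -->
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hadoop.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}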

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[GitHub] [hadoop] vinayakumarb opened a new pull request #1408: HADOOP-13363. Upgrade protobuf from 2.5.0 to something newer

2019-09-05 Thread GitBox
vinayakumarb opened a new pull request #1408: HADOOP-13363. Upgrade protobuf 
from 2.5.0 to something newer
URL: https://github.com/apache/hadoop/pull/1408
 
 
   





[GitHub] [hadoop] anuengineer closed pull request #1406: [HDDS-1708] Add container scrubber metrics

2019-09-05 Thread GitBox
anuengineer closed pull request #1406: [HDDS-1708] Add container scrubber 
metrics
URL: https://github.com/apache/hadoop/pull/1406
 
 
   





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1373: HDDS-2053. Fix TestOzoneManagerRatisServer failure. Contributed by Xi…

2019-09-05 Thread GitBox
xiaoyuyao commented on a change in pull request #1373: HDDS-2053. Fix 
TestOzoneManagerRatisServer failure. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/1373#discussion_r321472474
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
 ##
 @@ -205,6 +205,7 @@ public void 
verifyRaftGroupIdGenerationWithCustomOmServiceId() throws
 .setOMServiceId(customOmServiceId)
 .build();
 // Starts a single node Ratis server
+omRatisServer.stop();
 OzoneManagerRatisServer newOmRatisServer = OzoneManagerRatisServer
 
 Review comment:
   @ChenSammi omRatisServer is started at the beginning of each test case and 
stopped after it. However, this new test case starts a different 
OzoneManagerRatisServer, newOmRatisServer, which has to be stopped explicitly 
to avoid a leak. Also, omRatisServer has to be stopped before newOmRatisServer 
is started to avoid a metric registration conflict.
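
   A minimal sketch of that lifecycle; the builder call is hidden behind an 
assumed helper, so this is an illustration, not the patch itself:
```java
omRatisServer.stop();  // unregister its metrics before starting another server
OzoneManagerRatisServer newOmRatisServer =
    buildRatisServerWithCustomServiceId();  // assumed helper for brevity
try {
  newOmRatisServer.start();
  // ... assertions on the generated Raft group id ...
} finally {
  newOmRatisServer.stop();  // stop explicitly so it does not leak across tests
}
```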


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-05 Thread GitBox
steveloughran commented on issue #1407: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#issuecomment-528551540
 
 
   Tested S3 Ireland without S3Guard; one failure in test teardown. I have 
been seeing this a lot recently and suspect it may be throttling, or just AWS 
performance.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-05 Thread GitBox
hadoop-yetus commented on issue #1407: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#issuecomment-528550341
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1033 | trunk passed |
   | +1 | compile | 975 | trunk passed |
   | +1 | checkstyle | 146 | trunk passed |
   | +1 | mvnsite | 133 | trunk passed |
   | +1 | shadedclient | 1015 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 114 | trunk passed |
   | 0 | spotbugs | 70 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 187 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 80 | the patch passed |
   | +1 | compile | 922 | the patch passed |
   | +1 | javac | 922 | the patch passed |
   | +1 | checkstyle | 142 | the patch passed |
   | +1 | mvnsite | 129 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 687 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 111 | the patch passed |
   | +1 | findbugs | 206 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 517 | hadoop-common in the patch failed. |
   | +1 | unit | 95 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 6750 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.util.TestDiskCheckerWithDiskIo |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1407 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 7b0b7fb6dfa7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 511df1e |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/1/testReport/ |
   | Max. process+thread count | 1368 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-05 Thread GitBox
anuengineer commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r321443871
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2647,4 +2648,89 @@ private void completeMultipartUpload(OzoneBucket 
bucket, String keyName,
 Assert.assertEquals(omMultipartUploadCompleteInfo.getKey(), keyName);
 Assert.assertNotNull(omMultipartUploadCompleteInfo.getHash());
   }
+
+  /**
+   * Tests GDPR encryption/decryption.
+   * 1. Create GDPR Enabled bucket.
+   * 2. Create a Key in this bucket so it gets encrypted via GDPRSymmetricKey.
+   * 3. Read key and validate the content/metadata is as expected because the
+   * readKey will decrypt using the GDPR Symmetric Key with details from 
KeyInfo
+   * Metadata.
+   * 4. To check encryption, we forcibly update KeyInfo Metadata and remove the
 
 Review comment:
   We have specific functions in RpcClient.java, one function per capability, 
since we did not want to expose a generic set/get-features mechanism to the 
end user. It might be that we have not yet exposed a way for the end user to 
set the GDPR-like flag. When you get to that patch you will see; it might 
involve adding a method signature to RpcClient, and probably some code in the 
OM to handle that call.
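
   Purely as an illustration of that one-method-per-capability style, a 
hypothetical sketch; every name below is invented and is not part of this 
patch:
```java
// Hypothetical: one explicit method per capability on RpcClient,
// rather than a generic setFeature(name, value) that exposes raw metadata.
public void setBucketGdprEnforced(String volumeName, String bucketName,
    boolean enforced) throws IOException {
  // would map onto a dedicated OM request type handled server-side,
  // instead of letting clients write arbitrary feature flags
  ozoneManagerClient.setBucketGdprEnforcement(volumeName, bucketName,
      enforced);  // invented call
}
```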
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-05 Thread GitBox
dineshchitlangia commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r321432258
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2647,4 +2648,89 @@ private void completeMultipartUpload(OzoneBucket 
bucket, String keyName,
 Assert.assertEquals(omMultipartUploadCompleteInfo.getKey(), keyName);
 Assert.assertNotNull(omMultipartUploadCompleteInfo.getHash());
   }
+
+  /**
+   * Tests GDPR encryption/decryption.
+   * 1. Create GDPR Enabled bucket.
+   * 2. Create a Key in this bucket so it gets encrypted via GDPRSymmetricKey.
+   * 3. Read key and validate the content/metadata is as expected because the
+   * readKey will decrypt using the GDPR Symmetric Key with details from 
KeyInfo
+   * Metadata.
+   * 4. To check encryption, we forcibly update KeyInfo Metadata and remove the
 
 Review comment:
   @anuengineer I don't think I have understood the problem statement here 
yet. Could you please help me understand?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-05 Thread GitBox
dineshchitlangia commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r321430004
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -601,6 +605,16 @@ public OzoneOutputStream createKey(
 HddsClientUtils.verifyResourceName(volumeName, bucketName);
 HddsClientUtils.checkNotNull(keyName, type, factor);
 String requestId = UUID.randomUUID().toString();
+
+if(Boolean.valueOf(metadata.get(OzoneConsts.GDPR_FLAG))){
+  try{
+GDPRSymmetricKey gKey = new GDPRSymmetricKey();
+metadata.putAll(gKey.getKeyDetails());
+  }catch (Exception e) {
+throw new IOException(e);
 
 Review comment:
   @anuengineer thanks for elaborating, @ajayydv thanks for the original 
suggestion.
   I have added an ERROR message before throwing the exception.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-05 Thread GitBox
dineshchitlangia commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r321428987
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -1099,6 +1126,23 @@ private OzoneOutputStream 
createOutputStream(OpenKeySession openKey,
   decrypted.getMaterial(), feInfo.getIV());
   return new OzoneOutputStream(cryptoOut);
 } else {
+  try{
+GDPRSymmetricKey gk;
+Map openKeyMetadata =
+openKey.getKeyInfo().getMetadata();
+if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
+  gk = new GDPRSymmetricKey(
+  openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
+  openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
+  );
+  gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
 
 Review comment:
   @anuengineer Agreed.
   1. Made the default 128 bits (a 16-character secret) so that users are not 
forced to install the JCE policy jars; see the sketch after this list.
   2. Added a log message with more details in case it hits the 
InvalidKeyException.
   3. We have HDDS-2059 logged, which would allow users to specify a random 
secret, so we can tackle the documentation aspect during that work.
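
   A standalone sketch of why 16 characters matter (illustration only, not 
Ozone code): a 16-byte secret yields AES-128, which initializes on a stock 
JRE, while a 32-byte secret needs the Unlimited Strength policy files.
```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class Aes128Default {
  public static void main(String[] args) throws Exception {
    String secret = "0123456789abcdef";   // 16 chars -> 128-bit AES key
    SecretKeySpec key = new SecretKeySpec(
        secret.getBytes(StandardCharsets.UTF_8), "AES");
    Cipher cipher = Cipher.getInstance("AES");
    // With a 32-char secret this init throws InvalidKeyException
    // ("Illegal key size") on a JRE without the unlimited-strength policy.
    cipher.init(Cipher.ENCRYPT_MODE, key);
    System.out.println("Initialized " + cipher.getAlgorithm() + " cipher");
  }
}
```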


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-05 Thread GitBox
steveloughran opened a new pull request #1407: HADOOP-16490. Improve S3Guard 
handling of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407
 
 
   This patch avoids issuing any HEAD path request when creating a file with 
overwrite=true, so 404s will not end up in the S3 load balancers unless 
someone calls getFileStatus/exists in their own code.
   
   
   A special S3Guard FNFE retry policy, independently configurable from the 
other retry policies:
   * uses a setting with exponential backoff
   * new config names
   * copy raises a RemoteFileChangedException which is *not* caught in 
rename() and downgraded to false; thus, when a rename is unrecoverable, that 
fact is propagated (see the sketch after this list)
   * tests for this
   * more logging at debug in the change policies, covering the policy type 
and whether options are set or unset; currently, working out the policy means 
looking for the absence of messages, not their presence. The extra logging 
makes the log more verbose but will aid in debugging these problems.
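
   A sketch of the rename propagation change; this is a simplified shape with 
an assumed internal helper, not the actual S3AFileSystem code:
```java
// Sketch: the copy phase of rename() no longer downgrades a
// RemoteFileChangedException to "return false".
private boolean innerRename(Path src, Path dst) throws IOException {
  try {
    copyFile(src, dst);            // may throw RemoteFileChangedException
    return true;
  } catch (FileNotFoundException e) {
    return false;                  // still the recoverable "no such source"
  }
  // RemoteFileChangedException is deliberately NOT caught any more:
  // an unrecoverable rename now propagates to the caller.
}
```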
   
   
   Also: tests turning auth mode on/off have to handle the auth state being 
set through an authoritative path rather than a single flag. This caught me 
out: the first tests I saw failing with it were the ITestS3ARemoteFileChanged 
rename ones, and I assumed it was my new code. It was actually due to me 
setting an auth path last week.
   
   
   Change-Id: I7bb468aca0f4019537d82bc083f0a9887eaa282b


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-05 Thread GitBox
steveloughran commented on issue #1229: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1229#issuecomment-528493922
 
 
   I closed the wrong PR!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1359: HADOOP-16430. S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-05 Thread GitBox
steveloughran commented on issue #1359: HADOOP-16430. S3AFilesystem.delete to 
incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#issuecomment-528494093
 
 
   This is merged. Thanks


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #1359: HADOOP-16430. S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-05 Thread GitBox
steveloughran closed pull request #1359: HADOOP-16430. S3AFilesystem.delete to 
incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hgadre opened a new pull request #1406: [HDDS-1708] Add container scrubber metrics

2019-09-05 Thread GitBox
hgadre opened a new pull request #1406: [HDDS-1708] Add container scrubber 
metrics
URL: https://github.com/apache/hadoop/pull/1406
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-05 Thread GitBox
anuengineer commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r321375262
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2647,4 +2648,89 @@ private void completeMultipartUpload(OzoneBucket 
bucket, String keyName,
 Assert.assertEquals(omMultipartUploadCompleteInfo.getKey(), keyName);
 Assert.assertNotNull(omMultipartUploadCompleteInfo.getHash());
   }
+
+  /**
+   * Tests GDPR encryption/decryption.
+   * 1. Create GDPR Enabled bucket.
+   * 2. Create a Key in this bucket so it gets encrypted via GDPRSymmetricKey.
+   * 3. Read key and validate the content/metadata is as expected because the
+   * readKey will decrypt using the GDPR Symmetric Key with details from 
KeyInfo
+   * Metadata.
+   * 4. To check encryption, we forcibly update KeyInfo Metadata and remove the
 
 Review comment:
   @ajayydv Yes, we might need to add some interface for this. Good catch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-05 Thread GitBox
anuengineer commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r321374583
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -601,6 +605,16 @@ public OzoneOutputStream createKey(
 HddsClientUtils.verifyResourceName(volumeName, bucketName);
 HddsClientUtils.checkNotNull(keyName, type, factor);
 String requestId = UUID.randomUUID().toString();
+
+if(Boolean.valueOf(metadata.get(OzoneConsts.GDPR_FLAG))){
 
 Review comment:
   @ajayydv Yes, it can be done, but then the cost of this function would be 
borne by the data node. Today the EC of Hadoop/Ozone pushes that work to the 
client, so the data nodes don't see this cost. Also, today we have no 
mechanism to deliver the key to the data node; that means even if we lose an 
HDD, the person who finds it cannot decode the data, since the data node 
never sees the key.
   
   > From security side that will be more secure as we will not be sharing it 
over the wire on client side.
   
   Completely agree; we will need to make sure that the RPC is secure on the 
wire when we send this key. But remember, GDPR is not about security; it is 
about forgetting the key when we delete a block, so that we can write and 
sign a document saying that the file has been deleted. Since it is not a 
security feature, even if the key is leaked it is OK; all we are saying, or 
promising, is that via Ozone Manager you cannot get to the key. If someone 
has made a copy of that file, it is a problem, but one that we do not know 
of.
   
   @dineshchitlangia has attached a design document here, where he discusses 
some of these issues: https://issues.apache.org/jira/browse/HDDS-2012
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-05 Thread GitBox
smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work 
with OM HA service ids
URL: https://github.com/apache/hadoop/pull/1360#issuecomment-528458580
 
 
   /label ozone


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-05 Thread GitBox
anuengineer commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r321372069
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -601,6 +605,16 @@ public OzoneOutputStream createKey(
 HddsClientUtils.verifyResourceName(volumeName, bucketName);
 HddsClientUtils.checkNotNull(keyName, type, factor);
 String requestId = UUID.randomUUID().toString();
+
+if(Boolean.valueOf(metadata.get(OzoneConsts.GDPR_FLAG))){
+  try{
+GDPRSymmetricKey gKey = new GDPRSymmetricKey();
+metadata.putAll(gKey.getKeyDetails());
+  }catch (Exception e) {
+throw new IOException(e);
 
 Review comment:
   Yes, and see my comment above. The fact that "Illegal key size" is the 
result of the Java security policy is not very obvious; if you can add that 
to your throw or log statement, it would be really good, so the user realizes 
why it is failing. More precisely, the issue is that this is client code: we 
have no control over where or on what machines it runs. It is quite possible 
that when the Ozone cluster was set up the admins installed the right policy, 
but some client machine may not have it. If the key was written by a client 
with a 256-bit key length, then the new client has no choice but to use the 
same algorithm to decode it. Communicating that issue to the user might save 
them some pain.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-05 Thread GitBox
anuengineer commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r321370576
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -1099,6 +1126,23 @@ private OzoneOutputStream 
createOutputStream(OpenKeySession openKey,
   decrypted.getMaterial(), feInfo.getIV());
   return new OzoneOutputStream(cryptoOut);
 } else {
+  try{
+GDPRSymmetricKey gk;
+Map openKeyMetadata =
+openKey.getKeyInfo().getMetadata();
+if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
+  gk = new GDPRSymmetricKey(
+  openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
+  openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
+  );
+  gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
 
 Review comment:
   1. Should we make the key length 128 bits by default instead of 256? That 
way I don't need the Unlimited Strength policy.
   2. When we get this error, say an illegal key length, do you want to print 
out this extra information by looking at the key length? That is, if I have 
set it to 32 bytes, you know it is 256 bits and can just inform the user (see 
the sketch after this list).
   3. We can add this to the documentation, including the documentation of 
this config key.
   
   That way, the initial user experience is smooth; even if someone does not 
want to download and update the Java security policy files, it will still 
work.
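
   A minimal sketch of point 2, with assumed variable names (illustration 
only, not the patch itself):
```java
// Translate the opaque "Illegal key size" failure into an actionable
// message by reporting the configured secret length and the policy hint.
int keyBits = secret.getBytes(StandardCharsets.UTF_8).length * 8;
try {
  cipher.init(Cipher.ENCRYPT_MODE, secretKey);
} catch (InvalidKeyException e) {
  throw new IOException("Cannot initialize a " + keyBits + "-bit GDPR key;"
      + " lengths above 128 bits require the JCE Unlimited Strength"
      + " policy files on this client.", e);
}
```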


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1359: HADOOP-16430. S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-05 Thread GitBox
hadoop-yetus commented on issue #1359: HADOOP-16430. S3AFilesystem.delete to 
incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#issuecomment-528453043
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 10 | https://github.com/apache/hadoop/pull/1359 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1359 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/8/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321356299
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -219,47 +221,51 @@ private void initialiseState2EventMap() {
*  |   |  | |
*  V   V  | |
* [HEALTHY]--->[STALE]--->[DEAD]
-   *| (TIMEOUT)  | (TIMEOUT)   |
-   *|| |
-   *|| |
-   *|| |
-   *|| |
-   *| (DECOMMISSION) | (DECOMMISSION)  | (DECOMMISSION)
-   *|V |
-   *+--->[DECOMMISSIONING]<+
-   * |
-   * | (DECOMMISSIONED)
-   * |
-   * V
-   *  [DECOMMISSIONED]
*
*/
 
   /**
* Initializes the lifecycle of node state machine.
*/
-  private void initializeStateMachine() {
-stateMachine.addTransition(
+  private void initializeStateMachines() {
+nodeHealthSM.addTransition(
 NodeState.HEALTHY, NodeState.STALE, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.DEAD, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.HEALTHY, NodeLifeCycleEvent.RESTORE);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.DEAD, NodeState.HEALTHY, NodeLifeCycleEvent.RESURRECT);
-stateMachine.addTransition(
-NodeState.HEALTHY, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.STALE, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DEAD, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DECOMMISSIONING, NodeState.DECOMMISSIONED,
-NodeLifeCycleEvent.DECOMMISSIONED);
 
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE, NodeOperationalState.DECOMMISSIONING,
+NodeOperationStateEvent.START_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING,
+NodeOperationalState.DECOMMISSIONED,
+NodeOperationStateEvent.COMPLETE_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONED, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE,
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationStateEvent.START_MAINTENANCE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationalState.IN_MAINTENANCE,
+NodeOperationStateEvent.ENTER_MAINTENANCE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_MAINTENANCE, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
 
 Review comment:
   Along with your consideration: do we need an edge called TIMEOUT that 
leads from IN_MAINTENANCE to IN_SERVICE, or do you plan to send a 
RETURN_TO_SERVICE event when there is a timeout? Either works; I was 
wondering whether we should capture the timeout edge in the state machine at 
all.
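
   For concreteness, the extra edges would look something like this; the 
TIMEOUT event name is an assumption and is not in the patch as reviewed:
```java
// Hypothetical: model maintenance-window expiry as its own transition,
// instead of reusing RETURN_TO_SERVICE when the monitor detects a timeout.
nodeOpStateSM.addTransition(
    NodeOperationalState.IN_MAINTENANCE, NodeOperationalState.IN_SERVICE,
    NodeOperationStateEvent.TIMEOUT);
nodeOpStateSM.addTransition(
    NodeOperationalState.ENTERING_MAINTENANCE,
    NodeOperationalState.IN_SERVICE,
    NodeOperationStateEvent.TIMEOUT);
```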


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321356555
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -426,18 +432,20 @@ public int getStaleNodeCount() {
* @return dead node count
*/
   public int getDeadNodeCount() {
-return getNodeCount(NodeState.DEAD);
+// TODO - hard coded IN_SERVICE
+return getNodeCount(
+new NodeStatus(NodeOperationalState.IN_SERVICE, NodeState.DEAD));
 
 Review comment:
   Perfect, works well. I saw that later in the code. It is fine for now. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321356653
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStatus.java
 ##
 @@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+
+import java.util.Objects;
+
+/**
+ * This class is used to capture the current status of a datanode. This
+ * includes its health (healthy, stale or dead) and its operation status (
+ * in_service, decommissioned and maintenance mode.
+ */
+public class NodeStatus {
+
+  private HddsProtos.NodeOperationalState operationalState;
+  private HddsProtos.NodeState health;
+
+  public NodeStatus(HddsProtos.NodeOperationalState operationalState,
+ HddsProtos.NodeState health) {
+this.operationalState = operationalState;
+this.health = health;
+  }
+
+  public static NodeStatus inServiceHealthy() {
+return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+HddsProtos.NodeState.HEALTHY);
 
 Review comment:
   Cool.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321355379
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -219,47 +221,51 @@ private void initialiseState2EventMap() {
*  |   |  | |
*  V   V  | |
* [HEALTHY]--->[STALE]--->[DEAD]
-   *| (TIMEOUT)  | (TIMEOUT)   |
-   *|| |
-   *|| |
-   *|| |
-   *|| |
-   *| (DECOMMISSION) | (DECOMMISSION)  | (DECOMMISSION)
-   *|V |
-   *+--->[DECOMMISSIONING]<+
-   * |
-   * | (DECOMMISSIONED)
-   * |
-   * V
-   *  [DECOMMISSIONED]
*
*/
 
   /**
* Initializes the lifecycle of node state machine.
*/
-  private void initializeStateMachine() {
-stateMachine.addTransition(
+  private void initializeStateMachines() {
+nodeHealthSM.addTransition(
 NodeState.HEALTHY, NodeState.STALE, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.DEAD, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.HEALTHY, NodeLifeCycleEvent.RESTORE);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.DEAD, NodeState.HEALTHY, NodeLifeCycleEvent.RESURRECT);
-stateMachine.addTransition(
-NodeState.HEALTHY, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.STALE, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DEAD, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DECOMMISSIONING, NodeState.DECOMMISSIONED,
-NodeLifeCycleEvent.DECOMMISSIONED);
 
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE, NodeOperationalState.DECOMMISSIONING,
+NodeOperationStateEvent.START_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING,
+NodeOperationalState.DECOMMISSIONED,
+NodeOperationStateEvent.COMPLETE_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONED, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
 
 Review comment:
   Makes sense; I also do this quite often. Let us see what sticks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-09-05 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923545#comment-16923545
 ] 

Erik Krogen commented on HADOOP-15565:
--

Thanks for the detailed explanations [~LiJinglun]! They are very helpful; it 
all makes sense now.

It looks like the v007 patch is identical to the v006 patch – did you upload 
the wrong file?
{code:java}
± diff HADOOP-15565.0006.patch HADOOP-15565.0007.patch | wc -l
   0
{code}
While I'm at it, there is one more thing I noticed: In 
{{TestChRootedFileSystem#getChildFileSystem()}}, can we use 
[{{Objects.equals}}|https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#equals-java.lang.Object-java.lang.Object-]
 instead of manually doing a null + equality check? I think it should make this 
cleaner.
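
For illustration, a standalone sketch of the null-safe comparison (not the 
actual test code):
{code:java}
import java.util.Objects;

public class NullSafeEquals {
  public static void main(String[] args) {
    String expected = null;
    String actual = "viewfs://cluster/";
    // Replaces the manual "expected != null && expected.equals(actual)"
    // pattern; Objects.equals is null-safe on both sides.
    System.out.println(Objects.equals(expected, actual)); // false
    System.out.println(Objects.equals(null, null));       // true
  }
}
{code}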

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, 
> HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, 
> HADOOP-15565.0006.patch, HADOOP-15565.0007.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the child 
> filesystems because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When the 
> ViewFileSystem is closed, we close all the child filesystems in the inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, and the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-09-05 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923545#comment-16923545
 ] 

Erik Krogen edited comment on HADOOP-15565 at 9/5/19 3:41 PM:
--

Thanks for the detailed explanations [~LiJinglun]! They are very helpful; it 
all makes sense now.

It looks like the v007 patch is identical to the v006 patch – did you upload 
the wrong file?
{code:java}
± diff HADOOP-15565.0006.patch HADOOP-15565.0007.patch | wc -l
   0
{code}
While I'm at it, there is one more thing I noticed: In 
{{TestChRootedFileSystem#getChildFileSystem()}}, can we use 
[{{Objects.equals}}|https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#equals-java.lang.Object-java.lang.Object-]
 instead of manually doing a null + equality check? I think it should make this 
cleaner.


was (Author: xkrogen):
Thanks for the detailed explanations [~LiJinglun]! They are very helpful; it 
all makes sense now.

It looks like the v007 patch is identical to the v006 patch – did you upload 
the wrong file?
{code:java}
± diff HADOOP-15565.0006.patch HADOOP-15565.0006.patch | wc -l
   0
{code}
While I'm at it, there is one more thing I noticed: In 
{{TestChRootedFileSystem#getChildFileSystem()}}, can we use 
[{{Objects.equals}}|https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#equals-java.lang.Object-java.lang.Object-]
 instead of manually doing a null + equality check? I think it should make this 
cleaner.

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, 
> HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, 
> HADOOP-15565.0006.patch, HADOOP-15565.0007.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the child 
> filesystems because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When the 
> ViewFileSystem is closed, we close all the child filesystems in the inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, and the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1401: HDDS-1561: Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove

2019-09-05 Thread GitBox
hadoop-yetus commented on issue #1401: HDDS-1561: Mark OPEN containers as 
QUASI_CLOSED as part of Ratis groupRemove
URL: https://github.com/apache/hadoop/pull/1401#issuecomment-528430441
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 568 | trunk passed |
   | +1 | compile | 378 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 865 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 416 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 614 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 39 | Maven dependency ordering for patch |
   | +1 | mvninstall | 538 | the patch passed |
   | +1 | compile | 395 | the patch passed |
   | +1 | javac | 395 | the patch passed |
   | +1 | checkstyle | 90 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 686 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | the patch passed |
   | +1 | findbugs | 632 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 297 | hadoop-hdds in the patch passed. |
   | -1 | unit | 183 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6033 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1401 |
   | JIRA Issue | HDDS-1561 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 5b06fab0ad6b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 511df1e |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/3/testReport/ |
   | Max. process+thread count | 1238 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/container-service hadoop-ozone 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-09-05 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923532#comment-16923532
 ] 

Hadoop QA commented on HADOOP-15565:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 52s{color} 
| {color:red} root generated 15 new + 1457 unchanged - 15 fixed = 1472 total 
(was 1472) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
22s{color} | {color:green} root: The patch generated 0 new + 345 unchanged - 3 
fixed = 345 total (was 348) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
41s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}217m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-15565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979558/HADOOP-15565.0007.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9c8c0fa73c46 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 172bcd8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java 

[jira] [Commented] (HADOOP-16531) Log more detail for slow RPC

2019-09-05 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923512#comment-16923512
 ] 

Erik Krogen commented on HADOOP-16531:
--

Thanks [~zhangchen]! This seems like a nice improvement and great leverage of 
the {{ProcessingDetails}} we added in HADOOP-16266. +1 from me. I'll give 
others some time to look and commit tomorrow morning PDT.
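
For reference, a hedged sketch of what the richer warning could look like once the {{ProcessingDetails}} breakdown is included; the {{details}} variable and the log format below are illustrative assumptions, not the committed patch:
{code:java}
// Sketch only: assumes the call's ProcessingDetails (HADOOP-16266) is in scope
// as `details`; its per-phase timings (queue, lock wait, lock held, response)
// would show where a slow call actually spent its time.
if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
    (processingTime > threeSigma)) {
  LOG.warn("Slow RPC : {} took {} {} to process from client {},"
      + " the processing detail is {}",
      methodName, processingTime, RpcMetrics.TIMEUNIT, call, details);
  rpcMetrics.incrSlowRpc();
}
{code}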

> Log more detail for slow RPC
> 
>
> Key: HADOOP-16531
> URL: https://issues.apache.org/jira/browse/HADOOP-16531
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HADOOP-16531.001.patch
>
>
> The current implementation only logs the processing time:
> {code:java}
> if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
> (processingTime > threeSigma)) {
>   LOG.warn("Slow RPC : {} took {} {} to process from client {}",
>   methodName, processingTime, RpcMetrics.TIMEUNIT, call);
>   rpcMetrics.incrSlowRpc();
> }
> {code}
> We need to log more details to help us locate the problem (e.g. how long it 
> takes to acquire a lock, hold the lock, or do other things).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16550) Spark config name error on the Launching Applications Using Docker Containers page

2019-09-05 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16550:

Description: 
On the "Launching Applications Using Docker Containers" page at the "Example: 
Spark" section the Spark config for configuring the environment variables for 
the application master the config prefix are wrong:
- 
spark.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
- spark.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_TYPE  

The correct ones:
- spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
- spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE

See https://spark.apache.org/docs/2.4.0/running-on-yarn.html:

{quote}
spark.yarn.appMasterEnv.[EnvironmentVariableName]
{quote}


  was:
On the "Launching Applications Using Docker Containers" page at the "Example: 
Spark" section the Spark config for configuring the environment variables for 
the application master the config prefix are wrong:
- 
spark.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
- park.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_TYPE  

The correct ones:
- spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
- spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE

See https://spark.apache.org/docs/2.4.0/running-on-yarn.html:

{quote}
spark.yarn.appMasterEnv.[EnvironmentVariableName]
{quote}



> Spark config name error on the Launching Applications Using Docker Containers 
> page
> --
>
> Key: HADOOP-16550
> URL: https://issues.apache.org/jira/browse/HADOOP-16550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.8.2, 2.8.3, 3.0.0, 3.1.0, 2.9.1, 3.0.1, 2.8.4, 
> 3.0.2, 3.1.1, 2.9.2, 3.0.3, 2.8.5, 3.1.2
>Reporter: Attila Zsolt Piros
>Priority: Major
>
> On the "Launching Applications Using Docker Containers" page at the "Example: 
> Spark" section the Spark config for configuring the environment variables for 
> the application master the config prefix are wrong:
> - 
> spark.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
> - spark.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_TYPE 
>  
> The correct ones:
> - spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
> - spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE
> See https://spark.apache.org/docs/2.4.0/running-on-yarn.html:
> {quote}
> spark.yarn.appMasterEnv.[EnvironmentVariableName]
> {quote}
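
For reference, a minimal hedged sketch of setting the corrected properties from Java via {{SparkConf}} (the Docker image name below is a placeholder, not a real image):
{code:java}
import org.apache.spark.SparkConf;

// Sketch only: the corrected (lower-case) property names in use.
SparkConf conf = new SparkConf()
    .set("spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE", "docker")
    .set("spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE",
        "example/spark-docker:latest");
{code}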



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16550) Spark config name error on the Launching Applications Using Docker Containers page

2019-09-05 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923501#comment-16923501
 ] 

Gabor Bota commented on HADOOP-16550:
-

Sure, thanks for the contribution [~attilapiros].
LGTM, +1 on the PR.

> Spark config name error on the Launching Applications Using Docker Containers 
> page
> --
>
> Key: HADOOP-16550
> URL: https://issues.apache.org/jira/browse/HADOOP-16550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.8.2, 2.8.3, 3.0.0, 3.1.0, 2.9.1, 3.0.1, 2.8.4, 
> 3.0.2, 3.1.1, 2.9.2, 3.0.3, 2.8.5, 3.1.2
>Reporter: Attila Zsolt Piros
>Priority: Major
>
> On the "Launching Applications Using Docker Containers" page at the "Example: 
> Spark" section the Spark config for configuring the environment variables for 
> the application master the config prefix are wrong:
> - 
> spark.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
> - park.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_TYPE  
> The correct ones:
> - spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
> - spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE
> See https://spark.apache.org/docs/2.4.0/running-on-yarn.html:
> {quote}
> spark.yarn.appMasterEnv.[EnvironmentVariableName]
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-05 Thread GitBox
hadoop-yetus commented on issue #1229: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1229#issuecomment-528390584
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1020 | trunk passed |
   | +1 | compile | 982 | trunk passed |
   | +1 | checkstyle | 140 | trunk passed |
   | +1 | mvnsite | 133 | trunk passed |
   | +1 | shadedclient | 991 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 112 | trunk passed |
   | 0 | spotbugs | 69 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 183 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 80 | the patch passed |
   | +1 | compile | 934 | the patch passed |
   | +1 | javac | 934 | the patch passed |
   | +1 | checkstyle | 149 | root: The patch generated 0 new + 97 unchanged - 2 
fixed = 97 total (was 99) |
   | +1 | mvnsite | 129 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 684 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 112 | the patch passed |
   | +1 | findbugs | 203 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 550 | hadoop-common in the patch passed. |
   | +1 | unit | 94 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 6724 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1229 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 9a81c091b006 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 172bcd8 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/19/testReport/ |
   | Max. process+thread count | 1399 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/19/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-05 Thread GitBox
steveloughran closed pull request #1229: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1229
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16430) S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-05 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923434#comment-16923434
 ] 

Steve Loughran commented on HADOOP-16430:
-

Committed - thanks for the review. Yes, I like a lot of the GH review mechanism, 
though I still also like the ability to do a quick summary. Sometimes I feel GH 
is more optimised for reviewing details than the overall patch. And once you 
start rebasing longer-lived patches, things get tricky.

Have a play with the VS Code GitHub integration if you haven't - you can review 
the patch with all the code locally checked out, adding comments in the IDE. 
This is very slick, especially for acting on people's suggestions.

> S3AFilesystem.delete to incrementally update s3guard with deletions
> ---
>
> Key: HADOOP-16430
> URL: https://issues.apache.org/jira/browse/HADOOP-16430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: Screenshot 2019-07-16 at 22.08.31.png
>
>
> Currently S3AFilesystem.delete() only updates S3Guard at the end of a 
> paged delete operation. This makes it slow when there are many thousands of 
> files to delete, and increases the window of vulnerability to failures.
> Preferred
> * after every bulk DELETE call is issued to S3, queue the (async) delete of 
> all entries in that request.
> * at the end of the delete, await the completion of these operations.
> * inside S3AFS, also do the delete across threads, so that different HTTPS 
> connections can be used.
> This should maximise DDB throughput against tables which aren't IO limited.
> When executed against small IOP limited tables, the parallel DDB DELETE 
> batches will trigger a lot of throttling events; we should make sure these 
> aren't going to trigger failures
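
A hedged sketch of the incremental-update pattern described above; the names {{pagesOfKeysToDelete}}, {{bulkDeleteRequestFor}}, {{deletePaths}} and {{throttledExecutor}} are illustrative assumptions, not the actual S3A code:
{code:java}
// Sketch only (method fragment): after each paged bulk DELETE to S3, queue an
// async S3Guard update for that page, then await all pending updates at the
// end of delete() so failures surface before the call returns.
List<CompletableFuture<Void>> pending = new ArrayList<>();
for (List<Path> page : pagesOfKeysToDelete) {
  s3.deleteObjects(bulkDeleteRequestFor(page));   // issue the bulk DELETE
  pending.add(CompletableFuture.runAsync(
      () -> metadataStore.deletePaths(page),      // incremental S3Guard update
      throttledExecutor));                        // bounds the DDB write rate
}
CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
{code}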



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-05 Thread GitBox
steveloughran commented on issue #1229: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1229#issuecomment-528363480
 
 
   committed to trunk -thanks for all reviews
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16430) S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923431#comment-16923431
 ] 

Hudson commented on HADOOP-16430:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17231 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17231/])
HADOOP-16430. S3AFilesystem.delete to incrementally update s3guard with 
(stevel: rev 511df1e837b19ccb9271520589452d82d50ac69d)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FutureIOSupport.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ExecutingStoreOperation.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AMetadataPersistenceException.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestPartialRenamesDeletes.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADeleteManyFiles.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DeleteOperation.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/InternalIterators.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DelayedUpdateRenameTracker.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/NullMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFailureHandling.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InternalConstants.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/select/InternalSelectConstants.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextURIBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/MultiObjectDeleteSupport.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ALocatedFileStatus.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractITestS3AMetadataStoreScale.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/CommitOperations.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/InternalConstants.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRootDir.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/OperationCallbacks.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/LocalMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestPartialDeleteFailures.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestCommitOperations.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/ProgressiveRenameTracker.java



[GitHub] [hadoop] steveloughran commented on issue #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-05 Thread GitBox
steveloughran commented on issue #1359: HADOOP-16430.S3AFilesystem.delete to 
incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#issuecomment-528356582
 
 
   Thanks for the vote, merging in. Your reviews are always valued, and when 
you feel the urge to start coding again,...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-05 Thread GitBox
steveloughran commented on a change in pull request #1359: 
HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#discussion_r321248863
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DeleteOperation.java
 ##
 @@ -207,7 +211,7 @@ public DeleteOperation(final StoreContext context,
 "page size out of range: %d", pageSize);
 this.pageSize = pageSize;
 metadataStore = context.getMetadataStore();
-executor = context.createThrottledExecutor(2);
+executor = context.createThrottledExecutor(1);
 
 Review comment:
Yeah, for now. It means that the delete and list can go in parallel, without 
having to deal with the complexity of multiple parallel deletes and failures 
within them. It's the error handling which scared me. And with the same pool of 
connections to Dynamo, you wouldn't automatically get a speed-up. As the 
Javadoc says: do more experimentation here - but do it on EC2, so that the 
answers are valid.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-05 Thread GitBox
steveloughran commented on a change in pull request #1359: 
HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#discussion_r321247382
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextURIBase.java
 ##
 @@ -418,7 +419,7 @@ public void testDeleteDirectory() throws IOException {
 
   @Test
   public void testDeleteNonExistingDirectory() throws IOException {
-String testDirName = "testFile";
+String testDirName = "testDeleteNonExistingDirectory";
 
 Review comment:
JUnit's test method name rule does this, and when you parameterize it you get 
the extended name including parameters, so you automatically get isolation. 
But you'd better make sure all those params form valid paths, and not have, 
say, : or / in them. I didn't do the fixup here as this was an emergency fixup, 
not rework.
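
For reference, a minimal hedged sketch of the JUnit 4 rule mentioned above ({{TestName}} is the standard method-name rule; the test class itself is hypothetical):
{code:java}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;

public class ExampleIsolationTest {
  // TestName exposes the running test method's name; with parameterized tests
  // the name includes the parameters, giving per-case path isolation for free.
  @Rule
  public TestName methodName = new TestName();

  @Test
  public void testDeleteNonExistingDirectory() {
    String testDirName = methodName.getMethodName();
    // ... create and delete paths under testDirName ...
  }
}
{code}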


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1375: HDDS-2048: State check during container state transition in datanode should be lock protected

2019-09-05 Thread GitBox
hadoop-yetus commented on issue #1375: HDDS-2048: State check during container 
state transition in datanode should be lock protected
URL: https://github.com/apache/hadoop/pull/1375#issuecomment-528352725
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 573 | trunk passed |
   | +1 | compile | 380 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 914 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 182 | trunk passed |
   | 0 | spotbugs | 435 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 642 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 532 | the patch passed |
   | +1 | compile | 398 | the patch passed |
   | +1 | javac | 398 | the patch passed |
   | +1 | checkstyle | 94 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 665 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 663 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 300 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1955 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 7825 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1375 |
   | JIRA Issue | HDDS-2048 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ab2948373bb3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 172bcd8 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/4/testReport/ |
   | Max. process+thread count | 4752 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 merged pull request #1326: HDDS-1898. GrpcReplicationService#download cannot replicate the container.

2019-09-05 Thread GitBox
nandakumar131 merged pull request #1326: HDDS-1898. 
GrpcReplicationService#download cannot replicate the container.
URL: https://github.com/apache/hadoop/pull/1326
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on issue #1326: HDDS-1898. GrpcReplicationService#download cannot replicate the container.

2019-09-05 Thread GitBox
nandakumar131 commented on issue #1326: HDDS-1898. 
GrpcReplicationService#download cannot replicate the container.
URL: https://github.com/apache/hadoop/pull/1326#issuecomment-528345055
 
 
   Failures are not related to this patch. Tested them locally.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13551) hook up AwsSdkMetrics to hadoop metrics

2019-09-05 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923358#comment-16923358
 ] 

Steve Loughran commented on HADOOP-13551:
-

# The way to do this is by passing a RequestMetricCollector to the S3 client 
constructor; it will be invoked before/after each call and can collect and 
publish metrics
# collected metrics are in com.amazonaws.util.AWSRequestMetrics
# this does include throttling retries as well as performance on operations 
(including time to request a signature and sign requests)
# and SDK wide state like https pool capacity

Presumably the way to deal with pool capacity is to have a value which is 
updated on every response; it will jitter a lot when there are many requests 
being made.
* Timing values for common setup/sign operations independent of data size could 
go to a quantile
* we currently only count bytes PUT, plus some of the input stream values
* NOT bytes copied
* and input stream bytes read are mapped to the wrong statistic, 
STREAM_SEEK_BYTES_READ ("stream_bytes_read",
  "Count of bytes read during seek() in stream operations")

Proposed: we collect this stuff and serve it up for apps like Impala to collect.
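
A hedged sketch of wiring in a collector; {{RequestMetricCollector}}, {{AmazonS3ClientBuilder}} and {{getAWSRequestMetrics()}} are the AWS SDK v1 pieces named above, while {{publishToHadoopMetrics}} is a hypothetical hook, not an existing method:
{code:java}
import com.amazonaws.Request;
import com.amazonaws.Response;
import com.amazonaws.metrics.RequestMetricCollector;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.util.AWSRequestMetrics;

// Sketch only: a per-request collector attached to the client at build time.
static AmazonS3 buildInstrumentedClient() {
  RequestMetricCollector collector = new RequestMetricCollector() {
    @Override
    public void collectMetrics(Request<?> request, Response<?> response) {
      AWSRequestMetrics metrics = request.getAWSRequestMetrics();
      // Hypothetical hook into S3A instrumentation: forward timing info
      // (setup/sign durations, throttle retries, etc.) to Hadoop metrics.
      publishToHadoopMetrics(metrics.getTimingInfo());
    }
  };
  return AmazonS3ClientBuilder.standard()
      .withMetricsCollector(collector)
      .build();
}
{code}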



> hook up AwsSdkMetrics to hadoop metrics
> ---
>
> Key: HADOOP-13551
> URL: https://issues.apache.org/jira/browse/HADOOP-13551
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Sean Mackrory
>Priority: Major
>
> There's an API in {{com.amazonaws.metrics.AwsSdkMetrics}} to give access to 
> the internal metrics of the AWS libraries. We might want to get at those



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1401: HDDS-1561: Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove

2019-09-05 Thread GitBox
hadoop-yetus commented on issue #1401: HDDS-1561: Mark OPEN containers as 
QUASI_CLOSED as part of Ratis groupRemove
URL: https://github.com/apache/hadoop/pull/1401#issuecomment-528333132
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 580 | trunk passed |
   | +1 | compile | 380 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 870 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 419 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 615 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | +1 | mvninstall | 554 | the patch passed |
   | +1 | compile | 386 | the patch passed |
   | +1 | javac | 386 | the patch passed |
   | -0 | checkstyle | 43 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 691 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | -1 | findbugs | 210 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 296 | hadoop-hdds in the patch passed. |
   | -1 | unit | 187 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6106 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Switch statement found in 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler.handle(SCMCommand,
 OzoneContainer, StateContext, SCMConnectionManager) where one case falls 
through to the next case  At CloseContainerCommandHandler.java:OzoneContainer, 
StateContext, SCMConnectionManager) where one case falls through to the next 
case  At CloseContainerCommandHandler.java:[lines 92-95] |
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1401 |
   | JIRA Issue | HDDS-1561 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux ece162389836 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 172bcd8 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/2/artifact/out/new-findbugs-hadoop-hdds.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/2/testReport/ |
   | Max. process+thread count | 1205 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/container-service hadoop-ozone 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-05 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923338#comment-16923338
 ] 

Hadoop QA commented on HADOOP-16549:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
3s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
25s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979537/HADOOP-16549.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 1ce096832401 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh 

[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-09-05 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HADOOP-15565:
-
Attachment: HADOOP-15565.0007.patch

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, 
> HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, 
> HADOOP-15565.0006.patch, HADOOP-15565.0007.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its children filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the children 
> filesystems because it would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the children 
> filesystems. The children filesystems are then no longer shared. When a 
> ViewFileSystem is closed we close all the children filesystems in the inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE so there won't 
> be too many FileSystem instances.
> The FileSystem.CACHE caches the ViewFileSystem instance and the other 
> instances (the children filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-09-05 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923308#comment-16923308
 ] 

Jinglun commented on HADOOP-15565:
--

Hi [~xkrogen], thanks very much for your nice review! I followed all the 
suggestions and uploaded patch 007.
{quote}Can you explain why the changes in 
{{TestViewFileSystemDelegationTokenSupport}} are necessary? Same for 
{{TestViewFileSystemDelegation}} – it seems like the old way of returning the 
created {{fs}} was cleaner?
{quote}
Good question! I changed the unit tests because, before we added the cache, all 
the children filesystems were cached in FileSystem.CACHE, so the filesystem 
instance returned by setupMockFileSystem() was exactly the child filesystem of 
viewFs. After adding the ViewFileSystem.InnerCache, viewFs's children 
filesystem instances are no longer cached in FileSystem.CACHE, so we can no 
longer set fs1 and fs2 to the FileSystem instances returned by 
setupMockFileSystem().
{quote}I also don't understand the need for changes in {{testSanity()}} – does 
the string comparison no longer work?
{quote}
About testSanity(): after changing to 
{code:java}
fs1 = (FakeFileSystem) getChildFileSystem((ViewFileSystem) viewFs, new 
URI("fs1:/"));{code}
fs1.getUri() will have a path which is set by ViewFileSystem.InnerCache.get(URI 
uri, Configuration config). So comparing the URI.toString() doesn't work any 
more, and I changed the test to compare the scheme and authority instead.
{quote}Can you describe why the changes in {{TestViewFsDefaultValue}} are 
necessary?
{quote}
Good question! I made the changes because, in 
TestViewFsDefaultValue.clusterSetupAtBegining(), there are two Configuration 
instances, *CONF* and *conf*. For the key _DFS_REPLICATION_KEY_, *CONF* is set 
to _DFS_REPLICATION_DEFAULT + 1_ while *conf* keeps the default value. Before 
we had the InnerCache, the child filesystem instance of vfs was taken from 
FileSystem.CACHE, which was constructed with *CONF*. After the InnerCache, the 
child filesystem instance is created with *conf*. In the test case 
testGetDefaultReplication(), the default replication is read from the child 
FileSystem instance: when using *CONF* it will be _DFS_REPLICATION_DEFAULT + 1_ 
and when using *conf* it will be _DFS_REPLICATION_DEFAULT_. Because 
testGetDefaultReplication() tests the default replication of the mount point 
path, I set __ to *conf* to make it work.
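
For reference, a hedged sketch of the inner-cache shape described in the issue; the class layout and method names below are illustrative, not the exact patch:
{code:java}
// Sketch only: children are created with FileSystem.newInstance() so they are
// NOT shared through FileSystem.CACHE, and are all closed with the view.
static class InnerCache {
  private final Map<String, FileSystem> map = new HashMap<>();

  synchronized FileSystem get(URI uri, Configuration config) throws IOException {
    String key = uri.getScheme() + "://" + uri.getAuthority();
    FileSystem fs = map.get(key);
    if (fs == null) {
      fs = FileSystem.newInstance(uri, config);  // private child instance
      map.put(key, fs);
    }
    return fs;
  }

  synchronized void closeAll() {
    for (FileSystem fs : map.values()) {
      IOUtils.cleanupWithLogger(null, fs);  // best-effort close of each child
    }
    map.clear();
  }
}
{code}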

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, 
> HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, 
> HADOOP-15565.0006.patch, HADOOP-15565.0007.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its children filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the children 
> filesystems because it would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the children 
> filesystems. The children filesystems are then no longer shared. When a 
> ViewFileSystem is closed we close all the children filesystems in the inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE so there won't 
> be too many FileSystem instances.
> The FileSystem.CACHE caches the ViewFileSystem instance and the other 
> instances (the children filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
sodonnel commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321204004
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
 ##
 @@ -185,7 +190,7 @@ public int getNodeCount(NodeState nodestate) {
   @Override
   public NodeState getNodeState(DatanodeDetails datanodeDetails) {
 
 Review comment:
   Yes, the 'external interface' of SCMNodeManager will need to change but I 
want to get these changes to be good internally before we push them up the 
stack.
   
   Thanks for taking the time to review this WIP. Glad to hear this is going in 
the correct direction so I will look to tidy things up and then we can consider 
the next step.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
sodonnel commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321203027
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
 ##
 @@ -151,7 +152,9 @@ private void unregisterMXBean() {
*/
   @Override
   public List getNodes(NodeState nodestate) {
-return nodeStateManager.getNodes(nodestate).stream()
+return nodeStateManager.getNodes(
+new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE, nodestate))
+.stream()
 .map(node -> (DatanodeDetails)node).collect(Collectors.toList());
   }
 
 Review comment:
Yea, I need to fix the query function. I can imagine we will need things like 
all IN_MAINT nodes (ignoring healthy, dead, etc.) or all dead nodes (ignoring 
the op state). Right now that is not possible to query until I figure out how 
to enhance the interface.
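
A hedged sketch of one possible shape for that richer query, where null means "match any"; all method names below ({{getAllNodes}}, {{getNodeStatus}}) are assumptions about a future API, not existing code:
{code:java}
// Sketch only: filter on either dimension independently.
public List<DatanodeDetails> getNodes(
    HddsProtos.NodeOperationalState opState,   // null = any operational state
    HddsProtos.NodeState health) {             // null = any health state
  return nodeStateManager.getAllNodes().stream()
      .filter(n -> opState == null
          || n.getNodeStatus().getOperationalState() == opState)
      .filter(n -> health == null
          || n.getNodeStatus().getHealth() == health)
      .map(n -> (DatanodeDetails) n)
      .collect(Collectors.toList());
}
{code}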


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
sodonnel commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321201465
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStatus.java
 ##
 @@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+
+import java.util.Objects;
+
+/**
+ * This class is used to capture the current status of a datanode. This
+ * includes its health (healthy, stale or dead) and its operational status
+ * (in_service, decommissioned and maintenance mode).
+ */
+public class NodeStatus {
+
+  private HddsProtos.NodeOperationalState operationalState;
+  private HddsProtos.NodeState health;
+
+  public NodeStatus(HddsProtos.NodeOperationalState operationalState,
+ HddsProtos.NodeState health) {
+this.operationalState = operationalState;
+this.health = health;
+  }
+
+  public static NodeStatus inServiceHealthy() {
+return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+HddsProtos.NodeState.HEALTHY);
+  }
+
+  public static NodeStatus inServiceStale() {
+return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+HddsProtos.NodeState.STALE);
+  }
+
+  public static NodeStatus inServiceDead() {
+return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+HddsProtos.NodeState.DEAD);
+  }
+
 
 Review comment:
Yes. I got tired of typing the whole new NodeStatus(...) and decided to try 
adding the static methods. It definitely makes the code cleaner, but the cross 
product worries me. At the moment it's only 5 * 3 = 15 states, but what if we 
add a third status or a couple more states? The number of helper methods would 
get out of control. We can see how it develops, I guess.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-05 Thread GitBox
hadoop-yetus commented on issue #1364: HDDS-1843. Undetectable corruption after 
restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#issuecomment-528314940
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 581 | trunk passed |
   | +1 | compile | 376 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 869 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   | 0 | spotbugs | 416 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 611 | trunk passed |
   | -0 | patch | 475 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 538 | the patch passed |
   | +1 | compile | 391 | the patch passed |
   | +1 | cc | 391 | the patch passed |
   | +1 | javac | 391 | the patch passed |
   | -0 | checkstyle | 43 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 665 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   | +1 | findbugs | 631 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 283 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2556 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 8415 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1364 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 8b805e111dba 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 172bcd8 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/8/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/8/testReport/ |
   | Max. process+thread count | 5406 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
sodonnel commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321199279
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStatus.java
 ##
 @@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+
+import java.util.Objects;
+
+/**
+ * This class is used to capture the current status of a datanode. This
+ * includes its health (healthy, stale or dead) and its operational state
+ * (in_service, decommissioned or maintenance mode).
+ */
+public class NodeStatus {
+
+  private HddsProtos.NodeOperationalState operationalState;
+  private HddsProtos.NodeState health;
+
+  public NodeStatus(HddsProtos.NodeOperationalState operationalState,
+ HddsProtos.NodeState health) {
+this.operationalState = operationalState;
+this.health = health;
+  }
+
+  public static NodeStatus inServiceHealthy() {
+return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+HddsProtos.NodeState.HEALTHY);
 
 Review comment:
   Yea, we could optimize this and always return the same object.
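
   A minimal sketch of that caching, assuming the NodeStatus fields are (or 
are made) effectively immutable; the constant name is an assumption:

```java
// Hypothetical shared constant; safe to reuse only while NodeStatus stays
// immutable.
private static final NodeStatus IN_SERVICE_HEALTHY =
    new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
        HddsProtos.NodeState.HEALTHY);

public static NodeStatus inServiceHealthy() {
  return IN_SERVICE_HEALTHY;
}
```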


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
sodonnel commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321198662
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -578,39 +587,33 @@ private void checkNodesHealth() {
 Predicate deadNodeCondition =
 (lastHbTime) -> lastHbTime < staleNodeDeadline;
 try {
-  for (NodeState state : NodeState.values()) {
-List nodes = nodeStateMap.getNodes(state);
-for (UUID id : nodes) {
-  DatanodeInfo node = nodeStateMap.getNodeInfo(id);
-  switch (state) {
-  case HEALTHY:
-// Move the node to STALE if the last heartbeat time is less than
-// configured stale-node interval.
-updateNodeState(node, staleNodeCondition, state,
-  NodeLifeCycleEvent.TIMEOUT);
-break;
-  case STALE:
-// Move the node to DEAD if the last heartbeat time is less than
-// configured dead-node interval.
-updateNodeState(node, deadNodeCondition, state,
-NodeLifeCycleEvent.TIMEOUT);
-// Restore the node if we have received heartbeat before configured
-// stale-node interval.
-updateNodeState(node, healthyNodeCondition, state,
-NodeLifeCycleEvent.RESTORE);
-break;
-  case DEAD:
-// Resurrect the node if we have received heartbeat before
-// configured stale-node interval.
-updateNodeState(node, healthyNodeCondition, state,
-NodeLifeCycleEvent.RESURRECT);
-break;
-// We don't do anything for DECOMMISSIONING and DECOMMISSIONED in
-// heartbeat processing.
-  case DECOMMISSIONING:
-  case DECOMMISSIONED:
-  default:
-  }
+  for(DatanodeInfo node : nodeStateMap.getAllDatanodeInfos()) {
+NodeState state =
+nodeStateMap.getNodeStatus(node.getUuid()).getHealth();
+switch (state) {
+case HEALTHY:
+  // Move the node to STALE if the last heartbeat time is less than
+  // configured stale-node interval.
+  updateNodeState(node, staleNodeCondition, state,
+  NodeLifeCycleEvent.TIMEOUT);
+  break;
+case STALE:
+  // Move the node to DEAD if the last heartbeat time is less than
+  // configured dead-node interval.
+  updateNodeState(node, deadNodeCondition, state,
+  NodeLifeCycleEvent.TIMEOUT);
+  // Restore the node if we have received heartbeat before configured
+  // stale-node interval.
+  updateNodeState(node, healthyNodeCondition, state,
+  NodeLifeCycleEvent.RESTORE);
+  break;
+case DEAD:
+  // Resurrect the node if we have received heartbeat before
+  // configured stale-node interval.
+  updateNodeState(node, healthyNodeCondition, state,
+  NodeLifeCycleEvent.RESURRECT);
+  break;
+default:
 }
 
 Review comment:
   This loop didn't strictly need to change for this patch, but it was 
effectively a double loop when it didn't need to be, and it was doing extra 
lookups from the NodeStateMap, so this version is cleaner to read and slightly 
more efficient too.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
sodonnel commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321197327
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -426,18 +432,20 @@ public int getStaleNodeCount() {
* @return dead node count
*/
   public int getDeadNodeCount() {
-return getNodeCount(NodeState.DEAD);
+// TODO - hard coded IN_SERVICE
+return getNodeCount(
+new NodeStatus(NodeOperationalState.IN_SERVICE, NodeState.DEAD));
 
 Review comment:
   There are a bunch of places where I have hardcoded IN_SERVICE, so once we 
get this working we will need different events for DECOM / IN_MAINT + DEAD, as 
that is an expected state rather than the error condition it would be treated 
as now.
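
   For illustration, a hedged sketch of how the count might later aggregate 
across operational states instead of hardcoding IN_SERVICE (the loop is an 
assumption, not part of this patch; getNodeCount(NodeStatus) is the method the 
patch introduces):

```java
public int getDeadNodeCount() {
  int count = 0;
  // Sum DEAD nodes over every operational state, not just IN_SERVICE.
  for (NodeOperationalState op : NodeOperationalState.values()) {
    count += getNodeCount(new NodeStatus(op, NodeState.DEAD));
  }
  return count;
}
```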


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lokeshj1703 commented on issue #1375: HDDS-2048: State check during container state transition in datanode should be lock protected

2019-09-05 Thread GitBox
lokeshj1703 commented on issue #1375: HDDS-2048: State check during container 
state transition in datanode should be lock protected
URL: https://github.com/apache/hadoop/pull/1375#issuecomment-528309534
 
 
   @nandakumar131 Thanks for reviewing the PR! I have updated the changes as 
per the offline discussion.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
sodonnel commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321192741
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -219,47 +221,51 @@ private void initialiseState2EventMap() {
*  |   |  | |
*  V   V  | |
* [HEALTHY]--->[STALE]--->[DEAD]
-   *| (TIMEOUT)  | (TIMEOUT)   |
-   *|| |
-   *|| |
-   *|| |
-   *|| |
-   *| (DECOMMISSION) | (DECOMMISSION)  | (DECOMMISSION)
-   *|V |
-   *+--->[DECOMMISSIONING]<+
-   * |
-   * | (DECOMMISSIONED)
-   * |
-   * V
-   *  [DECOMMISSIONED]
*
*/
 
   /**
* Initializes the lifecycle of node state machine.
*/
-  private void initializeStateMachine() {
-stateMachine.addTransition(
+  private void initializeStateMachines() {
+nodeHealthSM.addTransition(
 NodeState.HEALTHY, NodeState.STALE, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.DEAD, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.HEALTHY, NodeLifeCycleEvent.RESTORE);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.DEAD, NodeState.HEALTHY, NodeLifeCycleEvent.RESURRECT);
-stateMachine.addTransition(
-NodeState.HEALTHY, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.STALE, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DEAD, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DECOMMISSIONING, NodeState.DECOMMISSIONED,
-NodeLifeCycleEvent.DECOMMISSIONED);
 
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE, NodeOperationalState.DECOMMISSIONING,
+NodeOperationStateEvent.START_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING,
+NodeOperationalState.DECOMMISSIONED,
+NodeOperationStateEvent.COMPLETE_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONED, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE,
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationStateEvent.START_MAINTENANCE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationalState.IN_MAINTENANCE,
+NodeOperationStateEvent.ENTER_MAINTENANCE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_MAINTENANCE, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
 
 Review comment:
   I hadn't considered where to store that yet. It will probably live outside 
of the state machine, but I need to consider where it fits in. Perhaps in 
NodeStatus, but that would change the object from being immutable to carrying 
a mutable time.
   
   We will need some sort of decommission / maintenance mode monitor, probably 
separate from the heartbeat monitor. The decomm monitor will need to check when 
all blocks are replicated etc, so it could also keep track of the node 
maintenance timeout and hence switch the node to 'IN_SERVICE + DEAD' if it is 
dead and the timeout expires.
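
   As a rough sketch of what that monitor check could look like (every method 
and field name below is an assumption, not part of this patch):

```java
// Hypothetical periodic check inside a decommission/maintenance monitor.
private void checkMaintenanceExpiry(DatanodeInfo node, NodeStatus status) {
  if (status.getOperationalState() == NodeOperationalState.IN_MAINTENANCE
      && Time.monotonicNow() > getMaintenanceDeadline(node)) {
    // Maintenance window expired: return the node to service. If the node is
    // also DEAD it then surfaces as IN_SERVICE + DEAD, an error condition.
    nodeStateManager.returnNodeToService(node);
  }
}
```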


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hadoop] sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-05 Thread GitBox
sodonnel commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321191360
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -219,47 +221,51 @@ private void initialiseState2EventMap() {
*  |   |  | |
*  V   V  | |
* [HEALTHY]--->[STALE]--->[DEAD]
-   *| (TIMEOUT)  | (TIMEOUT)   |
-   *|| |
-   *|| |
-   *|| |
-   *|| |
-   *| (DECOMMISSION) | (DECOMMISSION)  | (DECOMMISSION)
-   *|V |
-   *+--->[DECOMMISSIONING]<+
-   * |
-   * | (DECOMMISSIONED)
-   * |
-   * V
-   *  [DECOMMISSIONED]
*
*/
 
   /**
* Initializes the lifecycle of node state machine.
*/
-  private void initializeStateMachine() {
-stateMachine.addTransition(
+  private void initializeStateMachines() {
+nodeHealthSM.addTransition(
 NodeState.HEALTHY, NodeState.STALE, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.DEAD, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.HEALTHY, NodeLifeCycleEvent.RESTORE);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.DEAD, NodeState.HEALTHY, NodeLifeCycleEvent.RESURRECT);
-stateMachine.addTransition(
-NodeState.HEALTHY, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.STALE, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DEAD, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DECOMMISSIONING, NodeState.DECOMMISSIONED,
-NodeLifeCycleEvent.DECOMMISSIONED);
 
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE, NodeOperationalState.DECOMMISSIONING,
+NodeOperationStateEvent.START_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING,
+NodeOperationalState.DECOMMISSIONED,
+NodeOperationStateEvent.COMPLETE_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONED, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
 
 Review comment:
   I have not yet considered what should happen. First stage is to get the 
states in and make sure nothing breaks, then figure out how to use them :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16550) Spark config name error on the Launching Applications Using Docker Containers page

2019-09-05 Thread Attila Zsolt Piros (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923275#comment-16923275
 ] 

Attila Zsolt Piros commented on HADOOP-16550:
-

[~gabor.bota] could you please take a look at this issue and the PR?

> Spark config name error on the Launching Applications Using Docker Containers 
> page
> --
>
> Key: HADOOP-16550
> URL: https://issues.apache.org/jira/browse/HADOOP-16550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.8.2, 2.8.3, 3.0.0, 3.1.0, 2.9.1, 3.0.1, 2.8.4, 
> 3.0.2, 3.1.1, 2.9.2, 3.0.3, 2.8.5, 3.1.2
>Reporter: Attila Zsolt Piros
>Priority: Major
>
> On the "Launching Applications Using Docker Containers" page at the "Example: 
> Spark" section the Spark config for configuring the environment variables for 
> the application master the config prefix are wrong:
> - 
> spark.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
> - park.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_TYPE  
> The correct ones:
> - spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
> - spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE
> See https://spark.apache.org/docs/2.4.0/running-on-yarn.html:
> {quote}
> spark.yarn.appMasterEnv.[EnvironmentVariableName]
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16550) Spark config name error on the Launching Applications Using Docker Containers page

2019-09-05 Thread Attila Zsolt Piros (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Zsolt Piros updated HADOOP-16550:

Summary: Spark config name error on the Launching Applications Using Docker 
Containers page  (was: Wrong Spark config name on the "Launching Applications 
Using Docker Containers" page)

> Spark config name error on the Launching Applications Using Docker Containers 
> page
> --
>
> Key: HADOOP-16550
> URL: https://issues.apache.org/jira/browse/HADOOP-16550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.8.2, 2.8.3, 3.0.0, 3.1.0, 2.9.1, 3.0.1, 2.8.4, 
> 3.0.2, 3.1.1, 2.9.2, 3.0.3, 2.8.5, 3.1.2
>Reporter: Attila Zsolt Piros
>Priority: Major
>
> On the "Launching Applications Using Docker Containers" page at the "Example: 
> Spark" section the Spark config for configuring the environment variables for 
> the application master the config prefix are wrong:
> - 
> spark.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
> - park.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_TYPE  
> The correct ones:
> - spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
> - spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE
> See https://spark.apache.org/docs/2.4.0/running-on-yarn.html:
> {quote}
> spark.yarn.appMasterEnv.[EnvironmentVariableName]
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16550) Wrong Spark config name on the "Launching Applications Using Docker Containers" page

2019-09-05 Thread Attila Zsolt Piros (Jira)
Attila Zsolt Piros created HADOOP-16550:
---

 Summary: Wrong Spark config name on the "Launching Applications 
Using Docker Containers" page
 Key: HADOOP-16550
 URL: https://issues.apache.org/jira/browse/HADOOP-16550
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.1.2, 2.8.5, 3.0.3, 2.9.2, 3.1.1, 3.0.2, 2.8.4, 3.0.1, 
2.9.1, 3.1.0, 3.0.0, 2.8.3, 2.8.2, 2.9.0
Reporter: Attila Zsolt Piros


On the "Launching Applications Using Docker Containers" page at the "Example: 
Spark" section the Spark config for configuring the environment variables for 
the application master the config prefix are wrong:
- 
spark.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
- park.yarn.{color:#DE350B}*A*{color}ppMasterEnv.YARN_CONTAINER_RUNTIME_TYPE  

The correct ones:
- spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE
- spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE

See https://spark.apache.org/docs/2.4.0/running-on-yarn.html:

{quote}
spark.yarn.appMasterEnv.[EnvironmentVariableName]
{quote}
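
For example, with the corrected prefixes the same settings can be applied 
programmatically (a sketch; the image name is just a placeholder):

{code:java}
import org.apache.spark.SparkConf;

SparkConf conf = new SparkConf()
    .set("spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE", "docker")
    .set("spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE",
        "local/spark:latest"); // placeholder image name
{code}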




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] echohlne opened a new pull request #1405: In yarn2 UI, if the Spark task we submitted is still running, the finishedTime returned by the server is 0, and the time shown on the UI2 pa

2019-09-05 Thread GitBox
echohlne opened a new pull request #1405: In yarn2 UI, if the Spark task we 
submitted is still running, the finishedTime returned by the server is 0, and 
the time shown on the UI2 page is always '1970/01/01 08:00'
URL: https://github.com/apache/hadoop/pull/1405
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #1401: HDDS-1561: Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove

2019-09-05 Thread GitBox
lokeshj1703 commented on a change in pull request #1401: HDDS-1561: Mark OPEN 
containers as QUASI_CLOSED as part of Ratis groupRemove
URL: https://github.com/apache/hadoop/pull/1401#discussion_r321185090
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -800,6 +805,24 @@ public void notifyLogFailed(Throwable t, LogEntryProto 
failedEntry) {
 return future;
   }
 
+  @Override
+  public void notifyGroupRemove() {
+ratisServer.notifyGroupRemove(gid);
+// Make best effort to quasi-close all the containers on group removal.
+// Containers already in terminal state like CLOSED or UNHEALTHY will not
+// be affected.
+for (Long cid : createContainerSet) {
+  try {
+containerController.markContainerForClose(cid);
+  } catch (IOException e) {
+  }
+  try {
+containerController.quasiCloseContainer(cid);
+  } catch (IOException e) {
+  }
+}
+  }
+
 
 Review comment:
   Addressed in the 2nd commit.
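
   For reference, a hedged sketch of the same best-effort loop with the 
swallowed exceptions at least logged (the logger name is an assumption; the 
two try blocks stay separate so quasi-close still runs if marking for close 
fails):

```java
for (Long cid : createContainerSet) {
  try {
    containerController.markContainerForClose(cid);
  } catch (IOException e) {
    // Best effort: log and continue so the remaining containers still close.
    LOG.debug("markContainerForClose failed for container {}", cid, e);
  }
  try {
    containerController.quasiCloseContainer(cid);
  } catch (IOException e) {
    LOG.debug("quasiCloseContainer failed for container {}", cid, e);
  }
}
```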


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #1401: HDDS-1561: Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove

2019-09-05 Thread GitBox
lokeshj1703 commented on a change in pull request #1401: HDDS-1561: Mark OPEN 
containers as QUASI_CLOSED as part of Ratis groupRemove
URL: https://github.com/apache/hadoop/pull/1401#discussion_r321185076
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
 ##
 @@ -111,10 +111,13 @@ public void handle(SCMCommand command, OzoneContainer 
ozoneContainer,
 return;
   }
   // If we reach here, there is no active pipeline for this container.
-  if (!closeCommand.getForce()) {
-// QUASI_CLOSE the container.
-controller.quasiCloseContainer(containerId);
-  } else {
+  if (container.getContainerState() == ContainerProtos.ContainerDataProto
+  .State.OPEN || container.getContainerState() ==
+  ContainerProtos.ContainerDataProto.State.CLOSING) {
+// Container should not exist in OPEN or CLOSING state without a
+// pipeline.
+controller.markContainerUnhealthy(containerId);
+  } else if (closeCommand.getForce()) {
 // SCM told us to force close the container.
 controller.closeContainer(containerId);
   }
 
 Review comment:
   Addressed in the 2nd commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lokeshj1703 commented on issue #1401: HDDS-1561: Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove

2019-09-05 Thread GitBox
lokeshj1703 commented on issue #1401: HDDS-1561: Mark OPEN containers as 
QUASI_CLOSED as part of Ratis groupRemove
URL: https://github.com/apache/hadoop/pull/1401#issuecomment-528301702
 
 
   @nandakumar131 Thanks for reviewing the PR! The 2nd commit addresses the 
checkstyle issues and review comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur opened a new pull request #1404: HDFS-13660 Copy file till the source file length during distcp

2019-09-05 Thread GitBox
mukund-thakur opened a new pull request #1404: HDFS-13660 Copy file till the 
source file length during distcp
URL: https://github.com/apache/hadoop/pull/1404
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] echohlne closed pull request #1383: In YARN ui2 applications tab, If finishTime is 0, 'NA' should be displayed at this time, indicating that the task is not finished yet

2019-09-05 Thread GitBox
echohlne closed pull request #1383: In YARN ui2 applications tab, If finishTime 
is 0, 'NA' should be displayed at this time, indicating that the task is not 
finished yet
URL: https://github.com/apache/hadoop/pull/1383
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-05 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923247#comment-16923247
 ] 

Akira Ajisaka commented on HADOOP-16549:


LGTM, +1 pending Jenkins.

> Remove Unsupported SSL/TLS Versions from Docs/Properties
> 
>
> Key: HADOOP-16549
> URL: https://issues.apache.org/jira/browse/HADOOP-16549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HADOOP-16549.001.patch
>
>
> We should remove the following unsupported versions from docs and 
> core-default.xml appropriately.
> TLS v1.0
> TLS v1.1
> SSL v3
> SSLv2Hello
> ref: 
> https://www.eclipse.org/jetty/documentation/9.3.27.v20190418/configuring-ssl.html
> https://github.com/eclipse/jetty.project/issues/866
> [~aajisaka], I happened to find you left TLSv1.1 in 
> https://issues.apache.org/jira/browse/HADOOP-16000. Should we still keep it?
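
For reference, a minimal sketch of pinning the enabled protocols from client 
code, assuming hadoop.ssl.enabled.protocols is the core-default.xml key in 
question:

{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// With the unsupported versions removed, only TLSv1.2 remains enabled.
conf.set("hadoop.ssl.enabled.protocols", "TLSv1.2");
{code}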



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-05 Thread GitBox
hadoop-yetus commented on issue #1364: HDDS-1843. Undetectable corruption after 
restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#issuecomment-528287642
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 71 | Maven dependency ordering for branch |
   | +1 | mvninstall | 720 | trunk passed |
   | +1 | compile | 422 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1086 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 208 | trunk passed |
   | 0 | spotbugs | 436 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 702 | trunk passed |
   | -0 | patch | 478 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 540 | the patch passed |
   | +1 | compile | 372 | the patch passed |
   | +1 | cc | 372 | the patch passed |
   | +1 | javac | 372 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   | +1 | findbugs | 654 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 263 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2130 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 8527 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1364 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 71935b58ecf5 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f347c34 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/6/testReport/ |
   | Max. process+thread count | 4726 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-05 Thread Daisuke Kobayashi (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923211#comment-16923211
 ] 

Daisuke Kobayashi edited comment on HADOOP-16549 at 9/5/19 8:53 AM:


Thank you! Uploaded a quick patch.


was (Author: daisuke.kobayashi):
Uploaded a quick patch.

> Remove Unsupported SSL/TLS Versions from Docs/Properties
> 
>
> Key: HADOOP-16549
> URL: https://issues.apache.org/jira/browse/HADOOP-16549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HADOOP-16549.001.patch
>
>
> We should remove the following unsupported versions from docs and 
> core-default.xml appropriately.
> TLS v1.0
> TLS v1.1
> SSL v3
> SSLv2Hello
> ref: 
> https://www.eclipse.org/jetty/documentation/9.3.27.v20190418/configuring-ssl.html
> https://github.com/eclipse/jetty.project/issues/866
> [~aajisaka], I happened to find you left TLSv1.1 in 
> https://issues.apache.org/jira/browse/HADOOP-16000. Should we still keep it?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-05 Thread Daisuke Kobayashi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daisuke Kobayashi updated HADOOP-16549:
---
Attachment: HADOOP-16549.001.patch
Status: Patch Available  (was: Open)

Uploaded a quick patch.

> Remove Unsupported SSL/TLS Versions from Docs/Properties
> 
>
> Key: HADOOP-16549
> URL: https://issues.apache.org/jira/browse/HADOOP-16549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HADOOP-16549.001.patch
>
>
> We should remove the following unsupported versions from docs and 
> core-default.xml appropriately.
> TLS v1.0
> TLS v1.1
> SSL v3
> SSLv2Hello
> ref: 
> https://www.eclipse.org/jetty/documentation/9.3.27.v20190418/configuring-ssl.html
> https://github.com/eclipse/jetty.project/issues/866
> [~aajisaka], I happened to find you left TLSv1.1 in 
> https://issues.apache.org/jira/browse/HADOOP-16000. Should we still keep it?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


