[jira] [Updated] (HADOOP-15914) hadoop jar command has no help argument

2019-06-17 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15914:
-------------------------------------
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk, branch-3.2, and branch-3.1.

Thanks [~adam.antal] for the patch and [~templedf] for the review.

> hadoop jar command has no help argument
> ---------------------------------------
>
> Key: HADOOP-15914
> URL: https://issues.apache.org/jira/browse/HADOOP-15914
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-15914.000.patch
>
>
> The {{hadoop jar --help}} and {{hadoop jar help}} commands show output like this:
> {noformat}
> WARNING: Use "yarn jar" to launch YARN applications.
> JAR does not exist or is not a normal file: /root/--help
> {noformat}
> Only when called with no arguments ({{hadoop jar}}) do we get the usage text, but 
> even in that case the output is:
> {noformat}
> WARNING: Use "yarn jar" to launch YARN applications.
> RunJar jarFile [mainClass] args...
> {noformat}
> Here {{RunJar}} is wrapped by the {{hadoop}} script, so it should not be displayed.
> {{hadoop --help}} displays the following:
> {noformat}
> jar  run a jar file. NOTE: please use "yarn jar" to launch YARN 
> applications, not this command.
> {noformat}
> which is fine, but {{CommandsManual.md}} gives a bit more information about 
> the usage of this command:
> {noformat}
> Usage: hadoop jar <jar> [mainClass] args...
> {noformat}
> My suggestion is to add a {{--help}} option to the {{hadoop jar}} command 
> that would display this message.
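
For illustration, the requested behavior might look like this (a minimal sketch; the output text is hypothetical, assuming the option simply prints the usage line documented in {{CommandsManual.md}}):

{noformat}
$ hadoop jar --help
Usage: hadoop jar <jar> [mainClass] args...
{noformat}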






[jira] [Updated] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-06-17 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15958:
-------------------------------------
Priority: Blocker  (was: Critical)

> Revisiting LICENSE and NOTICE files
> -----------------------------------
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-004.patch, HADOOP-15958-wip.001.patch
>
>
> Originally reported by [~jmclean]:
> * The NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain, which should be 
> mentioned in LICENSE only.
> * It's better to have separate LICENSE and NOTICE files for the source and 
> binary releases.
> http://www.apache.org/dev/licensing-howto.html






[jira] [Commented] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-06-17 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16866259#comment-16866259
 ] 

Wei-Chiu Chuang commented on HADOOP-15958:
------------------------------------------

Thanks for doing the work. I'm raising this to a blocker for Hadoop 3.3.0 so that 
this work can't be ignored.

> Revisiting LICENSE and NOTICE files
> -----------------------------------
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-004.patch, HADOOP-15958-wip.001.patch
>
>
> Originally reported by [~jmclean]:
> * The NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain, which should be 
> mentioned in LICENSE only.
> * It's better to have separate LICENSE and NOTICE files for the source and 
> binary releases.
> http://www.apache.org/dev/licensing-howto.html






[jira] [Commented] (HADOOP-9157) Better option for curl in hadoop-auth-examples

2019-06-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16866237#comment-16866237
 ] 

Hudson commented on HADOOP-9157:
--------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16769 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16769/])
HADOOP-9157. Better option for curl in hadoop-auth-examples. Contributed by 
Andras Bokor. (weichiu: rev f1c239c6a4c26e9057373b9b9400e54083290f65)
* (edit) hadoop-common-project/hadoop-auth/src/site/markdown/Examples.md


> Better option for curl in hadoop-auth-examples
> ----------------------------------------------
>
> Key: HADOOP-9157
> URL: https://issues.apache.org/jira/browse/HADOOP-9157
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
> Environment: Ubuntu 12.04
>Reporter: Jingguo Yao
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-9157.01.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> In http://hadoop.apache.org/docs/current/hadoop-auth/Examples.html, there is 
> "curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt 
> http://localhost:8080/hadoop-auth-examples/kerberos/who". A better way is to 
> use "-u :" instead of "-u foo": with "-u :", curl will not prompt for a 
> password.
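
Concretely, the corrected example command would be:

{noformat}
curl --negotiate -u : -b ~/cookiejar.txt -c ~/cookiejar.txt http://localhost:8080/hadoop-auth-examples/kerberos/who
{noformat}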






[jira] [Updated] (HADOOP-9157) Better option for curl in hadoop-auth-examples

2019-06-17 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-9157:
------------------------------------
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

+1. Thanks [~boky01] and [~yaojingguo]. Pushed to trunk, branch-3.2, and branch-3.1.

> Better option for curl in hadoop-auth-examples
> ----------------------------------------------
>
> Key: HADOOP-9157
> URL: https://issues.apache.org/jira/browse/HADOOP-9157
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
> Environment: Ubuntu 12.04
>Reporter: Jingguo Yao
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-9157.01.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> In http://hadoop.apache.org/docs/current/hadoop-auth/Examples.html, there is 
> "curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt 
> http://localhost:8080/hadoop-auth-examples/kerberos/who". A better way is to 
> use "-u :" instead of "-u foo": with "-u :", curl will not prompt for a 
> password.






[GitHub] [hadoop] hadoop-yetus commented on issue #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #983: HADOOP-16379: S3AInputStream#unbuffer 
should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#issuecomment-502942062
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 86 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1155 | trunk passed |
   | +1 | compile | 33 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 40 | trunk passed |
   | +1 | shadedclient | 777 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 65 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 62 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 31 | the patch passed |
   | +1 | javac | 31 | the patch passed |
   | -0 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 19 unchanged - 0 fixed = 20 total (was 19) |
   | +1 | mvnsite | 36 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 819 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | the patch passed |
   | +1 | findbugs | 67 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 282 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3616 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/983 |
   | JIRA Issue | HADOOP-16379 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f1310bc0ac26 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 62ad988 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/1/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16379) S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16866215#comment-16866215
 ] 

Hadoop QA commented on HADOOP-16379:
------------------------------------

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
5s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
42s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/983 |
| JIRA Issue | HADOOP-16379 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux f1310bc0ac26 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 62ad988 |
| Default Java | 1.8.0_212 |
| checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/1/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 5500) |

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294601234
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
 ##
 @@ -70,13 +70,23 @@ Integer getCountForForContainerKeyPrefix(
   throws IOException;
 
   /**
-   * Get a Map of containerID, containerMetadata of all Containers.
+   * Get a Map of containerID, containerMetadata of all the Containers.
*
* @return Map of containerID -> containerMetadata.
* @throws IOException
*/
   Map<Long, ContainerMetadata> getContainers() throws IOException;
 
 Review comment:
   We can remove this method in your follow-up JIRA that handles the start 
parameter.
   





[GitHub] [hadoop] bharatviswa504 commented on issue #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on issue #954: HDDS-1670. Add limit support to 
/api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#issuecomment-502935978
 
 
   +1, pending CI.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294600693
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -181,6 +182,12 @@ public Integer getCountForForContainerKeyPrefix(
   Long containerID = keyValue.getKey().getContainerId();
   Integer numberOfKeys = keyValue.getValue();
 
+  // break the loop if limit has been reached
+  // and one more new entity needs to be added to the containers map
+  if (containers.size() == limit && !containers.containsKey(containerID)) {
 
 Review comment:
   Got it.





[GitHub] [hadoop] sahilTakiar opened a new pull request #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-17 Thread GitBox
sahilTakiar opened a new pull request #983: HADOOP-16379: 
S3AInputStream#unbuffer should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983
 
 
   [HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into 
fs-wide stats](https://issues.apache.org/jira/browse/HADOOP-16379)
   * Adds a new method to `InputStreamStatistics` called `merge`, which allows 
callers to periodically merge the stream's stats into the fs-wide stats (a rough 
sketch of the pattern follows below)
   * Added new unit tests to validate that calling `unbuffer` merges the stream 
stats into the fs-wide stats
   
   Testing:
   * Ran S3A tests against US East (N. Virginia)
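
As a rough, self-contained sketch of that merge pattern (the class and field names below are hypothetical placeholders, not the actual S3A types or the PR's code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical filesystem-wide aggregate, standing in for the real
// S3A instrumentation.
class FsWideStats {
  final AtomicLong totalBytesRead = new AtomicLong();
}

// Hypothetical per-stream statistics holder.
class StreamStats {
  final AtomicLong bytesRead = new AtomicLong();

  // Merge this stream's counters into the fs-wide aggregate and reset
  // them, so a later merge (e.g. on another unbuffer) won't double-count.
  void merge(FsWideStats fs) {
    fs.totalBytesRead.addAndGet(bytesRead.getAndSet(0));
  }
}
```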





[jira] [Commented] (HADOOP-14807) should prevent the possibility of NPE about ReconfigurableBase.java

2019-06-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16866177#comment-16866177
 ] 

Hudson commented on HADOOP-14807:
---------------------------------

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16765 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16765/])
HADOOP-14807. should prevent the possibility of NPE about ReconfigurableBase.java. 
(weichiu: rev 10311c30b02d984a11f2cedfd06eb2a766ad1576)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java


> should prevent the possibility of  NPE about ReconfigurableBase.java
> --------------------------------------------------------------------
>
> Key: HADOOP-14807
> URL: https://issues.apache.org/jira/browse/HADOOP-14807
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-14807.001.patch
>
>
> 1. NameNode.java may throw a ReconfigurationException whose getCause() is null.
> {code:title=NameNode.java|borderStyle=solid}  
>   protected String reconfigurePropertyImpl(String property, String newVal)
>   throws ReconfigurationException {
> final DatanodeManager datanodeManager = namesystem.getBlockManager()
> .getDatanodeManager();
> if (property.equals(DFS_HEARTBEAT_INTERVAL_KEY)) {
>   return reconfHeartbeatInterval(datanodeManager, property, newVal);
> } else if (property.equals(DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY)) {
>   return reconfHeartbeatRecheckInterval(datanodeManager, property, 
> newVal);
> } else if (property.equals(FS_PROTECTED_DIRECTORIES)) {
>   return reconfProtectedDirectories(newVal);
> } else if (property.equals(HADOOP_CALLER_CONTEXT_ENABLED_KEY)) {
>   return reconfCallerContextEnabled(newVal);
> } else if (property.equals(ipcClientRPCBackoffEnable)) {
>   return reconfigureIPCBackoffEnabled(newVal);
> } 
>//===
>//here may throw a ReconfigurationException which getCause() is null
>//===
>else {
>   throw new ReconfigurationException(property, newVal, getConf().get(
>   property));
> }
>   }
> {code}
> 2. ReconfigurationThread.java will call 
> ReconfigurationException.getCause().getMessage(), which will cause an NPE.
> {code:title=ReconfigurationThread.java|borderStyle=solid}  
> private static class ReconfigurationThread extends Thread {
> private ReconfigurableBase parent;
> ReconfigurationThread(ReconfigurableBase base) {
>   this.parent = base;
> }
> // See {@link ReconfigurationServlet#applyChanges}
> public void run() {
>   LOG.info("Starting reconfiguration task.");
>   final Configuration oldConf = parent.getConf();
>   final Configuration newConf = parent.getNewConf();
>   final Collection<PropertyChange> changes =
>   parent.getChangedProperties(newConf, oldConf);
>   Map<PropertyChange, Optional<String>> results = Maps.newHashMap();
>   ConfigRedactor oldRedactor = new ConfigRedactor(oldConf);
>   ConfigRedactor newRedactor = new ConfigRedactor(newConf);
>   for (PropertyChange change : changes) {
> String errorMessage = null;
> String oldValRedacted = oldRedactor.redact(change.prop, 
> change.oldVal);
> String newValRedacted = newRedactor.redact(change.prop, 
> change.newVal);
> if (!parent.isPropertyReconfigurable(change.prop)) {
>   LOG.info(String.format(
>   "Property %s is not configurable: old value: %s, new value: %s",
>   change.prop,
>   oldValRedacted,
>   newValRedacted));
>   continue;
> }
> LOG.info("Change property: " + change.prop + " from \""
> + ((change.oldVal == null) ? "" : oldValRedacted)
> + "\" to \""
> + ((change.newVal == null) ? "" : newValRedacted)
> + "\".");
> try {
>   String effectiveValue =
>   parent.reconfigurePropertyImpl(change.prop, change.newVal);
>   if (change.newVal != null) {
> oldConf.set(change.prop, effectiveValue);
>   } else {
> oldConf.unset(change.prop);
>   }
> } catch (ReconfigurationException e) {
>   //===
>   // an NPE may occur here, because e.getCause() may be null.
>   //===
>   errorMessage = e.getCause().getMessage();
> }
> results.put(change, Optional.fromNullable(errorMessage));
>   }
>   synchronized (parent.reconfigLock) {
> parent.endTime = Time.now();
> parent.status = 

[jira] [Updated] (HADOOP-14807) should prevent the possibility of NPE about ReconfigurableBase.java

2019-06-17 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14807:
-------------------------------------
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~xiaodong.hu] pushed to trunk.

> should prevent the possibility of  NPE about ReconfigurableBase.java
> --------------------------------------------------------------------
>
> Key: HADOOP-14807
> URL: https://issues.apache.org/jira/browse/HADOOP-14807
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-14807.001.patch
>
>
> 1. NameNode.java may throw a ReconfigurationException whose getCause() is null.
> {code:title=NameNode.java|borderStyle=solid}  
>   protected String reconfigurePropertyImpl(String property, String newVal)
>   throws ReconfigurationException {
> final DatanodeManager datanodeManager = namesystem.getBlockManager()
> .getDatanodeManager();
> if (property.equals(DFS_HEARTBEAT_INTERVAL_KEY)) {
>   return reconfHeartbeatInterval(datanodeManager, property, newVal);
> } else if (property.equals(DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY)) {
>   return reconfHeartbeatRecheckInterval(datanodeManager, property, 
> newVal);
> } else if (property.equals(FS_PROTECTED_DIRECTORIES)) {
>   return reconfProtectedDirectories(newVal);
> } else if (property.equals(HADOOP_CALLER_CONTEXT_ENABLED_KEY)) {
>   return reconfCallerContextEnabled(newVal);
> } else if (property.equals(ipcClientRPCBackoffEnable)) {
>   return reconfigureIPCBackoffEnabled(newVal);
> } 
>//===
>//here may throw a ReconfigurationException which getCause() is null
>//===
>else {
>   throw new ReconfigurationException(property, newVal, getConf().get(
>   property));
> }
>   }
> {code}
> 2. ReconfigurationThread.java will call 
> ReconfigurationException.getCause().getMessage(), which will cause an NPE.
> {code:title=ReconfigurationThread.java|borderStyle=solid}  
> private static class ReconfigurationThread extends Thread {
> private ReconfigurableBase parent;
> ReconfigurationThread(ReconfigurableBase base) {
>   this.parent = base;
> }
> // See {@link ReconfigurationServlet#applyChanges}
> public void run() {
>   LOG.info("Starting reconfiguration task.");
>   final Configuration oldConf = parent.getConf();
>   final Configuration newConf = parent.getNewConf();
>   final Collection<PropertyChange> changes =
>   parent.getChangedProperties(newConf, oldConf);
>   Map<PropertyChange, Optional<String>> results = Maps.newHashMap();
>   ConfigRedactor oldRedactor = new ConfigRedactor(oldConf);
>   ConfigRedactor newRedactor = new ConfigRedactor(newConf);
>   for (PropertyChange change : changes) {
> String errorMessage = null;
> String oldValRedacted = oldRedactor.redact(change.prop, 
> change.oldVal);
> String newValRedacted = newRedactor.redact(change.prop, 
> change.newVal);
> if (!parent.isPropertyReconfigurable(change.prop)) {
>   LOG.info(String.format(
>   "Property %s is not configurable: old value: %s, new value: %s",
>   change.prop,
>   oldValRedacted,
>   newValRedacted));
>   continue;
> }
> LOG.info("Change property: " + change.prop + " from \""
> + ((change.oldVal == null) ? "" : oldValRedacted)
> + "\" to \""
> + ((change.newVal == null) ? "" : newValRedacted)
> + "\".");
> try {
>   String effectiveValue =
>   parent.reconfigurePropertyImpl(change.prop, change.newVal);
>   if (change.newVal != null) {
> oldConf.set(change.prop, effectiveValue);
>   } else {
> oldConf.unset(change.prop);
>   }
> } catch (ReconfigurationException e) {
>   //===
>   // an NPE may occur here, because e.getCause() may be null.
>   //===
>   errorMessage = e.getCause().getMessage();
> }
> results.put(change, Optional.fromNullable(errorMessage));
>   }
>   synchronized (parent.reconfigLock) {
> parent.endTime = Time.now();
> parent.status = Collections.unmodifiableMap(results);
> parent.reconfigThread = null;
>   }
> }
>   }
> {code}
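
The attached HADOOP-14807.001.patch is not reproduced in this thread; as a minimal null-safe sketch of the kind of fix the report calls for (an assumption, not necessarily the committed change):

{code}
import org.apache.hadoop.conf.ReconfigurationException;

final class ReconfigErrors {
  // Hedged sketch: extract an error message without assuming the
  // exception carries a cause.
  static String safeMessage(ReconfigurationException e) {
    Throwable cause = e.getCause();
    // reconfigurePropertyImpl throws without a cause for unknown
    // properties, so fall back to the exception's own message.
    return (cause != null) ? cause.getMessage() : e.getMessage();
  }
}
{code}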




[jira] [Commented] (HADOOP-14807) should prevent the possibility of NPE about ReconfigurableBase.java

2019-06-17 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16866174#comment-16866174
 ] 

Wei-Chiu Chuang commented on HADOOP-14807:
------------------------------------------

+1

> should prevent the possibility of  NPE about ReconfigurableBase.java
> --------------------------------------------------------------------
>
> Key: HADOOP-14807
> URL: https://issues.apache.org/jira/browse/HADOOP-14807
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Minor
> Attachments: HADOOP-14807.001.patch
>
>
> 1. NameNode.java may throw a ReconfigurationException whose getCause() is null.
> {code:title=NameNode.java|borderStyle=solid}  
>   protected String reconfigurePropertyImpl(String property, String newVal)
>   throws ReconfigurationException {
> final DatanodeManager datanodeManager = namesystem.getBlockManager()
> .getDatanodeManager();
> if (property.equals(DFS_HEARTBEAT_INTERVAL_KEY)) {
>   return reconfHeartbeatInterval(datanodeManager, property, newVal);
> } else if (property.equals(DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY)) {
>   return reconfHeartbeatRecheckInterval(datanodeManager, property, 
> newVal);
> } else if (property.equals(FS_PROTECTED_DIRECTORIES)) {
>   return reconfProtectedDirectories(newVal);
> } else if (property.equals(HADOOP_CALLER_CONTEXT_ENABLED_KEY)) {
>   return reconfCallerContextEnabled(newVal);
> } else if (property.equals(ipcClientRPCBackoffEnable)) {
>   return reconfigureIPCBackoffEnabled(newVal);
> } 
>//===
>//here may throw a ReconfigurationException which getCause() is null
>//===
>else {
>   throw new ReconfigurationException(property, newVal, getConf().get(
>   property));
> }
>   }
> {code}
> 2. ReconfigurationThread.java will call 
> ReconfigurationException.getCause().getMessage(), which will cause an NPE.
> {code:title=ReconfigurationThread.java|borderStyle=solid}  
> private static class ReconfigurationThread extends Thread {
> private ReconfigurableBase parent;
> ReconfigurationThread(ReconfigurableBase base) {
>   this.parent = base;
> }
> // See {@link ReconfigurationServlet#applyChanges}
> public void run() {
>   LOG.info("Starting reconfiguration task.");
>   final Configuration oldConf = parent.getConf();
>   final Configuration newConf = parent.getNewConf();
>   final Collection<PropertyChange> changes =
>   parent.getChangedProperties(newConf, oldConf);
>   Map<PropertyChange, Optional<String>> results = Maps.newHashMap();
>   ConfigRedactor oldRedactor = new ConfigRedactor(oldConf);
>   ConfigRedactor newRedactor = new ConfigRedactor(newConf);
>   for (PropertyChange change : changes) {
> String errorMessage = null;
> String oldValRedacted = oldRedactor.redact(change.prop, 
> change.oldVal);
> String newValRedacted = newRedactor.redact(change.prop, 
> change.newVal);
> if (!parent.isPropertyReconfigurable(change.prop)) {
>   LOG.info(String.format(
>   "Property %s is not configurable: old value: %s, new value: %s",
>   change.prop,
>   oldValRedacted,
>   newValRedacted));
>   continue;
> }
> LOG.info("Change property: " + change.prop + " from \""
> + ((change.oldVal == null) ? "" : oldValRedacted)
> + "\" to \""
> + ((change.newVal == null) ? "" : newValRedacted)
> + "\".");
> try {
>   String effectiveValue =
>   parent.reconfigurePropertyImpl(change.prop, change.newVal);
>   if (change.newVal != null) {
> oldConf.set(change.prop, effectiveValue);
>   } else {
> oldConf.unset(change.prop);
>   }
> } catch (ReconfigurationException e) {
>   //===
>   // an NPE may occur here, because e.getCause() may be null.
>   //===
>   errorMessage = e.getCause().getMessage();
> }
> results.put(change, Optional.fromNullable(errorMessage));
>   }
>   synchronized (parent.reconfigLock) {
> parent.endTime = Time.now();
> parent.status = Collections.unmodifiableMap(results);
> parent.reconfigThread = null;
>   }
> }
>   }
> {code}






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294586933
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
 ##
 @@ -70,13 +70,23 @@ Integer getCountForForContainerKeyPrefix(
   throws IOException;
 
   /**
-   * Get a Map of containerID, containerMetadata of all Containers.
+   * Get a Map of containerID, containerMetadata of all the Containers.
*
* @return Map of containerID -> containerMetadata.
* @throws IOException
*/
   Map<Long, ContainerMetadata> getContainers() throws IOException;
 
 Review comment:
   Minor: I see only the new method being used; the old method is not used 
from the API. I have assumed it will be called from a service.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294586933
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
 ##
 @@ -70,13 +70,23 @@ Integer getCountForForContainerKeyPrefix(
   throws IOException;
 
   /**
-   * Get a Map of containerID, containerMetadata of all Containers.
+   * Get a Map of containerID, containerMetadata of all the Containers.
*
* @return Map of containerID -> containerMetadata.
* @throws IOException
*/
   Map<Long, ContainerMetadata> getContainers() throws IOException;
 
 Review comment:
   Minor: I see only the new method being used; the old method is not used 
from the API.





[GitHub] [hadoop] hadoop-yetus commented on issue #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #954: HDDS-1670. Add limit support to 
/api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#issuecomment-502911684
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 106 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 591 | trunk passed |
   | +1 | compile | 293 | trunk passed |
   | +1 | checkstyle | 84 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 925 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 197 | trunk passed |
   | 0 | spotbugs | 352 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 556 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 485 | the patch passed |
   | +1 | compile | 307 | the patch passed |
   | +1 | javac | 307 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 692 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 562 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 215 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2013 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 7506 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-954/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/954 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d49263eaa075 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6822193 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-954/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-954/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-954/2/testReport/ |
   | Max. process+thread count | 4372 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-954/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-15711) Move branch-2 precommit/nightly test builds to java 8

2019-06-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15711:
-----------------------------------
Fix Version/s: 2.9.3
   2.8.6

Cherry-picked this to branch-2.9 and branch-2.8.

> Move branch-2 precommit/nightly test builds to java 8
> -----------------------------------------------------
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Critical
> Fix For: 2.10.0, 2.8.6, 2.9.3
>
> Attachments: HADOOP-15711-branch-2.002.patch, 
> HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveals some errors such as:
> {noformat}
> [ERROR] testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine (see the commands sketched below). But the error 
> suggests that there's an issue in the tests themselves. I'm not sure whether 
> the error seen locally is the same reason the Jenkins builds are failing; I 
> wasn't able to confirm, given the Jenkins builds' lack of output.
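
For reference, raising the per-user process limit for a local test run is typically done along these lines (the value is illustrative and environment-specific):

{noformat}
$ ulimit -u          # show the current max-user-processes limit
$ ulimit -u 8192     # raise it for the current shell, then re-run the tests
{noformat}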






[GitHub] [hadoop] asagjj commented on issue #974: HDFS-9913. DistCp doesn't use Trash with -delete option

2019-06-17 Thread GitBox
asagjj commented on issue #974: HDFS-9913. DistCp doesn't use Trash with -delete 
option
URL: https://github.com/apache/hadoop/pull/974#issuecomment-502906672
 
 
   @steveloughran @jojochuang Could you please help review this commit?





[GitHub] [hadoop] hadoop-yetus commented on issue #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #871: HDDS-1579. Create OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#issuecomment-502890122
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 533 | trunk passed |
   | +1 | compile | 275 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 842 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 324 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 510 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 455 | the patch passed |
   | +1 | compile | 291 | the patch passed |
   | +1 | javac | 291 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 640 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 550 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 158 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1564 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6523 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.container.common.TestBlockDeletingService |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/871 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a07622020ada 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3d020e9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/5/testReport/ |
   | Max. process+thread count | 4046 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #871: HDDS-1579. Create OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#issuecomment-502887662
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 539 | trunk passed |
   | +1 | compile | 276 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 846 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   | 0 | spotbugs | 324 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 514 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 440 | the patch passed |
   | +1 | compile | 272 | the patch passed |
   | +1 | javac | 272 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 619 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 151 | the patch passed |
   | +1 | findbugs | 522 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 142 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1075 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 5967 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/871 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 392f2b4411cb 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3d020e9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/4/testReport/ |
   | Max. process+thread count | 5105 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #982: HDDS-1702. Optimize Ozone Recon build time

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #982: HDDS-1702. Optimize Ozone Recon build time
URL: https://github.com/apache/hadoop/pull/982#issuecomment-502879511
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 729 | trunk passed |
   | +1 | compile | 367 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 2092 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 212 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 573 | the patch passed |
   | +1 | compile | 343 | the patch passed |
   | +1 | javac | 343 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 794 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 186 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 240 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1852 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 6653 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-982/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/982 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 6cc48fa5e337 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3d020e9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-982/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-982/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-982/1/testReport/ |
   | Max. process+thread count | 3168 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-982/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294548907
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -646,6 +646,17 @@
   production environments.
     </description>
   </property>
+  <property>
 
 Review comment:
   
https://cwiki.apache.org/confluence/display/HADOOP/Java-based+configuration+API


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
vivekratnavel commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294545140
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -181,6 +182,12 @@ public Integer getCountForForContainerKeyPrefix(
   Long containerID = keyValue.getKey().getContainerId();
   Integer numberOfKeys = keyValue.getValue();
 
+  // break the loop if limit has been reached
+  // and one more new entity needs to be added to the containers map
+  if (containers.size() == limit && !containers.containsKey(containerID)) {
 
 Review comment:
   Without the second condition, the last container ID would end up with an 
incorrect number of keys in its containerMetadata. Even when the containers 
limit is reached, the next iteration could yield the same container ID with a 
different key prefix. To handle this situation, we should not break out of 
the iteration until the second condition is also met.
   
   I don't think we need to add any check for other negative values, since 
none of them can ever match the condition `containers.size() == limit`.
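   
   To make that concrete, here is a minimal, self-contained sketch of the 
guard with simplified, hypothetical types (the real code iterates RocksDB 
entries of (containerId, keyPrefix) -> keyCount; the names below are ours, 
not the Recon code):
   
       import java.util.LinkedHashMap;
       import java.util.List;
       import java.util.Map;
   
       static Map<Long, Integer> firstContainers(
           List<Map.Entry<Long, Integer>> containerKeyCounts, int limit) {
         Map<Long, Integer> containers = new LinkedHashMap<>();
         for (Map.Entry<Long, Integer> e : containerKeyCounts) {
           Long containerID = e.getKey();
           // Break only when the map is full AND this entry would introduce
           // a NEW container ID; the same ID can recur with another key
           // prefix, and skipping it would under-count that container's
           // keys. A negative limit (e.g. -1) never equals
           // containers.size(), so it effectively disables the cap.
           if (containers.size() == limit
               && !containers.containsKey(containerID)) {
             break;
           }
           containers.merge(containerID, e.getValue(), Integer::sum);
         }
         return containers;
       }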


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294543803
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneHAClusterImpl.java
 ##
 @@ -208,6 +209,15 @@ void initializeConfiguration() throws IOException {
 // Set metadata/DB dir base path
 String metaDirPath = path + "/" + nodeId;
 conf.set(OZONE_METADATA_DIRS, metaDirPath);
+
+// If walDir is set then, because in an OM HA setup all OMs run on
+// the same node, the wal directories would conflict with each other.
+// Append nodeId, as is done for metaDirPath above.
+String walDir = conf.get(OMConfigKeys.OZONE_OM_DB_WAL_DIR);
 
 Review comment:
   New Configs please.
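   
   For concreteness, a minimal sketch of the per-node suffixing the quoted 
comment describes, assuming the hunk continues by rewriting the config value 
the same way metaDirPath is built above (a sketch, not the patch itself):
   
       String walDir = conf.get(OMConfigKeys.OZONE_OM_DB_WAL_DIR);
       if (walDir != null) {
         // Give each OM in the mini HA cluster its own WAL directory so
         // that co-located OMs do not write to the same location.
         conf.set(OMConfigKeys.OZONE_OM_DB_WAL_DIR, walDir + "/" + nodeId);
       }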


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294543691
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestDBStoreBuilder.java
 ##
 @@ -71,12 +71,26 @@ public void builderWithOneParamV2() throws IOException {
 if(!newFolder.exists()) {
   Assert.assertTrue(newFolder.mkdirs());
 }
-thrown.expect(IOException.class);
 DBStoreBuilder.newBuilder(conf)
 .setPath(newFolder.toPath())
 .build();
   }
 
+  @Test
+  public void builderWithWalDirSet() throws IOException {
+Configuration conf = new Configuration();
+File newFolder = folder.newFolder();
+File walDir = folder.newFolder();
+if(!newFolder.exists()) {
+  Assert.assertTrue(newFolder.mkdirs());
+}
+DBStoreBuilder.newBuilder(conf)
+.setPath(newFolder.toPath())
 
 Review comment:
   This test illustrates my concern. We would be adding this feature for 
performance reasons, but we have no data to show or prove that this is the 
root cause of any perf issues. Neither can we test it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294543251
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
 ##
 @@ -220,4 +227,15 @@ private File getDBFile() throws IOException {
 return Paths.get(dbPath.toString(), dbname).toFile();
   }
 
+  private File getWALFile() throws IOException {
+if (walPath == null) {
+  LOG.error("Write-ahead-log path is " +
 
 Review comment:
   I would think that most people will never even add WAL as an option, so this 
should not be an error IMHO.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294543079
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##
 @@ -35,6 +35,7 @@
   // metadata dir but in future we may support multiple for redundancy or
   // performance.
   public static final String OZONE_SCM_DB_DIRS = "ozone.scm.db.dirs";
+  public static final String OZONE_SCM_DB_WAL_DIR = "ozone.scm.db.wal.dir";
 
 Review comment:
   Do we have data that indicates compaction is the root cause of the issues? 
On a normal SSD we have around 80K IOPS; are we saying that we need more 
bandwidth than that?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294543351
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -646,6 +646,17 @@
   production environments.
     </description>
   </property>
+  <property>
+    <name>ozone.om.db.wal.dir</name>
 
 Review comment:
   Use new style Config options please.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294540185
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##
 @@ -35,6 +35,7 @@
   // metadata dir but in future we may support multiple for redundancy or
   // performance.
   public static final String OZONE_SCM_DB_DIRS = "ozone.scm.db.dirs";
+  public static final String OZONE_SCM_DB_WAL_DIR = "ozone.scm.db.wal.dir";
 
 
 Review comment:
   Do we know that WAL performance is being compromised today? This adds a new 
config value, but are we sure about the benefit? In other words, are we sure 
that Ozone is being bottlenecked by RocksDB?
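   
   For context on what such a key would control: RocksDB can place its 
write-ahead log on a separate device via the options object. A minimal 
RocksJava sketch (the paths are hypothetical, not Ozone defaults):
   
       import org.rocksdb.Options;
       import org.rocksdb.RocksDB;
       import org.rocksdb.RocksDBException;
   
       public class WalDirExample {
         public static void main(String[] args) throws RocksDBException {
           RocksDB.loadLibrary();
           try (Options options = new Options()
                   .setCreateIfMissing(true)
                   // SST files stay under the DB path below; the WAL goes
                   // to a separate (ideally dedicated) device so sequential
                   // log writes do not compete with compaction I/O.
                   .setWalDir("/ssd1/om/wal");
                RocksDB db = RocksDB.open(options, "/data1/om/db")) {
             db.put("key".getBytes(), "value".getBytes());
           }
         }
       }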


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294540670
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -2327,6 +2349,17 @@
   production environments.
     </description>
   </property>
+  <property>
+    <name>ozone.recon.db.wal.dir</name>
 
 Review comment:
   same.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294540555
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -646,6 +646,17 @@
   production environments.
     </description>
   </property>
+  <property>
 
 Review comment:
   Can we please use the new config style?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294540636
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -699,6 +710,17 @@
   production environments.
     </description>
   </property>
+  <property>
+    <name>ozone.scm.db.wal.dir</name>
 
 Review comment:
   same comment as above.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
anuengineer commented on a change in pull request #981: HDDS-1696. RocksDB use 
separate Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#discussion_r294540419
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
 ##
 @@ -220,4 +227,15 @@ private File getDBFile() throws IOException {
 return Paths.get(dbPath.toString(), dbname).toFile();
   }
 
+  private File getWALFile() throws IOException {
+if (walPath == null) {
+  LOG.error("Write-ahead-log path is " +
 
 Review comment:
   OK, in most cases this would not happen; no one would map a WAL to a 
different disk. If it is mapped, then we can probably log that we detected 
the mapping. This is certainly not an error.
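   
   One hedged reading of that suggestion (the fallback behavior below is 
assumed for illustration, not taken from the patch):
   
       private File getWALFile() throws IOException {
         if (walPath == null) {
           // Mapping the WAL to a separate disk is the exception, not the
           // rule, so the unset case is normal and not worth an error log.
           return getDBFile();  // keep the WAL next to the DB files
         }
         // A separate WAL location was configured; record that we saw it.
         LOG.info("Detected separate write-ahead-log location {} for DB {}",
             walPath, dbname);
         return Paths.get(walPath.toString(), dbname).toFile();
       }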


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #951: HADOOP-15183. S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#issuecomment-502869852
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 34 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1163 | trunk passed |
   | +1 | compile | 1062 | trunk passed |
   | +1 | checkstyle | 151 | trunk passed |
   | +1 | mvnsite | 125 | trunk passed |
   | +1 | shadedclient | 1087 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 95 | trunk passed |
   | 0 | spotbugs | 137 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 291 | trunk passed |
   | -0 | patch | 199 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 89 | the patch passed |
   | +1 | compile | 1268 | the patch passed |
   | +1 | javac | 1268 | the patch passed |
   | -0 | checkstyle | 157 | root: The patch generated 40 new + 109 unchanged - 
2 fixed = 149 total (was 111) |
   | +1 | mvnsite | 129 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 783 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 33 | hadoop-tools_hadoop-aws generated 4 new + 1 unchanged 
- 0 fixed = 5 total (was 1) |
   | +1 | findbugs | 204 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 548 | hadoop-common in the patch passed. |
   | +1 | unit | 304 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 7848 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/951 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux a9f8d5df3c95 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3d020e9 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/8/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/8/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/8/testReport/ |
   | Max. process+thread count | 1715 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294536580
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -72,10 +74,11 @@
* @return {@link Response}
*/
   @GET
-  public Response getContainers() {
+  public Response getContainers(
+  @DefaultValue("-1") @QueryParam("limit") int limit) {
 Map<Long, ContainerMetadata> containersMap;
 try {
-  containersMap = containerDBServiceProvider.getContainers();
+  containersMap = containerDBServiceProvider.getContainers(limit);
 
 Review comment:
   Okay. Then I feel we can have two methods: getContainers() and 
getContainers(int limit).
   In this way, we don't need any special handling in the underlying 
implementation.
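   
   In other words, something like the following shape (ContainerMetadata as 
in this PR; the split itself is only a suggestion):
   
       // Two explicit entry points, so the implementation needs no
       // sentinel value or special-case handling for "no limit".
       Map<Long, ContainerMetadata> getContainers();           // all
       Map<Long, ContainerMetadata> getContainers(int limit);  // at most limit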


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #978: HDDS-1694. TestNodeReportHandler is failing with NPE

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #978: HDDS-1694. TestNodeReportHandler is 
failing with NPE
URL: https://github.com/apache/hadoop/pull/978#issuecomment-502865680
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 510 | trunk passed |
   | +1 | compile | 302 | trunk passed |
   | +1 | checkstyle | 90 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 886 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 182 | trunk passed |
   | 0 | spotbugs | 337 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 526 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 473 | the patch passed |
   | +1 | compile | 309 | the patch passed |
   | +1 | javac | 309 | the patch passed |
   | +1 | checkstyle | 97 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 675 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 547 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 147 | hadoop-hdds in the patch failed. |
   | -1 | unit |  | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6271 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-978/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/978 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 11005dea4bd5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3d020e9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-978/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-978/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-978/2/testReport/ |
   | Max. process+thread count | 4854 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-978/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-06-17 Thread GitBox
bharatviswa504 commented on issue #871: HDDS-1579. Create OMDoubleBuffer 
metrics.
URL: https://github.com/apache/hadoop/pull/871#issuecomment-502864298
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-06-17 Thread GitBox
bharatviswa504 commented on issue #871: HDDS-1579. Create OMDoubleBuffer 
metrics.
URL: https://github.com/apache/hadoop/pull/871#issuecomment-502864271
 
 
   Thank you, @hanishakoneru, for the review.
   I have addressed the review comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #871: HDDS-1579. Create 
OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#discussion_r294532321
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -149,6 +160,23 @@ private void cleanupCache(long lastRatisTransactionIndex) 
{
 omMetadataManager.getBucketTable().cleanupCache(lastRatisTransactionIndex);
   }
 
+  /**
+   * Set OzoneManagerDoubleBuffer metrics values.
+   * @param flushedTransactionsSize
+   */
+  private void setOzoneManagerDoubleBufferMetrics(
+  long flushedTransactionsSize) {
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #871: HDDS-1579. Create 
OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#discussion_r294531785
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/metrics/OzoneManagerDoubleBufferMetrics.java
 ##
 @@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis.metrics;
+
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * Class which maintains metrics related to OzoneManager DoubleBuffer.
+ */
+public class OzoneManagerDoubleBufferMetrics {
+
+  private static final String SOURCE_NAME =
+  OzoneManagerDoubleBufferMetrics.class.getSimpleName();
+
+  @Metric(about = "Total Number of flush iterations happened in " +
+  "OzoneManagerDoubleBuffer.")
+  private MutableCounterLong totalNumOfFlushIterations;
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #871: HDDS-1579. Create 
OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#discussion_r294531695
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/metrics/OzoneManagerDoubleBufferMetrics.java
 ##
 @@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis.metrics;
+
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * Class which maintains metrics related to OzoneManager DoubleBuffer.
+ */
+public class OzoneManagerDoubleBufferMetrics {
+
+  private static final String SOURCE_NAME =
+  OzoneManagerDoubleBufferMetrics.class.getSimpleName();
+
+  @Metric(about = "Total Number of flush iterations happened in " +
+  "OzoneManagerDoubleBuffer.")
+  private MutableCounterLong totalNumOfFlushIterations;
+
+  @Metric(about = "Total Number of flushed transactions happened in " +
+  "OzoneManagerDoubleBuffer.")
+  private MutableCounterLong totalNumOfFlushedTransactions;
+
+  @Metric(about = "Max Number of transactions flushed in a iteration in " +
+  "OzoneManagerDoubleBuffer. This will provide a value which is maximum " +
+  "number of transactions flushed in a single flush iteration till now.")
+  private MutableCounterLong maxNumberOfTransactionsFlushedInOneIteration;
+
+
+  public static OzoneManagerDoubleBufferMetrics create() {
+MetricsSystem ms = DefaultMetricsSystem.instance();
+return ms.register(SOURCE_NAME,
+"OzoneManager DoubleBuffer Metrics",
+new OzoneManagerDoubleBufferMetrics());
+  }
+
+  public void incTotalNumOfFlushIterations() {
+this.totalNumOfFlushIterations.incr();
+  }
+
+  public void setTotalSizeOfFlushedTransactions(
+  long flushedTransactions) {
 
 Review comment:
   Done.
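   
   For readers of the thread, a sketch of how a "max so far" value can be 
maintained on top of a monotonically increasing MutableCounterLong (the 
method name is hypothetical; the actual patch may differ):
   
       public void updateFlush(long numTransactionsFlushed) {
         totalNumOfFlushIterations.incr();
         totalNumOfFlushedTransactions.incr(numTransactionsFlushed);
         // MutableCounterLong only supports increments, so raise the
         // recorded maximum by the difference when a larger flush appears.
         long currentMax =
             maxNumberOfTransactionsFlushedInOneIteration.value();
         if (numTransactionsFlushed > currentMax) {
           maxNumberOfTransactionsFlushedInOneIteration.incr(
               numTransactionsFlushed - currentMax);
         }
       }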


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on issue #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled

2019-06-17 Thread GitBox
hanishakoneru commented on issue #965: HDDS-1684. OM should create Ratis 
related dirs only if ratis is enabled
URL: https://github.com/apache/hadoop/pull/965#issuecomment-502861470
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294527606
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
 ##
 @@ -200,56 +201,67 @@ protected void configure() {
   @Test
   public void testGetKeysForContainer() {
 
-Response response = containerKeyService.getKeysForContainer(1L);
+Response response = containerKeyService.getKeysForContainer(1L, 2);
 
 Collection<KeyMetadata> keyMetadataList =
 (Collection<KeyMetadata>) response.getEntity();
-assertTrue(keyMetadataList.size() == 2);
+assertEquals(keyMetadataList.size(), 2);
 
 
 Review comment:
   Can we add tests with default value of limit also?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294527750
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
 ##
 @@ -200,56 +201,67 @@ protected void configure() {
   @Test
   public void testGetKeysForContainer() {
 
-Response response = containerKeyService.getKeysForContainer(1L);
+Response response = containerKeyService.getKeysForContainer(1L, 2);
 
 Review comment:
   Can we add tests with default value of limit also?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #981: HDDS-1696. RocksDB use separate 
Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#issuecomment-502859264
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1002 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 90 | Maven dependency ordering for branch |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 291 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 850 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 360 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 561 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 507 | the patch passed |
   | +1 | compile | 304 | the patch passed |
   | +1 | javac | 304 | the patch passed |
   | -0 | checkstyle | 38 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 665 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 587 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 109 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2171 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 8482 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.utils.db.TestDBStoreBuilder |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-981/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/981 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux e53471f1b77a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3d020e9 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-981/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-981/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-981/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-981/1/testReport/ |
   | Max. process+thread count | 4043 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-981/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
vivekratnavel commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294527046
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -72,10 +74,11 @@
* @return {@link Response}
*/
   @GET
-  public Response getContainers() {
+  public Response getContainers(
+  @DefaultValue("-1") @QueryParam("limit") int limit) {
 Map<Long, ContainerMetadata> containersMap;
 try {
-  containersMap = containerDBServiceProvider.getContainers();
+  containersMap = containerDBServiceProvider.getContainers(limit);
 
 Review comment:
   Yes, that's correct.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294526542
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -181,6 +182,12 @@ public Integer getCountForForContainerKeyPrefix(
   Long containerID = keyValue.getKey().getContainerId();
   Integer numberOfKeys = keyValue.getValue();
 
+  // break the loop if limit has been reached
+  // and one more new entity needs to be added to the containers map
+  if (containers.size() == limit && !containers.containsKey(containerID)) {
 
 Review comment:
   I have not understood the reason for the 2nd condition.
   Also, this method can be called with limit -1. Can you update the javadoc 
to describe what happens in that case? And do we need to add any check for 
other negative values of this limit?
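   
   An illustrative javadoc sketch for that contract (the wording is ours, 
not the patch author's):
   
       /**
        * Returns container metadata, merging key counts per container ID.
        *
        * @param limit maximum number of containers to return; any negative
        *              value (such as the JAX-RS query-param default of -1)
        *              disables the limit, because containers.size() can
        *              never equal a negative number.
        */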


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294526542
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -181,6 +182,12 @@ public Integer getCountForForContainerKeyPrefix(
   Long containerID = keyValue.getKey().getContainerId();
   Integer numberOfKeys = keyValue.getValue();
 
+  // break the loop if limit has been reached
+  // and one more new entity needs to be added to the containers map
+  if (containers.size() == limit && !containers.containsKey(containerID)) {
 
 Review comment:
   I have not understood the reason for the 2nd condition.
   Also, this method can be called with limit -1. Can you update the javadoc 
to describe what happens in that case?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294525907
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -72,10 +74,11 @@
* @return {@link Response}
*/
   @GET
-  public Response getContainers() {
+  public Response getContainers(
+  @DefaultValue("-1") @QueryParam("limit") int limit) {
 Map<Long, ContainerMetadata> containersMap;
 try {
-  containersMap = containerDBServiceProvider.getContainers();
+  containersMap = containerDBServiceProvider.getContainers(limit);
 
 Review comment:
   The default value of the query param limit is -1.
   So, in the case of -1, it behaves like normal and lists all the containers?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r294525907
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -72,10 +74,11 @@
* @return {@link Response}
*/
   @GET
-  public Response getContainers() {
+  public Response getContainers(
+  @DefaultValue("-1") @QueryParam("limit") int limit) {
 Map<Long, ContainerMetadata> containersMap;
 try {
-  containersMap = containerDBServiceProvider.getContainers();
+  containersMap = containerDBServiceProvider.getContainers(limit);
 
 Review comment:
   The default value of the query param limit is -1.
   So, in the case of -1, it behaves like normal and lists all the containers.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar.

2019-06-17 Thread GitBox
xiaoyuyao commented on a change in pull request #973: HDDS-1611. Evaluate ACL 
on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/973#discussion_r294524841
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -163,10 +164,49 @@ public boolean hasAccess(OzoneAclInfo acl) {
 if (aclBitSet == null) {
   return false;
 }
+BitSet result = BitSet.valueOf(acl.getRights().toByteArray());
+result.and(aclBitSet);
+return !result.equals(ZERO_BITSET);
+  }
+
+  /**
+   * For a given acl, check if the user has access rights.
+   * @param acl
+   * @param aclType
+   * @param ugi
+   *
+   * @return true if given ugi has acl set, else false.
+   * */
+  public boolean hasAccess(ACLType acl, ACLIdentityType aclType,
 
 Review comment:
   As discussed offline, we need to handle the different identity types.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-06-17 Thread GitBox
hanishakoneru commented on a change in pull request #871: HDDS-1579. Create 
OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#discussion_r294524066
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/metrics/OzoneManagerDoubleBufferMetrics.java
 ##
 @@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis.metrics;
+
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * Class which maintains metrics related to OzoneManager DoubleBuffer.
+ */
+public class OzoneManagerDoubleBufferMetrics {
+
+  private static final String SOURCE_NAME =
+  OzoneManagerDoubleBufferMetrics.class.getSimpleName();
+
+  @Metric(about = "Total Number of flush iterations happened in " +
+  "OzoneManagerDoubleBuffer.")
+  private MutableCounterLong totalNumOfFlushIterations;
 
 Review comment:
   maxNumberOfTransactionsFlushedInOneIteration seems correct, and it is 
clear what it implies.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on issue #982: HDDS-1702. Optimize Ozone Recon build time

2019-06-17 Thread GitBox
vivekratnavel commented on issue #982: HDDS-1702. Optimize Ozone Recon build 
time
URL: https://github.com/apache/hadoop/pull/982#issuecomment-502848781
 
 
   @avijayanhwx @swagle @anuengineer @elek Please review when you find time. 
Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar.

2019-06-17 Thread GitBox
xiaoyuyao commented on a change in pull request #973: HDDS-1611. Evaluate ACL 
on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/973#discussion_r294514957
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -265,4 +272,89 @@ public static long getTimeDurationInMS(Configuration 
conf, String key,
 }
 return listOfAcls;
   }
+
+  /**
+   * Check if the acl right requested for the given RequestContext exists
+   * in the provided acl list.
+   * Acl validation rules:
+   * 1. If a user/group has the ALL bit set, then the user/group has all rights.
+   * 2. If a user/group has the NONE bit set, then the user/group has no rights.
+   * 3. For all other individual rights, the individual bits should be set.
+   *
+   * @param acls
+   * @param context
+   * @return true if the acl list contains the right requested in the context.
+   * */
+  public static boolean checkAclRight(List<OzoneAclInfo> acls,
+  RequestContext context) throws OMException {
+String[] userGroups = context.getClientUgi().getGroupNames();
+for (OzoneAclInfo a : acls) {
+  BitSet rights = BitSet.valueOf(a.getRights().toByteArray());
+  switch (a.getType()) {
+  case USER:
+if (a.getName().equals(context.getClientUgi().getUserName())) {
 
 Review comment:
   Can we move context.getClientUgi().getUserName() out of the for loop?
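   A minimal sketch of the suggested hoisting, based only on the snippet above 
(illustrative, not the final patch):

{code:java}
// Resolve the caller's user name once, instead of once per acl entry.
String userName = context.getClientUgi().getUserName();
for (OzoneAclInfo a : acls) {
  BitSet rights = BitSet.valueOf(a.getRights().toByteArray());
  switch (a.getType()) {
  case USER:
    if (a.getName().equals(userName)) {
      // ... evaluate rights as before
    }
    break;
  default:
    // ... other acl types unchanged
    break;
  }
}
{code}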


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on issue #982: HDDS-1702. Optimize Ozone Recon build time

2019-06-17 Thread GitBox
vivekratnavel commented on issue #982: HDDS-1702. Optimize Ozone Recon build 
time
URL: https://github.com/apache/hadoop/pull/982#issuecomment-502848572
 
 
   /label ozone
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel opened a new pull request #982: HDDS-1702. Optimize Ozone Recon build time

2019-06-17 Thread GitBox
vivekratnavel opened a new pull request #982: HDDS-1702. Optimize Ozone Recon 
build time
URL: https://github.com/apache/hadoop/pull/982
 
 
   Currently, the hadoop-ozone-recon node_modules folder is copied to the target 
folder, which takes a lot of time when building the hadoop-ozone project. 
   
   This PR reduces the build time by excluding the node_modules folder. With this 
patch it takes only about 10 seconds to compile Recon.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16379) S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-17 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HADOOP-16379:
--
Summary: S3AInputStream#unbuffer should merge input stream stats into 
fs-wide stats  (was: S3AInputStream#unbuffer should merge input stream stats 
fs-wide stats)

> S3AInputStream#unbuffer should merge input stream stats into fs-wide stats
> --
>
> Key: HADOOP-16379
> URL: https://issues.apache.org/jira/browse/HADOOP-16379
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> HADOOP-14747 added support for {{CanUnbuffer}} to S3A. One issue for 
> applications that rely on the {{CanUnbuffer}} functionality is that the 
> current S3A code only merges input stream stats when the stream is closed. 
> Applications using {{CanUnbuffer}} might not close streams for a long time, 
> so any stats reported by input streams don't get reported in the 
> filesystem-wide metrics.
> This JIRA proposes merging input stream statistics when 
> {{S3AInputStream#unbuffer}} is called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16095) Support impersonation for AuthenticationFilter

2019-06-17 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-16095.

Resolution: Fixed

All related tasks have been closed; marking this as resolved. 

Thank you, [~Prabhu Joseph] for the patches.

Thank you, [~lmccay], [~sunilg], and [~jojochuang] for input and reviews.

> Support impersonation for AuthenticationFilter
> --
>
> Key: HADOOP-16095
> URL: https://issues.apache.org/jira/browse/HADOOP-16095
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16095.004.patch
>
>
> External services or YARN services may need to call into WebHDFS or the YARN 
> REST API on behalf of the user using web protocols. It would be good to 
> support an impersonation mechanism in AuthenticationFilter or similar 
> extensions. The general design is similar to UserGroupInformation.doAs in the 
> RPC layer.
> The calling service's credential is verified as a proxy user coming from a 
> trusted host by checking the Hadoop proxy user ACL on the server side. If the 
> proxy user ACL allows the proxy user to become the doAs user, the HttpRequest 
> object will report REMOTE_USER as the doAs user. This feature enables web 
> application logic to be written with minimal changes to call Hadoop APIs with 
> the UserGroupInformation.doAs() wrapper.
> h2. HTTP Request
> A few possible options:
> 1. Using query parameter to pass doAs user:
> {code:java}
> POST /service?doAs=foobar
> Authorization: [proxy user Kerberos token]
> {code}
> 2. Use HTTP Header to pass doAs user:
> {code:java}
> POST /service
> Authorization: [proxy user Kerberos token]
> x-hadoop-doas: foobar
> {code}
> h2. HTTP Response
> 403 - Forbidden (including when impersonation is not allowed)
> h2. Proxy User ACL requirement
> The proxy user's Kerberos token maps to a service principal, such as 
> yarn/host1.example.com. The host part of the credential and the HTTP request 
> origin are both validated against the *hadoop.proxyuser.yarn.hosts* ACL. The 
> doAs user's group membership or identity is checked with either 
> *hadoop.proxyuser.yarn.groups* or *hadoop.proxyuser.yarn.users*. This ensures 
> the caller is coming from an authorized host and belongs to an authorized group.
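A rough sketch of the server-side check described above (illustrative only; the 
request/response wiring and authenticatedPrincipal are assumed to come from the 
surrounding filter, and the ProxyUsers ACLs must already have been loaded via 
ProxyUsers.refreshSuperUserGroupsConfiguration):

{code:java}
// Hypothetical servlet-filter fragment sketching the doAs flow above.
String doAs = request.getParameter("doAs");            // option 1 above
UserGroupInformation realUser =
    UserGroupInformation.createRemoteUser(authenticatedPrincipal);
if (doAs != null) {
  UserGroupInformation proxyUgi =
      UserGroupInformation.createProxyUser(doAs, realUser);
  try {
    // Validates the hadoop.proxyuser.<user>.hosts/groups/users ACLs.
    ProxyUsers.authorize(proxyUgi, request.getRemoteAddr());
  } catch (AuthorizationException e) {
    response.sendError(HttpServletResponse.SC_FORBIDDEN, e.getMessage());
    return;
  }
  // From here on, REMOTE_USER is reported as the doAs user.
}
{code}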



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2019-06-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865946#comment-16865946
 ] 

Steve Loughran commented on HADOOP-15183:
-

PR is all updated.

Seeing repeatable failures of the rm / operation, which is interesting. It looks 
like the delete of a root listing isn't taking effect with -Ddynamo -Dauth; I am 
wondering if the fact that root is "special" is causing the fun here. Will debug. 
Not sure if this is from this patch or HADOOP-16279.

> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt, rmtestfailure.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10 and read only to 
> caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.
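A sketch of that sequence against the FileSystem API (illustrative only; the 
bucket and paths are placeholders, and since FileSystem.rename does not expand 
globs, the src/file* step is shown as a directory rename):

{code:java}
// Illustrative reproduction of the sequence above (paths are placeholders).
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(new URI("s3a://bucket/"), conf);
Path src = new Path("/src");
Path dest = new Path("/dest");

fs.rename(new Path(src, "file1"), dest);   // AccessDeniedException in the
                                           // delete; dest/file1 exists after
fs.delete(new Path(dest, "file1"), false); // remove the copied file
fs.rename(src, dest);                      // fails partway through again
FileStatus[] listing = fs.listStatus(dest);
// dest/file1 is missing from the listing: its tombstone from the delete was
// never replaced, because the metadata update at the end of rename() never ran.
{code}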



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-17 Thread Greg Senia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-16350:

Attachment: (was: HADOOP-16350.patch)

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated delegation 
> tokens were not requested from the remote NameNode. Many customers were using 
> this as a security feature to prevent TDE/Encryption Zone data from being 
> distcped to remote clusters. But there was still a use case for allowing 
> distcp of data residing in folders that are not encrypted with a 
> KMSProvider/Encryption Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp now 
> fails, as we, along with other customers (HDFS-13696), DO NOT allow KMSServer 
> endpoints to be exposed outside our cluster network: the data residing in 
> these TDE/Zones is very critical and cannot be distcped between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it will 
> allow this area of code to operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104, but the pre-HADOOP-14104 behavior should at least 
> have remained available behind an option, so that Hadoop/KMS code can avoid 
> requesting remote KMSServer URIs, which would then trigger delegation token 
> requests even when not operating on encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host 
> unreachable)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1029)
> at 
> 
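A minimal sketch of the proposed guard (the placement inside the KMS client code 
path is hypothetical; only the property name comes from the proposal above):

{code:java}
// Hypothetical guard sketching the proposed opt-out.
boolean allowRemoteKms = conf.getBoolean(
    "hadoop.security.kms.client.allow.remote.kms", true);
if (!allowRemoteKms) {
  // Pre-HADOOP-14104 behavior: do not request remote KMSServer URIs,
  // and therefore never attempt to fetch their delegation tokens.
  return tokens; // i.e. skip the remote-KMS branch entirely
}
{code}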

[jira] [Updated] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-17 Thread Greg Senia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-16350:

Status: Open  (was: Patch Available)

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.2, 2.7.6, 3.0.0, 2.8.3
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated delegation 
> tokens were not requested from the remote NameNode. Many customers were using 
> this as a security feature to prevent TDE/Encryption Zone data from being 
> distcped to remote clusters. But there was still a use case for allowing 
> distcp of data residing in folders that are not encrypted with a 
> KMSProvider/Encryption Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp now 
> fails, as we, along with other customers (HDFS-13696), DO NOT allow KMSServer 
> endpoints to be exposed outside our cluster network: the data residing in 
> these TDE/Zones is very critical and cannot be distcped between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it will 
> allow this area of code to operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104, but the pre-HADOOP-14104 behavior should at least 
> have remained available behind an option, so that Hadoop/KMS code can avoid 
> requesting remote KMSServer URIs, which would then trigger delegation token 
> requests even when not operating on encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host 
> unreachable)
> at 
> 

[jira] [Assigned] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-17 Thread Greg Senia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia reassigned HADOOP-16350:
---

Assignee: (was: Greg Senia)

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated delegation 
> tokens were not requested from the remote NameNode. Many customers were using 
> this as a security feature to prevent TDE/Encryption Zone data from being 
> distcped to remote clusters. But there was still a use case for allowing 
> distcp of data residing in folders that are not encrypted with a 
> KMSProvider/Encryption Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp now 
> fails, as we, along with other customers (HDFS-13696), DO NOT allow KMSServer 
> endpoints to be exposed outside our cluster network: the data residing in 
> these TDE/Zones is very critical and cannot be distcped between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it will 
> allow this area of code to operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104, but the pre-HADOOP-14104 behavior should at least 
> have remained available behind an option, so that Hadoop/KMS code can avoid 
> requesting remote KMSServer URIs, which would then trigger delegation token 
> requests even when not operating on encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host 
> unreachable)
> at 
> 

[jira] [Updated] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2019-06-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15183:

Attachment: rmtestfailure.txt

> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt, rmtestfailure.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10 and read only to 
> caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on a change in pull request #978: HDDS-1694. TestNodeReportHandler is failing with NPE

2019-06-17 Thread GitBox
elek commented on a change in pull request #978: HDDS-1694. 
TestNodeReportHandler is failing with NPE
URL: https://github.com/apache/hadoop/pull/978#discussion_r294497236
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
 ##
 @@ -221,9 +221,8 @@ public VersionResponse getVersion(SCMVersionRequestProto 
versionRequest) {
 return VersionResponse.newBuilder()
 .setVersion(this.version.getVersion())
 .addValue(OzoneConsts.SCM_ID,
-this.scmManager.getScmStorageConfig().getScmId())
-.addValue(OzoneConsts.CLUSTER_ID, this.scmManager.getScmStorageConfig()
-.getClusterID())
+this.scmStorageConfig.getScmId())
+.addValue(OzoneConsts.CLUSTER_ID, this.scmStorageConfig.getClusterID())
 
 Review comment:
   Good point, thanks. ClusterId field is removed in 50420c21c8d.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16378) RawLocalFileStatus throws exception if a file is created and deleted quickly

2019-06-17 Thread K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

K S updated HADOOP-16378:
-
Priority: Critical  (was: Major)

> RawLocalFileStatus throws exception if a file is created and deleted quickly
> 
>
> Key: HADOOP-16378
> URL: https://issues.apache.org/jira/browse/HADOOP-16378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
> Environment: Ubuntu 18.04, Hadoop 2.7.3 (Though this problem exists 
> on later versions of Hadoop as well), Java 8 ( + Java 11).
>Reporter: K S
>Priority: Critical
>
> Bug occurs when NFS creates temporary ".nfs*" files as part of file moves and 
> accesses. If this file is deleted very quickly after being created, a 
> RuntimeException is thrown. The root cause is in the loadPermissionInfo 
> method in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission 
> info, it first does
>  
> {code:java}
> ls -ld{code}
>  and then attempts to get permissions info about each file. If a file 
> disappears between these two steps, an exception is thrown.
> *Reproduction Steps:*
> An isolated way to reproduce the bug is to run FileInputFormat.listStatus 
> over and over on the same dir that we’re creating those temp files in. On 
> Ubuntu or any other Linux-based system, this should fail intermittently.
> *Fix:*
> One way in which we managed to fix this was to ignore the exception being 
> thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
> possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem 
> would fix this issue, though we never tested this, and the flag was 
> implemented to fix -HADOOP-9652-. Could also fix in conjunction with 
> HADOOP-8772.
>  
>  
>  
>  
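A sketch of that reproduction (illustrative only; the churn thread stands in for 
NFS creating and deleting ".nfs*" files):

{code:java}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RawLocalFileSystem;

public class ListStatusRace {
  public static void main(String[] args) throws Exception {
    RawLocalFileSystem fs = new RawLocalFileSystem();
    fs.initialize(URI.create("file:///"), new Configuration());
    Path dir = new Path("/tmp/racedir");
    fs.mkdirs(dir);

    // Create and delete files quickly, like transient NFS temp files.
    Thread churn = new Thread(() -> {
      try {
        for (int i = 0; i < 100_000; i++) {
          Path p = new Path(dir, ".nfs" + i);
          fs.create(p).close();
          fs.delete(p, false);
        }
      } catch (IOException ignored) {
      }
    });
    churn.start();

    for (int i = 0; i < 10_000; i++) {
      fs.listStatus(dir); // intermittently throws a RuntimeException from
                          // loadPermissionInfo() when a file vanishes
                          // between the listing and the per-file stat
    }
    churn.join();
  }
}
{code}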



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16378) RawLocalFileStatus throws exception if a file is created and deleted quickly

2019-06-17 Thread K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

K S updated HADOOP-16378:
-
Description: 
Bug occurs when NFS creates temporary ".nfs*" files as part of file moves and 
accesses. If this file is deleted very quickly after being created, a 
RuntimeException is thrown. The root cause is in the loadPermissionInfo method 
in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission info, it 
first does

 
{code:java}
ls -ld{code}
 and then attempts to get permissions info about each file. If a file 
disappears between these two steps, an exception is thrown.

*Reproduction Steps:*

An isolated way to reproduce the bug is to run FileInputFormat.listStatus over 
and over on the same dir that we’re creating those temp files in. On Ubuntu or 
any other Linux-based system, this should fail intermittently.

*Fix:*

One way in which we managed to fix this was to ignore the exception being 
thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem would 
fix this issue, though we never tested this, and the flag was implemented to 
fix -HADOOP-9652-. Could also fix in conjunction with HADOOP-8772.

 

 

 

 

  was:
Bug occurs when NFS creates temporary ".nfs*" files as part of file moves and 
accesses. If this file is deleted very quickly after being created, a 
RuntimeException is thrown. The root cause is in the loadPermissionInfo method 
in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission info, it 
first does

 
{code:java}
ls -ld{code}
 and then attempts to get permissions info about each file. If a file 
disappears between these two steps, an exception is thrown.

*Reproduction Steps:*

An isolated way to reproduce the bug is to run FileInputFormat.listStatus over 
and over on the same dir that we’re creating those temp files in. On Ubuntu or 
any other Linux-based system, this should fail intermittently.

*Fix:*

One way in which we managed to fix this was to ignore the exception being 
thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem would 
fix this issue, though we never tested this, and the flag was implemented to 
fix HADOOP-9652

 

 

 

 


> RawLocalFileStatus throws exception if a file is created and deleted quickly
> 
>
> Key: HADOOP-16378
> URL: https://issues.apache.org/jira/browse/HADOOP-16378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
> Environment: Ubuntu 18.04, Hadoop 2.7.3 (Though this problem exists 
> on later versions of Hadoop as well), Java 8 ( + Java 11).
>Reporter: K S
>Priority: Major
>
> Bug occurs when NFS creates temporary ".nfs*" files as part of file moves and 
> accesses. If this file is deleted very quickly after being created, a 
> RuntimeException is thrown. The root cause is in the loadPermissionInfo 
> method in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission 
> info, it first does
>  
> {code:java}
> ls -ld{code}
>  and then attempts to get permissions info about each file. If a file 
> disappears between these two steps, an exception is thrown.
> *Reproduction Steps:*
> An isolated way to reproduce the bug is to run FileInputFormat.listStatus 
> over and over on the same dir that we’re creating those temp files in. On 
> Ubuntu or any other Linux-based system, this should fail intermittently.
> *Fix:*
> One way in which we managed to fix this was to ignore the exception being 
> thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
> possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem 
> would fix this issue, though we never tested this, and the flag was 
> implemented to fix -HADOOP-9652-. Could also fix in conjunction with 
> HADOOP-8772.
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-17 Thread GitBox
steveloughran commented on issue #951: HADOOP-15183. S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#issuecomment-502829225
 
 
   ok, I unintentionally force-pushed rather than closing. Hopefully that won't 
be too disruptive. If need be I can switch to a new PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #978: HDDS-1694. TestNodeReportHandler is failing with NPE

2019-06-17 Thread GitBox
bharatviswa504 commented on a change in pull request #978: HDDS-1694. 
TestNodeReportHandler is failing with NPE
URL: https://github.com/apache/hadoop/pull/978#discussion_r294491156
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
 ##
 @@ -221,9 +221,8 @@ public VersionResponse getVersion(SCMVersionRequestProto 
versionRequest) {
 return VersionResponse.newBuilder()
 .setVersion(this.version.getVersion())
 .addValue(OzoneConsts.SCM_ID,
-this.scmManager.getScmStorageConfig().getScmId())
-.addValue(OzoneConsts.CLUSTER_ID, this.scmManager.getScmStorageConfig()
-.getClusterID())
+this.scmStorageConfig.getScmId())
+.addValue(OzoneConsts.CLUSTER_ID, this.scmStorageConfig.getClusterID())
 
 Review comment:
   Minor: We can use the clusterID that is set in the SCMNodeManager constructor, 
or remove the clusterID field and read it from scmStorageConfig. I feel the 2nd 
option is better, for consistency with how we already get the scmId from 
scmStorageConfig.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #980: Update RocksDB version to 6.0.1

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #980: Update RocksDB version to 6.0.1
URL: https://github.com/apache/hadoop/pull/980#issuecomment-502827874
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 67 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 692 | trunk passed |
   | +1 | compile | 305 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1874 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 516 | the patch passed |
   | +1 | compile | 311 | the patch passed |
   | -1 | javac | 98 | hadoop-hdds generated 3 new + 11 unchanged - 0 fixed = 
14 total (was 11) |
   | -1 | javac | 213 | hadoop-ozone generated 1 new + 7 unchanged - 0 fixed = 
8 total (was 7) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 732 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 186 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1387 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 5713 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-980/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/980 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 80ae802a1ddd 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3d020e9 |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-980/1/artifact/out/diff-compile-javac-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-980/1/artifact/out/diff-compile-javac-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-980/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-980/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-980/1/testReport/ |
   | Max. process+thread count | 4643 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-980/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16379) S3AInputStream#unbuffer should merge input stream stats fs-wide stats

2019-06-17 Thread Sahil Takiar (JIRA)
Sahil Takiar created HADOOP-16379:
-

 Summary: S3AInputStream#unbuffer should merge input stream stats 
fs-wide stats
 Key: HADOOP-16379
 URL: https://issues.apache.org/jira/browse/HADOOP-16379
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Sahil Takiar
Assignee: Sahil Takiar


HADOOP-14747 added support for {{CanUnbuffer}} to S3A. One issue for 
applications that rely on the {{CanUnbuffer}} functionality is that the 
current S3A code only merges input stream stats when the stream is closed. 
Applications using {{CanUnbuffer}} might not close streams for a long time, so 
any stats reported by input streams don't get reported in the filesystem-wide 
metrics.

This JIRA proposes merging input stream statistics when 
{{S3AInputStream#unbuffer}} is called.
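
A sketch of the shape of the change (only unbuffer() is from the CanUnbuffer 
interface; the helper names are hypothetical, not the actual S3AInputStream 
internals):

{code:java}
// Hypothetical shape of the proposal inside S3AInputStream.
@Override
public synchronized void unbuffer() {
  releaseBufferedData();        // hypothetical: existing unbuffer work
  mergeInputStreamStatistics(); // hypothetical: fold this stream's statistics
                                // into the filesystem-wide metrics now,
                                // rather than waiting for close()
}
{code}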



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #963: HDFS-14564: Add libhdfs APIs for 
readFully; add readFully to ByteBufferPositionedReadable
URL: https://github.com/apache/hadoop/pull/963#issuecomment-502824646
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1084 | trunk passed |
   | +1 | compile | 1054 | trunk passed |
   | +1 | checkstyle | 132 | trunk passed |
   | +1 | mvnsite | 214 | trunk passed |
   | +1 | shadedclient | 999 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 23 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 23 | branch/hadoop-hdfs-project/hadoop-hdfs-native-client 
no findbugs output file (findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 152 | the patch passed |
   | +1 | compile | 1017 | the patch passed |
   | +1 | cc | 1017 | the patch passed |
   | +1 | javac | 1017 | the patch passed |
   | +1 | checkstyle | 137 | root: The patch generated 0 new + 111 unchanged - 
1 fixed = 111 total (was 112) |
   | +1 | mvnsite | 219 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 645 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | 0 | findbugs | 24 | hadoop-hdfs-project/hadoop-hdfs-native-client has no 
data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 551 | hadoop-common in the patch passed. |
   | +1 | unit | 123 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 6208 | hadoop-hdfs in the patch failed. |
   | +1 | unit | 376 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 63 | The patch does not generate ASF License warnings. |
   | | | 14271 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDatanodeDeath |
   |   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.TestWriteRead |
   |   | hadoop.hdfs.TestFileCorruption |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestDFSStorageStateRecovery |
   |   | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/963 |
   | JIRA Issue | HDFS-14564 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 1c0beea80a3c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 22b36dd |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/4/testReport/ |
   | Max. process+thread count | 3964 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-native-client U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HADOOP-13600) S3a rename() to copy files in a directory in parallel

2019-06-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13600:

   Resolution: Duplicate
Fix Version/s: HADOOP-15183
   Status: Resolved  (was: Patch Available)

HADOOP-15183 does this as part of the support for partial rename failures: it 
schedules each copy in its own thread, runs a hard-coded 10 copies at a time, and 
waits for all ten to complete before moving on.

No attempt is made to be clever about sorting big files first so that the size of 
each page is roughly the same, or about other performance tunings. I'm trying to 
make things a bit faster without overloading anything, from local thread pools to 
the S3 shards.
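
In outline, that scheduling pattern looks something like this (a sketch, not the 
committed code; the page size of 10 is the hard-coded value mentioned above, and 
FileToCopy/files/copy() are placeholders):

{code:java}
// Sketch: submit copies in pages of 10 and wait for each page to finish
// before starting the next.
ExecutorService pool = Executors.newFixedThreadPool(10);
List<Future<?>> page = new ArrayList<>();
for (FileToCopy f : files) {            // FileToCopy/files are placeholders
  page.add(pool.submit(() -> copy(f))); // copy() is a placeholder
  if (page.size() == 10) {
    for (Future<?> done : page) {
      done.get();                       // surfaces the first failure
    }
    page.clear();
  }
}
for (Future<?> done : page) {           // drain the final partial page
  done.get();
}
pool.shutdown();
{code}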

> S3a rename() to copy files in a directory in parallel
> -
>
> Key: HADOOP-13600
> URL: https://issues.apache.org/jira/browse/HADOOP-13600
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Priority: Major
> Fix For: HADOOP-15183
>
> Attachments: HADOOP-13600.001.patch
>
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the copy may be reducible to the duration of the longest copy. 
> For a directory with many files, this will be significant.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #979: HDDS-1698. Switch to use apache/ozone-runner in the compose/Dockerfile

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #979: HDDS-1698. Switch to use 
apache/ozone-runner in the compose/Dockerfile
URL: https://github.com/apache/hadoop/pull/979#issuecomment-502815045
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 543 | trunk passed |
   | +1 | compile | 322 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 802 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 199 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 491 | the patch passed |
   | +1 | compile | 325 | the patch passed |
   | +1 | javac | 325 | the patch passed |
   | -1 | hadolint | 4 | The patch generated 3 new + 14 unchanged - 3 fixed = 
17 total (was 17) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 2 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 711 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 198 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 211 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1120 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 71 | The patch does not generate ASF License warnings. |
   | | | 5308 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/979 |
   | Optional Tests | dupname asflicense shellcheck shelldocs compile javac 
javadoc mvninstall mvnsite unit shadedclient xml hadolint yamllint |
   | uname | Linux a438c40f65f8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 22b36dd |
   | Default Java | 1.8.0_212 |
   | hadolint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/1/artifact/out/diff-patch-hadolint.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/1/testReport/ |
   | Max. process+thread count | 4806 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 opened a new pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-17 Thread GitBox
bharatviswa504 opened a new pull request #981: HDDS-1696. RocksDB use separate 
Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981
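
   The PR body is empty, but based on the title, separating the write-ahead log 
from the data directory in RocksDB is typically done along these lines (a sketch 
using the RocksDB Java API's setWalDir; the paths are placeholders):

{code:java}
// Sketch: keep the RocksDB WAL in a separate location (e.g. a dedicated
// disk) from the SST data directory.
Options options = new Options()
    .setCreateIfMissing(true)
    .setWalDir("/data/om/wal");                     // placeholder path
RocksDB db = RocksDB.open(options, "/data/om/db"); // placeholder path
{code}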
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16378) RawLocalFileStatus throws exception if a file is created and deleted quickly

2019-06-17 Thread K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

K S updated HADOOP-16378:
-
Environment: Ubuntu 18.04, Hadoop 2.7.3 (Though this problem exists on 
later versions of Hadoop as well), Java 8 ( + Java 11).  (was: Ubuntu 18.04, 
Hadoop 2.7.3 (Though this problem exists on later versions of Hadoop as well), 
Java 8 ( + Java 11))
Description: 
Bug occurs when NFS creates temporary ".nfs*" files as part of file moves and 
accesses. If this file is deleted very quickly after being created, a 
RuntimeException is thrown. The root cause is in the loadPermissionInfo method 
in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission info, it 
first does

 
{code:java}
ls -ld{code}
 and then attempts to get permissions info about each file. If a file 
disappears between these two steps, an exception is thrown.

*Reproduction Steps:*

An isolated way to reproduce the bug is to run FileInputFormat.listStatus over 
and over on the same dir that we’re creating those temp files in. On Ubuntu or 
any other Linux-based system, this should fail intermittently.

*Fix:*

One way in which we managed to fix this was to ignore the exception being 
thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem would 
fix this issue, though we never tested this, and the flag was implemented to 
fix HADOOP-9652

 

 

 

 

  was:
Bug occurs when Hadoop creates temporary ".nfs*" files as part of file moves 
and accesses. If this file is deleted very quickly after being created, a 
RuntimeException is thrown. The root cause is in the loadPermissionInfo method 
in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission info, it 
first does

 
{code:java}
ls -ld{code}
 and then attempts to get permissions info about each file. If a file 
disappears between these two steps, an exception is thrown.

*Reproduction Steps:*

An isolated way to reproduce the bug is to run FileInputFormat.listStatus over 
and over on the same dir that we’re creating those temp files in. On Ubuntu or 
any other Linux-based system, this should fail intermittently.

*Fix:*

One way in which we managed to fix this was to ignore the exception being 
thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem would 
fix this issue, though we never tested this, and the flag was implemented to 
fix HADOOP-9652

 

 

 

 


> RawLocalFileStatus throws exception if a file is created and deleted quickly
> 
>
> Key: HADOOP-16378
> URL: https://issues.apache.org/jira/browse/HADOOP-16378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
> Environment: Ubuntu 18.04, Hadoop 2.7.3 (Though this problem exists 
> on later versions of Hadoop as well), Java 8 ( + Java 11).
>Reporter: K S
>Priority: Major
>
> Bug occurs when NFS creates temporary ".nfs*" files as part of file moves and 
> accesses. If this file is deleted very quickly after being created, a 
> RuntimeException is thrown. The root cause is in the loadPermissionInfo 
> method in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission 
> info, it first does
>  
> {code:java}
> ls -ld{code}
>  and then attempts to get permissions info about each file. If a file 
> disappears between these two steps, an exception is thrown.
> *Reproduction Steps:*
> An isolated way to reproduce the bug is to run FileInputFormat.listStatus 
> over and over on the same dir that we’re creating those temp files in. On 
> Ubuntu or any other Linux-based system, this should fail intermittently.
> *Fix:*
> One way in which we managed to fix this was to ignore the exception being 
> thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
> possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem 
> would fix this issue, though we never tested this, and the flag was 
> implemented to fix HADOOP-9652.
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16371) Option to disable GCM for SSL connections when running on Java 8

2019-06-17 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865897#comment-16865897
 ] 

Sahil Takiar commented on HADOOP-16371:
---

[~ste...@apache.org] PR is ready for review: 
[https://github.com/apache/hadoop/pull/970] let me know what you think

> Option to disable GCM for SSL connections when running on Java 8
> 
>
> Key: HADOOP-16371
> URL: https://issues.apache.org/jira/browse/HADOOP-16371
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> This was the original objective of HADOOP-16050. HADOOP-16050 was changed to 
> mimic HADOOP-15669 and added (or attempted to add) support for 
> Wildfly-OpenSSL in S3A.
> Due to the number of issues have seen with S3A + WildFly OpenSSL (see 
> HADOOP-16346), HADOOP-16050 was reverted.
> As shown in the description of HADOOP-16050, and the analysis done in 
> HADOOP-15669, GCM has major performance issues when running on Java 8. 
> Removing it from the list of available ciphers can drastically improve 
> performance, perhaps not as much as using OpenSSL, but still a considerable 
> amount.
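
As a rough illustration, "removing GCM from the list of available ciphers" at 
the plain JSSE level looks like the sketch below; the eventual S3A option name 
and its wiring into the AWS SDK's socket factory are not shown here and would 
differ:

{code:java}
import java.util.Arrays;

import javax.net.ssl.SSLSocket;

final class GcmExclusionSketch {

  static void disableGcm(SSLSocket socket) {
    // Java 8's GCM implementation is slow, so drop those suites and let
    // the handshake negotiate one of the remaining ciphers.
    String[] withoutGcm = Arrays.stream(socket.getEnabledCipherSuites())
        .filter(suite -> !suite.contains("_GCM_"))
        .toArray(String[]::new);
    socket.setEnabledCipherSuites(withoutGcm);
  }
}
{code}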



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx opened a new pull request #980: Update RocksDB version to 6.0.1

2019-06-17 Thread GitBox
avijayanhwx opened a new pull request #980: Update RocksDB version to 6.0.1
URL: https://github.com/apache/hadoop/pull/980
 
 
   In RocksDB 6.0.1, some useful tuning features were brought into the JNI API. 
We need to upgrade the version in Ozone to pick those up.
   
   https://github.com/facebook/rocksdb/pull/4833


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer merged pull request #957: HDDS-1532. Improve the concurrent testing framework of Freon.

2019-06-17 Thread GitBox
anuengineer merged pull request #957: HDDS-1532. Improve the concurrent testing 
framework of Freon.
URL: https://github.com/apache/hadoop/pull/957
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #957: HDDS-1532. Improve the concurrent testing framework of Freon.

2019-06-17 Thread GitBox
anuengineer commented on issue #957: HDDS-1532. Improve the concurrent testing 
framework of Freon.
URL: https://github.com/apache/hadoop/pull/957#issuecomment-502791637
 
 
   Thank you for this excellent patch. It is a wonderful and brilliant patch. 
Really appreciate you taking care of this. @nandakumar131 @avijayanhwx 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16378) RawLocalFileStatus throws exception if a file is created and deleted quickly

2019-06-17 Thread K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

K S updated HADOOP-16378:
-
Affects Version/s: 3.3.0
  Environment: Ubuntu 18.04, Hadoop 2.7.3 (Though this problem exists 
on later versions of Hadoop as well), Java 8 ( + Java 11)
  Description: 
Bug occurs when Hadoop creates temporary ".nfs*" files as part of file moves 
and accesses. If this file is deleted very quickly after being created, a 
RuntimeException is thrown. The root cause is in the loadPermissionInfo method 
in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission info, it 
first does

 
{code:java}
ls -ld{code}
 and then attempts to get permissions info about each file. If a file 
disappears between these two steps, an exception is thrown.

*Reproduction Steps:*

An isolated way to reproduce the bug is to run FileInputFormat.listStatus over 
and over on the same dir that we’re creating those temp files in. On Ubuntu or 
any other Linux-based system, this should fail intermittently.

*Fix:*

One way in which we managed to fix this was to ignore the exception being 
thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem would 
fix this issue, though we never tested this, and the flag was implemented to 
fix HADOOP-9652.

 

 

 

 

  was:
Bug occurs when Hadoop creates temporary ".nfs*" files as part of file moves 
and accesses. If this file is deleted very quickly after being created, a 
RuntimeException is thrown. The root cause is in the loadPermissionInfo method 
in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission info, it 
first does

 
{code:java}
ls -ld{code}
 and then attempts to get permissions info about each file. If a file 
disappears between these two steps, an exception is thrown.

*Reproduction Steps:*

An isolated way to reproduce the bug is to run FileInputFormat.listStatus over 
and over on the same dir that we’re creating those temp files in. On Ubuntu or 
any other Linux-based system, this should fail intermittently. On MacOS (due to 
differences in how `ls` returns status codes) this should not fail. 

*Fix:*

One way in which we managed to fix this was to ignore the exception being 
thrown in loadPermissionInfo() if the exit code is 1 or 2.

 

 

 

 


> RawLocalFileStatus throws exception if a file is created and deleted quickly
> 
>
> Key: HADOOP-16378
> URL: https://issues.apache.org/jira/browse/HADOOP-16378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
> Environment: Ubuntu 18.04, Hadoop 2.7.3 (Though this problem exists 
> on later versions of Hadoop as well), Java 8 ( + Java 11)
>Reporter: K S
>Priority: Major
>
> Bug occurs when Hadoop creates temporary ".nfs*" files as part of file moves 
> and accesses. If this file is deleted very quickly after being created, a 
> RuntimeException is thrown. The root cause is in the loadPermissionInfo 
> method in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission 
> info, it first does
>  
> {code:java}
> ls -ld{code}
>  and then attempts to get permissions info about each file. If a file 
> disappears between these two steps, an exception is thrown.
> *Reproduction Steps:*
> An isolated way to reproduce the bug is to run FileInputFormat.listStatus 
> over and over on the same dir that we’re creating those temp files in. On 
> Ubuntu or any other Linux-based system, this should fail intermittently.
> *Fix:*
> One way in which we managed to fix this was to ignore the exception being 
> thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
> possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem 
> would fix this issue, though we never tested this, and the flag was 
> implemented to fix HADOOP-9652.
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #978: HDDS-1694. TestNodeReportHandler is failing with NPE

2019-06-17 Thread GitBox
xiaoyuyao commented on issue #978: HDDS-1694. TestNodeReportHandler is failing 
with NPE
URL: https://github.com/apache/hadoop/pull/978#issuecomment-502788918
 
 
   Thanks @elek  for fixing this. Change LGTM, +1. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16378) RawLocalFileStatus throws exception if a file is created and deleted quickly

2019-06-17 Thread K S (JIRA)
K S created HADOOP-16378:


 Summary: RawLocalFileStatus throws exception if a file is created 
and deleted quickly
 Key: HADOOP-16378
 URL: https://issues.apache.org/jira/browse/HADOOP-16378
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: K S


Bug occurs when Hadoop creates temporary ".nfs*" files as part of file moves 
and accesses. If this file is deleted very quickly after being created, a 
RuntimeException is thrown. The root cause is in the loadPermissionInfo method 
in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission info, it 
first does

 
{code:java}
ls -ld{code}
 and then attempts to get permissions info about each file. If a file 
disappears between these two steps, an exception is thrown.

*Reproduction Steps:*

An isolated way to reproduce the bug is to run FileInputFormat.listStatus over 
and over on the same dir that we’re creating those temp files in. On Ubuntu or 
any other Linux-based system, this should fail intermittently. On MacOS (due to 
differences in how `ls` returns status codes) this should not fail. 

*Fix:*

One way in which we managed to fix this was to ignore the exception being 
thrown in loadPermissionInfo() if the exit code is 1 or 2.

 

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek opened a new pull request #979: HDDS-1698. Switch to use apache/ozone-runner in the compose/Dockerfile

2019-06-17 Thread GitBox
elek opened a new pull request #979: HDDS-1698. Switch to use 
apache/ozone-runner in the compose/Dockerfile
URL: https://github.com/apache/hadoop/pull/979
 
 
   Since HDDS-1634 we have an Ozone-specific runner image to run Ozone with 
docker-compose based pseudo clusters.
   
   As the new apache/ozone-runner image has been uploaded to Docker Hub, we can 
switch our scripts over to the new image.
   
   See: https://issues.apache.org/jira/browse/HDDS-1698


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #978: HDDS-1694. TestNodeReportHandler is failing with NPE

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #978: HDDS-1694. TestNodeReportHandler is 
failing with NPE
URL: https://github.com/apache/hadoop/pull/978#issuecomment-502781129
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 473 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 532 | trunk passed |
   | +1 | compile | 281 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 931 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   | 0 | spotbugs | 364 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 579 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 471 | the patch passed |
   | +1 | compile | 295 | the patch passed |
   | +1 | javac | 295 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 210 | the patch passed |
   | +1 | findbugs | 680 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 185 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1400 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7330 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.common.impl.TestHddsDispatcher |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-978/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/978 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ba59447e4d86 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 22b36dd |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-978/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-978/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-978/1/testReport/ |
   | Max. process+thread count | 4121 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-978/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13980) S3Guard CLI: Add fsck check command

2019-06-17 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865709#comment-16865709
 ] 

Gabor Bota commented on HADOOP-13980:
-

Also, I agree with [~fabbri]'s idea that we should not consider "auth mode" as 
a factor.
We just check whether a directory with is_auth==true actually contains the same 
contents as the bucket on S3. There's no need to check the configuration for 
whether auth mode is enabled; just the flag on the dir listing.
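
A minimal sketch of that invariant, with both listings abstracted to plain name 
sets (the real fsck would walk DirListingMetadata and an S3 listing rather than 
take sets as parameters):

{code:java}
import java.util.Set;

final class AuthoritativeListingCheck {

  static boolean consistent(boolean isAuthoritative,
      Set<String> metadataStoreChildren, Set<String> s3Children) {
    // A non-authoritative listing may legitimately be partial, so only
    // the is_auth flag on the dir listing triggers the equality check.
    return !isAuthoritative || metadataStoreChildren.equals(s3Children);
  }
}
{code}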

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on issue #978: HDDS-1694. TestNodeReportHandler is failing with NPE

2019-06-17 Thread GitBox
elek commented on issue #978: HDDS-1694. TestNodeReportHandler is failing with 
NPE
URL: https://github.com/apache/hadoop/pull/978#issuecomment-502737684
 
 
   I fixed the constructor of SCMNodeManager to have only the [minimal external 
dependencies](https://en.wikipedia.org/wiki/Law_of_Demeter) (to make it easier 
to unit test). For example, it now takes SCMStorageConfig as a constructor 
parameter instead of StorageContainerManager, as the StorageContainerManager 
was used only to get the SCMStorageConfig. Having the SCMStorageConfig in the 
constructor makes the dependency more visible.
   
   Strictly speaking the NPE can be fixed with half of the patch, but as it's a 
very small patch I think it's acceptable. (Let me know if you prefer to manage 
the changes in two patches.)
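   
   A sketch of the pattern with stand-in types (the real classes are 
StorageContainerManager, SCMStorageConfig and SCMNodeManager, whose signatures 
are more involved):
   
   ```java
   class ScmStorageConfig { }
   
   class Scm {
     private final ScmStorageConfig storageConfig = new ScmStorageConfig();
     ScmStorageConfig getStorageConfig() { return storageConfig; }
   }
   
   // Before: the whole SCM is needed just to reach its storage config,
   // so a unit test must build (or mock) the entire facade.
   class NodeManagerBefore {
     private final ScmStorageConfig storageConfig;
     NodeManagerBefore(Scm scm) {
       this.storageConfig = scm.getStorageConfig();
     }
   }
   
   // After: the narrow dependency is explicit in the signature and a test
   // can pass a trivial instance directly.
   class NodeManagerAfter {
     private final ScmStorageConfig storageConfig;
     NodeManagerAfter(ScmStorageConfig storageConfig) {
       this.storageConfig = storageConfig;
     }
   }
   ```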


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek opened a new pull request #978: HDDS-1694. TestNodeReportHandler is failing with NPE

2019-06-17 Thread GitBox
elek opened a new pull request #978: HDDS-1694. TestNodeReportHandler is 
failing with NPE
URL: https://github.com/apache/hadoop/pull/978
 
 
   {code:java}
   FAILURE in 
ozone-unit-076618677d39x4h9/unit/hadoop-hdds/server-scm/org.apache.hadoop.hdds.scm.node.TestNodeReportHandler.txt
   
---
   Test set: org.apache.hadoop.hdds.scm.node.TestNodeReportHandler
   
---
   Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.43 s <<< 
FAILURE! - in org.apache.hadoop.hdds.scm.node.TestNodeReportHandler
   testNodeReport(org.apache.hadoop.hdds.scm.node.TestNodeReportHandler)  Time 
elapsed: 0.288 s  <<< ERROR!
   java.lang.NullPointerException
       at 
org.apache.hadoop.hdds.scm.node.SCMNodeManager.<init>(SCMNodeManager.java:122)
       at 
org.apache.hadoop.hdds.scm.node.TestNodeReportHandler.resetEventCollector(TestNodeReportHandler.java:53)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
       at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
       at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
       at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
       at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
       at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
       at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
       at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
       at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
       at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
       at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
       at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
       at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
       at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
       at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
       at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
       at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
       at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
       at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
       at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
       at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
   
   2019-06-16 23:52:29,345 INFO  node.SCMNodeManager 
(SCMNodeManager.java:<init>(119)) - Entering startup safe mode.
   
   {code}
   
   See: https://issues.apache.org/jira/browse/HDDS-1694


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13980) S3Guard CLI: Add fsck check command

2019-06-17 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865663#comment-16865663
 ] 

Gabor Bota commented on HADOOP-13980:
-

Thanks [~ste...@apache.org], [~fabbri] for the spec draft and for the ideas. I 
started to create a draft for the specs a while ago but haven't updated it much 
lately, so I took [~ste...@apache.org]'s ideas, put them into my doc, and 
reformatted them.
Here's the doc: 
https://docs.google.com/document/d/1Gcl_dVLl0x7PCxfsFjp-ClBlbP4klo9hjocOKCGTi3s/edit?usp=sharing

All of you are welcome to comment in the doc. Note that it's still WIP and will 
be updated. The final version will be uploaded here before/during the 
implementation.

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-17 Thread GitBox
steveloughran commented on issue #951: HADOOP-15183. S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#issuecomment-502698436
 
 
   I'm going to close this and re-open a new patch with everything merged atop 
the OOB patch. It's not that they conflict functionality-wise, it's just as 
they both pass down a param to the metastore put operations, they create merge 
conflict.
   
   FWIW, I'm now unsure *why* the TTL needs to go down, rather than set during 
the init phase


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-17 Thread GitBox
steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#discussion_r294314480
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
 ##
 @@ -699,39 +769,168 @@ DirListingMetadata 
getDirListingMetadataFromDirMetaAndList(Path path,
   }
 
   /**
-   * build the list of all parent entries.
+   * Build the list of all parent entries.
+   * <p>
+   * Thread safety: none. Callers must synchronize access.
+   * <p>
+   * Callers are required to synchronize on ancestorState.
* @param pathsToCreate paths to create
+   * @param ancestorState ongoing ancestor state.
* @return the full ancestry paths
*/
-  Collection<DDBPathMetadata> completeAncestry(
-  Collection<DDBPathMetadata> pathsToCreate) {
-// Key on path to allow fast lookup
-Map<Path, DDBPathMetadata> ancestry = new HashMap<>();
-
-for (DDBPathMetadata meta : pathsToCreate) {
+  private Collection<DDBPathMetadata> completeAncestry(
+  final Collection<DDBPathMetadata> pathsToCreate,
+  final AncestorState ancestorState) throws PathIOException {
+List<DDBPathMetadata> ancestorsToAdd = new ArrayList<>(0);
+LOG.debug("Completing ancestry for {} paths", pathsToCreate.size());
+// we sort the inputs to guarantee that the topmost entries come first.
+// that way if the put request contains both parents and children
+// then the existing parents will not be re-created -they will just
+// be added to the ancestor list first.
+List<DDBPathMetadata> sortedPaths = new ArrayList<>(pathsToCreate);
+sortedPaths.sort(PathOrderComparators.TOPMOST_PM_FIRST);
+for (DDBPathMetadata meta : sortedPaths) {
   Preconditions.checkArgument(meta != null);
   Path path = meta.getFileStatus().getPath();
+  LOG.debug("Adding entry {}", path);
   if (path.isRoot()) {
 break;
   }
-  ancestry.put(path, new DDBPathMetadata(meta));
+  // add the new entry
+  DDBPathMetadata entry = new DDBPathMetadata(meta);
+  DDBPathMetadata oldEntry = ancestorState.put(path, entry);
+  if (oldEntry != null) {
+if (!oldEntry.getFileStatus().isDirectory()
+|| !entry.getFileStatus().isDirectory()) {
+  // check for and warn if the existing bulk operation overwrote it.
+  // this should never occur outside tests explicitly creating it
+  LOG.warn("Overwriting a S3Guard entry created in the operation: {}",
+  oldEntry);
+  LOG.warn("With new entry: {}", entry);
+  // restore the old state
+  ancestorState.put(path, oldEntry);
+  // then raise an exception
+  throw new PathIOException(path.toString(), E_INCONSISTENT_UPDATE);
+} else {
+  // directory is already present, so skip adding it and any parents.
+  continue;
+}
+  }
+  ancestorsToAdd.add(entry);
   Path parent = path.getParent();
-  while (!parent.isRoot() && !ancestry.containsKey(parent)) {
+  while (!parent.isRoot()) {
+if (ancestorState.findEntry(parent, true)) {
+  break;
+}
 LOG.debug("auto-create ancestor path {} for child path {}",
 parent, path);
 final S3AFileStatus status = makeDirStatus(parent, username);
-ancestry.put(parent, new DDBPathMetadata(status, Tristate.FALSE,
-false));
+DDBPathMetadata md = new DDBPathMetadata(status, Tristate.FALSE,
+false);
+ancestorState.put(parent, md);
+ancestorsToAdd.add(md);
 parent = parent.getParent();
   }
 }
-return ancestry.values();
+return ancestorsToAdd;
+  }
+
+  /**
+   * {@inheritDoc}
+   * <p>
+   * If {@code operationState} is not null, when this method returns the
+   * operation state will be updated with all new entries created.
+   * This ensures that subsequent operations with the same store will not
+   * trigger new updates.
+   * The scan on
 
 Review comment:
   cut
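   
   For context, the "topmost entries come first" ordering the new code relies 
on can be sketched as a depth comparator; this is a hypothetical stand-in for 
PathOrderComparators.TOPMOST_PM_FIRST, which actually compares the paths 
inside DDBPathMetadata entries:
   
   ```java
   import java.util.Comparator;
   
   import org.apache.hadoop.fs.Path;
   
   final class TopmostFirstSketch {
     // Shallower paths (parents) sort before deeper ones (children), so a
     // bulk put sees existing parents before any of their descendants.
     static final Comparator<Path> TOPMOST_FIRST =
         Comparator.comparingInt(TopmostFirstSketch::depth);
   
     private static int depth(Path p) {
       int d = 0;
       for (Path q = p; q != null && !q.isRoot(); q = q.getParent()) {
         d++;
       }
       return d;
     }
   }
   ```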


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-17 Thread GitBox
steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#discussion_r294314157
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
 ##
 @@ -1134,16 +1147,19 @@ public int run(String[] args, PrintStream out)
   }
   String s3Path = paths.get(0);
   CommandFormat commands = getCommandFormat();
+  URI fsURI = toUri(s3Path);
 
   // check if UNGUARDED_FLAG is passed and use NullMetadataStore in
   // config to avoid side effects like creating the table if not exists
+  Configuration conf0 = getConf();
 
 Review comment:
   done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] rlevas commented on issue #939: HADOOP-16340. ABFS driver continues to retry on IOException responsesfrom REST operations

2019-06-17 Thread GitBox
rlevas commented on issue #939: HADOOP-16340. ABFS driver continues to retry on 
IOException responsesfrom REST operations
URL: https://github.com/apache/hadoop/pull/939#issuecomment-502681880
 
 
   @DadanielZ , Thanks for the review and help to push this along. 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #977: (HDFS-14541)when evictableMmapped or evictable size is zero, do not throw NoSuchE…

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #977: (HDFS-14541)when evictableMmapped or 
evictable size is zero, do not throw NoSuchE… 
URL: https://github.com/apache/hadoop/pull/977#issuecomment-50282
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1390 | trunk passed |
   | +1 | compile | 56 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 55 | trunk passed |
   | +1 | shadedclient | 886 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 32 | trunk passed |
   | 0 | spotbugs | 165 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 161 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 53 | the patch passed |
   | +1 | compile | 48 | the patch passed |
   | +1 | javac | 48 | the patch passed |
   | +1 | checkstyle | 19 | the patch passed |
   | +1 | mvnsite | 51 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 846 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 32 | the patch passed |
   | +1 | findbugs | 167 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 63 | hadoop-hdfs-client in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 4082 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestTokenAspect |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/977 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6d73f1cadc93 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 304a47e |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/1/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #977: (HDFS-14541)when evictableMmapped or evictable size is zero, do not throw NoSuchE…

2019-06-17 Thread GitBox
hadoop-yetus commented on a change in pull request #977: (HDFS-14541)when 
evictableMmapped or evictable size is zero, do not throw NoSuchE… 
URL: https://github.com/apache/hadoop/pull/977#discussion_r294277928
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
 ##
 @@ -540,16 +538,18 @@ private void trimEvictionMaps() {
 return;
   }
   ShortCircuitReplica replica;
-  try {
-if (evictableSize == 0) {
-  replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
-  .firstKey());
-} else {
-  replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
-}
-  } catch (NoSuchElementException e) {
-break;
+  // maxTotalSize > 0, so at least one map is non-empty at this point:
+  // if evictableSize == 0 then evictableMmappedSize > 0, so
+  // evictableMmapped.firstKey() cannot throw NoSuchElementException;
+  // if evictableMmappedSize == 0 then evictableSize > 0, so
+  // evictable.firstKey() cannot throw it either.
+  if (evictableSize == 0) {
+replica = (ShortCircuitReplica) evictableMmapped
+.get(evictableMmapped.firstKey());
+  } else {
+replica = (ShortCircuitReplica) evictable.get(evictable.firstKey());
   }
+  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #974: HDFS-9913.DispCp doesn't use Trash with -delete option

2019-06-17 Thread GitBox
hadoop-yetus commented on issue #974: HDFS-9913.DispCp doesn't use Trash with 
-delete option
URL: https://github.com/apache/hadoop/pull/974#issuecomment-502661748
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1238 | trunk passed |
   | +1 | compile | 29 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 31 | trunk passed |
   | +1 | shadedclient | 774 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | trunk passed |
   | 0 | spotbugs | 46 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 42 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 27 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | -0 | checkstyle | 16 | hadoop-tools/hadoop-distcp: The patch generated 1 
new + 136 unchanged - 0 fixed = 137 total (was 136) |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 811 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 20 | the patch passed |
   | +1 | findbugs | 49 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 892 | hadoop-distcp in the patch passed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 4232 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-974/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/974 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 46a9b8f0b152 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 304a47e |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-974/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-974/2/testReport/ |
   | Max. process+thread count | 342 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-974/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek closed pull request #925: HDDS-1660 Use Picocli for Ozone Manager

2019-06-17 Thread GitBox
elek closed pull request #925: HDDS-1660 Use Picocli for Ozone Manager
URL: https://github.com/apache/hadoop/pull/925
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] leosunli opened a new pull request #977: when evictableMmapped or evictable size is zero, do not throw NoSuchE…

2019-06-17 Thread GitBox
leosunli opened a new pull request #977: when evictableMmapped or evictable 
size is zero, do not throw NoSuchE…
URL: https://github.com/apache/hadoop/pull/977
 
 
   …lementException
   
   Signed-off-by: sunlisheng 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on issue #925: HDDS-1660 Use Picocli for Ozone Manager

2019-06-17 Thread GitBox
sodonnel commented on issue #925: HDDS-1660 Use Picocli for Ozone Manager
URL: https://github.com/apache/hadoop/pull/925#issuecomment-502625620
 
 
   There are two failing tests:
   
   1) 
org.apache.hadoop.ozone.client.rpc.TestBCSID.org.apache.hadoop.ozone.client.rpc.TestBCSID
   
   This passes locally on trunk and on the branch with this change.
   
   2) org.apache.hadoop.hdds.scm.node.TestNodeReportHandler.testNodeReport
   
   This fails on trunk and on the branch. I believe the test is broken by 
HDDS-1663, as it added the line:
   
   ```
   this.clusterMap = scmManager.getClusterMap();
   ```
   
   And the test passes a null scmManager into the constructor, leading to the 
null pointer exception.
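   
   A self-contained sketch of the failure and the obvious test-side stub, using 
stand-in types (the real classes are StorageContainerManager and SCMNodeManager; 
the eventual HDDS-1694 fix narrowed the constructor instead of stubbing):
   
   ```java
   import static org.mockito.Mockito.mock;
   import static org.mockito.Mockito.when;
   
   // Stand-in for StorageContainerManager, reduced to the accessor that
   // HDDS-1663 made the constructor call.
   interface ScmFacade {
     Object getClusterMap();
   }
   
   class NodeManagerSketch {
     private final Object clusterMap;
     NodeManagerSketch(ScmFacade scm) {
       this.clusterMap = scm.getClusterMap(); // NPEs when scm is null
     }
   }
   
   class Demo {
     public static void main(String[] args) {
       ScmFacade scm = mock(ScmFacade.class);
       when(scm.getClusterMap()).thenReturn(new Object()); // non-null stub
       new NodeManagerSketch(scm); // constructs without the NPE
     }
   }
   ```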


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


