[jira] [Resolved] (HADOOP-16335) ERROR: namenode can only be executed by root while executing "hdfs namenode -format"

2019-08-14 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki resolved HADOOP-16335.
---
Resolution: Invalid

I think this is a configuration issue. If you define the env var 
HDFS_NAMENODE_USER=root in hadoop-env.sh or somewhere and run the command as 
another user, you will get the error.
{noformat}
$ HDFS_NAMENODE_USER=root bin/hadoop namenode -format
WARNING: Use of this script to execute namenode is deprecated.
WARNING: Attempting to execute replacement "hdfs namenode" instead.

ERROR: namenode can only be executed by root.
{noformat}
I'm closing this as invalid. Please reopen if this turns out to be a bug in 
the scripts.

> ERROR: namenode can only be executed by root while executing "hdfs namenode 
> -format"
> 
>
> Key: HADOOP-16335
> URL: https://issues.apache.org/jira/browse/HADOOP-16335
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lakshmi Narayanan G
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16416) mark DynamoDBMetadataStore.deleteTrackingValueMap as final

2019-08-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907832#comment-16907832
 ] 

Hadoop QA commented on HADOOP-16416:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 20 unchanged - 0 fixed = 21 total (was 20) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16416 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977669/HADOOP-16416.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 96354dcbcc03 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 167acd8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16481/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16481/testReport/ |
| Max. process+thread count | 413 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 

[GitHub] [hadoop] bshashikant closed pull request #1133: HDDS-1836. Change the default value of ratis leader election min timeout to a lower value

2019-08-14 Thread GitBox
bshashikant closed pull request #1133: HDDS-1836. Change the default value of 
ratis leader election min timeout to a lower value
URL: https://github.com/apache/hadoop/pull/1133
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16416) mark DynamoDBMetadataStore.deleteTrackingValueMap as final

2019-08-14 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907804#comment-16907804
 ] 

kevin su commented on HADOOP-16416:
---

[~gabor.bota] Thanks for the help, I just uploaded the patch.
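
For context, a minimal sketch of the change in the patch (the field type and 
initializer here are illustrative, not the actual declaration):
{code:java}
// Before: a static field with a lowerCamelCase name.
private static Map<String, String> deleteTrackingValueMap = createDeleteTrackingMap();

// After: marked final and renamed to upper case per the coding conventions.
private static final Map<String, String> DELETE_TRACKING_VALUE_MAP = createDeleteTrackingMap();
{code}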

> mark DynamoDBMetadataStore.deleteTrackingValueMap as final
> --
>
> Key: HADOOP-16416
> URL: https://issues.apache.org/jira/browse/HADOOP-16416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: kevin su
>Priority: Trivial
> Attachments: HADOOP-16416.001.patch, HADOOP-16416.002.patch, 
> HADOOP-16416.003.patch
>
>
> S3Guard's {{DynamoDBMetadataStore.deleteTrackingValueMap}} field is static 
> and can/should be marked as final; its name should also be changed to upper 
> case to match the coding conventions.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16416) mark DynamoDBMetadataStore.deleteTrackingValueMap as final

2019-08-14 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HADOOP-16416:
--
Attachment: HADOOP-16416.003.patch

> mark DynamoDBMetadataStore.deleteTrackingValueMap as final
> --
>
> Key: HADOOP-16416
> URL: https://issues.apache.org/jira/browse/HADOOP-16416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: kevin su
>Priority: Trivial
> Attachments: HADOOP-16416.001.patch, HADOOP-16416.002.patch, 
> HADOOP-16416.003.patch
>
>
> S3Guard's {{DynamoDBMetadataStore.deleteTrackingValueMap}} field is static 
> and can/should be marked as final; its name should also be changed to upper 
> case to match the coding conventions.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] pingsutw opened a new pull request #1299: HDDS-1959. Decrement purge interval for Ratis logs

2019-08-14 Thread GitBox
pingsutw opened a new pull request #1299: HDDS-1959. Decrement purge interval 
for Ratis logs
URL: https://github.com/apache/hadoop/pull/1299
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1298: HADOOP-16061. Upgrade Yetus to 0.10.0

2019-08-14 Thread GitBox
hadoop-yetus commented on issue #1298: HADOOP-16061. Upgrade Yetus to 0.10.0
URL: https://github.com/apache/hadoop/pull/1298#issuecomment-521495955
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 67 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | shadedclient | 782 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 720 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 1708 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1298/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1298 |
   | Optional Tests | dupname asflicense shellcheck shelldocs |
   | uname | Linux 8fdd68837b76 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 167acd8 |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1298/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16061) Update Apache Yetus to 0.10.0

2019-08-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16061:
---
Summary: Update Apache Yetus to 0.10.0  (was: Update Apache Yetus to 
0.9.0)
Description: Yetus 0.10.0 is out. Let's upgrade.  (was: Yetus 0.9.0 is out. 
Let's upgrade.)

> Update Apache Yetus to 0.10.0
> -
>
> Key: HADOOP-16061
> URL: https://issues.apache.org/jira/browse/HADOOP-16061
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Yetus 0.10.0 is out. Let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka opened a new pull request #1298: HADOOP-16061. Upgrade Yetus to 0.10.0

2019-08-14 Thread GitBox
aajisaka opened a new pull request #1298: HADOOP-16061. Upgrade Yetus to 0.10.0
URL: https://github.com/apache/hadoop/pull/1298
 
 
   JIRA: https://issues.apache.org/jira/browse/HADOOP-16061


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16061) Update Apache Yetus to 0.9.0

2019-08-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-16061:
--

Assignee: Akira Ajisaka

> Update Apache Yetus to 0.9.0
> 
>
> Key: HADOOP-16061
> URL: https://issues.apache.org/jira/browse/HADOOP-16061
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Yetus 0.9.0 is out. Let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on issue #1297: HDFS-14729. Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-14 Thread GitBox
vivekratnavel commented on issue #1297: HDFS-14729. Upgrade Bootstrap and 
jQuery versions used in HDFS UIs
URL: https://github.com/apache/hadoop/pull/1297#issuecomment-521461675
 
 
   @jnp @anuengineer @sunilgovind @vinoduec Please review when you find time


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel opened a new pull request #1297: HDFS-14729. Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-14 Thread GitBox
vivekratnavel opened a new pull request #1297: HDFS-14729. Upgrade Bootstrap 
and jQuery versions used in HDFS UIs
URL: https://github.com/apache/hadoop/pull/1297
 
 
   This patch updates the Bootstrap and jQuery versions used by the various HDFS 
UIs, such as the NameNode web UI, DataNode web UI, etc.
   
   Bootstrap 3.3.7 -> 3.4.1
   jQuery 3.3.1 -> 3.4.1
   
   Testing done:
   I tested the patch locally by bringing up HDFS in [pseudo-distributed 
mode](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html#Standalone_Operation)
 and manually browsing through the UI components on localhost, making sure that 
no compatibility issues or errors were thrown in the browser console. I also 
did not see any major change in the UI presentation.
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16516) Upgrade Wildfly-11.0.0.Beta1 to a stable version with no CVEs

2019-08-14 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HADOOP-16516:
---

 Summary: Upgrade Wildfly-11.0.0.Beta1 to a stable version with no 
CVEs
 Key: HADOOP-16516
 URL: https://issues.apache.org/jira/browse/HADOOP-16516
 Project: Hadoop Common
  Issue Type: Task
  Components: tools
Reporter: Vivek Ratnavel Subramanian


The transitive dependency Wildfly-11.0.0.Beta1, brought in by 
azure-data-lake-store-sdk 2.3.3, has 3 known medium-severity CVEs and needs to 
be upgraded.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1296: HDDS-1969. Implement OM GetDelegationToken request to use Cache and DoubleBuffer.

2019-08-14 Thread GitBox
hadoop-yetus commented on issue #1296: HDDS-1969. Implement OM 
GetDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1296#issuecomment-521437000
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 602 | trunk passed |
   | +1 | compile | 365 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 836 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 420 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 616 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 566 | the patch passed |
   | +1 | compile | 367 | the patch passed |
   | +1 | cc | 367 | the patch passed |
   | +1 | javac | 367 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 665 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 638 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 284 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1653 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 7359 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1296/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1296 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux b85568b6c30b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 167acd8 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1296/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1296/1/testReport/ |
   | Max. process+thread count | 5403 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1296/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-14 Thread GitBox
hadoop-yetus commented on issue #1028: HDFS-14617 - Improve fsimage load time 
by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#issuecomment-521401385
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1066 | trunk passed |
   | +1 | compile | 54 | trunk passed |
   | +1 | checkstyle | 48 | trunk passed |
   | +1 | mvnsite | 62 | trunk passed |
   | +1 | shadedclient | 714 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 47 | trunk passed |
   | 0 | spotbugs | 157 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 156 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 55 | the patch passed |
   | +1 | compile | 51 | the patch passed |
   | +1 | javac | 51 | the patch passed |
   | -0 | checkstyle | 48 | hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 605 unchanged - 3 fixed = 606 total (was 608) |
   | +1 | mvnsite | 59 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 685 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 47 | the patch passed |
   | +1 | findbugs | 164 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 6384 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 9775 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
   |   | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
   |   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.server.mover.TestStorageMover |
   |   | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1028 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux bd793cb1fcaf 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c720441 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/8/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/8/testReport/ |
   | Max. process+thread count | 3969 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hadoop] bharatviswa504 opened a new pull request #1296: HDDS-1969. Implement OM GetDelegationToken request to use Cache and DoubleBuffer.

2019-08-14 Thread GitBox
bharatviswa504 opened a new pull request #1296: HDDS-1969. Implement OM 
GetDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1296
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16391) Duplicate values in rpcDetailedMetrics

2019-08-14 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907568#comment-16907568
 ] 

Erik Krogen commented on HADOOP-16391:
--

Hi [~BilwaST], thanks for continuing to work on this :) The checkstyle and 
whitespace issues seem legitimate.

Besides those, I had one thought regarding the changes to {{MutableRate}}. The 
new constructor basically makes it no longer a {{MutableRate}}, just a 
{{MutableStat}} – the only difference between the two is that {{MutableRate}} 
enforces the "Ops"/"Time" names and this constructor removes that convention. 
Instead, would it make more sense to leave {{MutableRate}} unchanged and then 
do this:
{code:java|title=MutableRatesWithAggregation}
  metric = new MutableRate(name + typePrefix, name + typePrefix, false);
{code}
The output will look basically the same, with names like 
"GetLongDeferredNumOps" instead of "GetLongNumDeferredOps", and it seems like a 
change more in line with the intent of {{MutableRate}}. What do you think?

Also, the new test looks great, but can we also add an assertion that there is 
a call for the deferred method name as well? You also have a typo: "Deferrred" 
(three r's)

> Duplicate values in rpcDetailedMetrics
> --
>
> Key: HADOOP-16391
> URL: https://issues.apache.org/jira/browse/HADOOP-16391
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: HADOOP-16391-001.patch, HADOOP-16391-002.patch, 
> image-2019-06-25-20-30-15-395.png, screenshot-1.png, screenshot-2.png
>
>
> In RpcDetailedMetrics, init is called twice: once for deferredRpcRates and 
> once for the rates metrics, which causes duplicate values in the RM and NM 
> metrics.
>  !image-2019-06-25-20-30-15-395.png! 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #960: HDDS-1679. debug patch

2019-08-14 Thread GitBox
anuengineer closed pull request #960: HDDS-1679. debug patch
URL: https://github.com/apache/hadoop/pull/960
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #960: HDDS-1679. debug patch

2019-08-14 Thread GitBox
anuengineer commented on issue #960: HDDS-1679. debug patch
URL: https://github.com/apache/hadoop/pull/960#issuecomment-521355573
 
 
   @mukul1987  I'm presuming that this is not a valid patch anymore. I am going 
to close this pull request under that assumption. If needed, please push 
another patch or send another pull request. This has been marked as abandoned 
in the JIRA, hence I am doing the same here to stop it from showing up in the 
review queue. @arp7 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on a change in pull request #1270: HDFS-14718. HttpFS: Sort response by key names as WebHDFS does

2019-08-14 Thread GitBox
smengcl commented on a change in pull request #1270: HDFS-14718. HttpFS: Sort 
response by key names as WebHDFS does
URL: https://github.com/apache/hadoop/pull/1270#discussion_r314000672
 
 

 ##
 File path: hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
 ##
 @@ -280,14 +279,14 @@ private static Map contentSummaryToJSON(ContentSummary contentSummary) {
    */
   @SuppressWarnings({"unchecked"})
   private static Map quotaUsageToJSON(QuotaUsage quotaUsage) {
-    Map response = new LinkedHashMap();
+    Map response = new TreeMap();
     Map quotaUsageMap = quotaUsageToMap(quotaUsage);
     response.put(HttpFSFileSystem.QUOTA_USAGE_JSON, quotaUsageMap);
     return response;
   }
 
   private static Map quotaUsageToMap(QuotaUsage quotaUsage) {
-    Map result = new LinkedHashMap<>();
+    Map result = new TreeMap<>();
 
 Review comment:
   You are right. People won't care in production.
   I filed this JIRA just because I was comparing the raw JSON, and the keys 
weren't in the same order.
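   
   For anyone comparing outputs, here is a minimal self-contained sketch of the 
ordering difference involved (plain JDK; the keys are illustrative, not the 
exact HttpFS ones):
   
   ```
   import java.util.LinkedHashMap;
   import java.util.Map;
   import java.util.TreeMap;
   
   public class OrderDemo {
     public static void main(String[] args) {
       // LinkedHashMap preserves insertion order, so the JSON key order
       // depends on the order in which the code happens to put() entries.
       Map<String, Object> linked = new LinkedHashMap<>();
       linked.put("spaceQuota", -1L);
       linked.put("fileAndDirectoryCount", 5L);
       System.out.println(linked); // {spaceQuota=-1, fileAndDirectoryCount=5}
   
       // TreeMap sorts by key, which is why switching to it makes HttpFS
       // responses come out in the same (alphabetical) order as WebHDFS.
       Map<String, Object> sorted = new TreeMap<>(linked);
       System.out.println(sorted); // {fileAndDirectoryCount=5, spaceQuota=-1}
     }
   }
   ```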


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #1212: YARN-9683 Remove reapDockerContainerNoPid left behind by YARN-9074

2019-08-14 Thread GitBox
jojochuang merged pull request #1212: YARN-9683 Remove reapDockerContainerNoPid 
left behind by YARN-9074
URL: https://github.com/apache/hadoop/pull/1212
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on issue #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-14 Thread GitBox
jojochuang commented on issue #1028: HDFS-14617 - Improve fsimage load time by 
writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#issuecomment-521342862
 
 
   manually trigger a precommit rebuild


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai closed pull request #1292: HDDS-1964. TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread GitBox
adoroszlai closed pull request #1292: HDDS-1964. TestOzoneClientProducer fails 
with ConnectException
URL: https://github.com/apache/hadoop/pull/1292
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-14 Thread GitBox
sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r313993233
 
 

 ##
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
 ##
 @@ -255,14 +345,28 @@ public int compare(FileSummary.Section s1, FileSummary.Section s2) {
         case INODE: {
           currentStep = new Step(StepType.INODES);
           prog.beginStep(Phase.LOADING_FSIMAGE, currentStep);
-          inodeLoader.loadINodeSection(in, prog, currentStep);
+          stageSubSections = getSubSectionsOfName(
+              subSections, SectionName.INODE_SUB);
+          if (loadInParallel && (stageSubSections.size() > 0)) {
+            inodeLoader.loadINodeSectionInParallel(executorService,
+                stageSubSections, summary.getCodec(), prog, currentStep);
+          } else {
+            inodeLoader.loadINodeSection(in, prog, currentStep);
+          }
         }
           break;
         case INODE_REFERENCE:
           snapshotLoader.loadINodeReferenceSection(in);
           break;
         case INODE_DIR:
-          inodeLoader.loadINodeDirectorySection(in);
+          stageSubSections = getSubSectionsOfName(
+              subSections, SectionName.INODE_DIR_SUB);
+          if (loadInParallel && stageSubSections.size() > 0) {
 
 Review comment:
   I have a unit test which ensures a parallel image can be created and loaded. 
It would be fairly easy to create another test which generates a non-parallel 
image, validates it is non-parallel and then attempt to load it with parallel 
enabled. Do you think that would cover what we need?
   I also have a test that enables parallel and compression and then it 
verifies the parallel part is not used as compression disables it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1292: HDDS-1964. TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread GitBox
adoroszlai commented on issue #1292: HDDS-1964. TestOzoneClientProducer fails 
with ConnectException
URL: https://github.com/apache/hadoop/pull/1292#issuecomment-521340758
 
 
   Thanks @anuengineer (82420851645f1644f597e11e14a1d70bb8a7cc23) and 
@nandakumar131 (b1e4eeef59632ca127f6dded46bde3af2ee8558b) for committing this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-14 Thread GitBox
sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r313991605
 
 

 ##
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -217,33 +273,151 @@ void loadINodeDirectorySection(InputStream in) throws IOException {
         INodeDirectory p = dir.getInode(e.getParent()).asDirectory();
         for (long id : e.getChildrenList()) {
           INode child = dir.getInode(id);
-          addToParent(p, child);
+          if (addToParent(p, child)) {
+            if (child.isFile()) {
+              inodeList.add(child);
+            }
+            if (inodeList.size() >= 1000) {
+              addToCacheAndBlockMap(inodeList);
+              inodeList.clear();
+            }
+          }
+
         }
+
         for (int refId : e.getRefChildrenList()) {
           INodeReference ref = refList.get(refId);
-          addToParent(p, ref);
+          if (addToParent(p, ref)) {
+            if (ref.isFile()) {
+              inodeList.add(ref);
+            }
+            if (inodeList.size() >= 1000) {
+              addToCacheAndBlockMap(inodeList);
+              inodeList.clear();
+            }
+          }
         }
       }
+      addToCacheAndBlockMap(inodeList);
+    }
+
+    private void addToCacheAndBlockMap(ArrayList<INode> inodeList) {
+      try {
+        cacheNameMapLock.lock();
+        for (INode i : inodeList) {
+          dir.cacheName(i);
+        }
+      } finally {
+        cacheNameMapLock.unlock();
+      }
+
+      try {
+        blockMapLock.lock();
+        for (INode i : inodeList) {
+          updateBlocksMap(i.asFile(), fsn.getBlockManager());
+        }
+      } finally {
+        blockMapLock.unlock();
+      }
     }
 
     void loadINodeSection(InputStream in, StartupProgress prog,
         Step currentStep) throws IOException {
-      INodeSection s = INodeSection.parseDelimitedFrom(in);
-      fsn.dir.resetLastInodeId(s.getLastInodeId());
-      long numInodes = s.getNumInodes();
-      LOG.info("Loading " + numInodes + " INodes.");
-      prog.setTotal(Phase.LOADING_FSIMAGE, currentStep, numInodes);
+      loadINodeSectionHeader(in, prog, currentStep);
       Counter counter = prog.getCounter(Phase.LOADING_FSIMAGE, currentStep);
-      for (int i = 0; i < numInodes; ++i) {
+      int totalLoaded = loadINodesInSection(in, counter);
+      LOG.info("Successfully loaded {} inodes", totalLoaded);
+    }
+
+    private int loadINodesInSection(InputStream in, Counter counter)
+        throws IOException {
+      // As the input stream is a LimitInputStream, the reading will stop when
+      // EOF is encountered at the end of the stream.
+      int cntr = 0;
+      while (true) {
         INodeSection.INode p = INodeSection.INode.parseDelimitedFrom(in);
+        if (p == null) {
+          break;
+        }
         if (p.getId() == INodeId.ROOT_INODE_ID) {
-          loadRootINode(p);
+          synchronized(this) {
 
 Review comment:
   @Hexiaoqiao Are you happy if we leave the synchronized blocks in place for 
the single-threaded case? I don't believe it will cause any performance issues, 
and it makes the code much cleaner than having two different paths for parallel 
and serial loading.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #1270: HDFS-14718. HttpFS: Sort response by key names as WebHDFS does

2019-08-14 Thread GitBox
jojochuang commented on a change in pull request #1270: HDFS-14718. HttpFS: 
Sort response by key names as WebHDFS does
URL: https://github.com/apache/hadoop/pull/1270#discussion_r313986113
 
 

 ##
 File path: hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
 ##
 @@ -280,14 +279,14 @@ private static Map contentSummaryToJSON(ContentSummary contentSummary) {
    */
   @SuppressWarnings({"unchecked"})
   private static Map quotaUsageToJSON(QuotaUsage quotaUsage) {
-    Map response = new LinkedHashMap();
+    Map response = new TreeMap();
     Map quotaUsageMap = quotaUsageToMap(quotaUsage);
     response.put(HttpFSFileSystem.QUOTA_USAGE_JSON, quotaUsageMap);
     return response;
   }
 
   private static Map quotaUsageToMap(QuotaUsage quotaUsage) {
-    Map result = new LinkedHashMap<>();
+    Map result = new TreeMap<>();
 
 Review comment:
   I honestly don't think people care about the order.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #1252: HDFS-14678. Allow triggerBlockReport to a specific namenode.

2019-08-14 Thread GitBox
jojochuang commented on a change in pull request #1252: HDFS-14678. Allow 
triggerBlockReport to a specific namenode.
URL: https://github.com/apache/hadoop/pull/1252#discussion_r313981645
 
 

 ##
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 ##
 @@ -3311,10 +3311,13 @@ public ReconfigurationTaskStatus getReconfigurationStatus() throws IOException {
   public void triggerBlockReport(BlockReportOptions options)
       throws IOException {
     checkSuperuserPrivilege();
+    InetSocketAddress namenodeAddr = options.getNamenodeAddr();
     for (BPOfferService bpos : blockPoolManager.getAllNamenodeThreads()) {
       if (bpos != null) {
         for (BPServiceActor actor : bpos.getBPServiceActors()) {
-          actor.triggerBlockReport(options);
+          if (namenodeAddr == null || namenodeAddr.equals(actor.nnAddr)) {
 
 Review comment:
   Can we make this condition more readable? 
   For example, create a boolean shouldTriggerBlockReportForAllNameNodes = 
(namenodeAddr == null), as in the sketch below.
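   
   Just a sketch of what I mean (illustrative, not the actual patch):
   
   ```
   boolean shouldTriggerBlockReportForAllNameNodes = (namenodeAddr == null);
   for (BPServiceActor actor : bpos.getBPServiceActors()) {
     // Trigger for every NameNode, or only for the one that was requested.
     if (shouldTriggerBlockReportForAllNameNodes
         || namenodeAddr.equals(actor.nnAddr)) {
       actor.triggerBlockReport(options);
     }
   }
   ```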


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1295: HDDS-1966. Wrong expected key ACL in acceptance test

2019-08-14 Thread GitBox
adoroszlai commented on issue #1295: HDDS-1966. Wrong expected key ACL in 
acceptance test
URL: https://github.com/apache/hadoop/pull/1295#issuecomment-521326052
 
 
   Thanks @anuengineer and @nandakumar131 for committing it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1204: HDDS-1768. Audit xxxAcl methods in OzoneManager

2019-08-14 Thread GitBox
bharatviswa504 commented on a change in pull request #1204: HDDS-1768. Audit 
xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313978226
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2999,23 +3016,36 @@ public OmKeyInfo lookupFile(OmKeyArgs args) throws IOException {
    */
   @Override
   public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
-    if(isAclEnabled) {
-      checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
-          obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
-    }
-    // TODO: Audit ACL operation.
-    switch (obj.getResourceType()) {
-    case VOLUME:
-      return volumeManager.addAcl(obj, acl);
-    case BUCKET:
-      return bucketManager.addAcl(obj, acl);
-    case KEY:
-      return keyManager.addAcl(obj, acl);
-    case PREFIX:
-      return prefixManager.addAcl(obj, acl);
-    default:
-      throw new OMException("Unexpected resource type: " +
-          obj.getResourceType(), INVALID_REQUEST);
+    boolean auditSuccess = true;
+
+    try{
+      if(isAclEnabled) {
+        checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
+            obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
+      }
+      switch (obj.getResourceType()) {
+      case VOLUME:
+        return volumeManager.addAcl(obj, acl);
+      case BUCKET:
+        return bucketManager.addAcl(obj, acl);
+      case KEY:
+        return keyManager.addAcl(obj, acl);
+      case PREFIX:
+        return prefixManager.addAcl(obj, acl);
+      default:
+        throw new OMException("Unexpected resource type: " +
+            obj.getResourceType(), INVALID_REQUEST);
+      }
+    } catch(Exception ex) {
+      auditSuccess = false;
+      auditAcl(obj, Arrays.asList(acl), OMAction.ADD_ACL,
 
 Review comment:
   My comment is to only modify auditAcl, as below.
   
   ```
   private void auditAcl(OzoneObj ozoneObj, List<OzoneAcl> ozoneAcl,
       OMAction omAction, Exception ex) {
     Map<String, String> auditMap = ozoneObj.toAuditMap();
     if (ozoneAcl != null) {
       auditMap.put(OzoneConsts.ACL, ozoneAcl.toString());
     }
   
     if (ex == null) {
       AUDIT.logWriteSuccess(
           buildAuditMessageForSuccess(omAction, auditMap));
     } else {
       AUDIT.logWriteFailure(
           buildAuditMessageForFailure(omAction, auditMap, ex));
     }
   }
   ```
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #1295: HDDS-1966. Wrong expected key ACL in acceptance test

2019-08-14 Thread GitBox
anuengineer closed pull request #1295: HDDS-1966. Wrong expected key ACL in 
acceptance test
URL: https://github.com/apache/hadoop/pull/1295
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1292: HDDS-1964. TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread GitBox
adoroszlai commented on issue #1292: HDDS-1964. TestOzoneClientProducer fails 
with ConnectException
URL: https://github.com/apache/hadoop/pull/1292#issuecomment-521323896
 
 
   @smengcl @anuengineer please review
   
   Here are the fixed unit tests:
   
https://ci.anzix.net/job/ozone/17670/testReport/org.apache.hadoop.ozone.s3/TestOzoneClientProducer/
   
   Failed acceptance test is being fixed in #1295.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16505) Add ability to register custom signer with AWS SignerFactory

2019-08-14 Thread Siddharth Seth (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907417#comment-16907417
 ] 

Siddharth Seth commented on HADOOP-16505:
-

[~viczsaurav] - any thoughts on how this compares to 
https://issues.apache.org/jira/browse/HADOOP-16445?

That sets up a new config to register "signerName:signerClass" pairs, instead 
of re-using the current config to allow class names.
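
For reference, a rough sketch of the SDK call both approaches ultimately make 
(AWS SDK for Java v1; {{MyCustomSigner}} is a placeholder class implementing 
{{com.amazonaws.auth.Signer}}):
{code:java}
import com.amazonaws.auth.SignerFactory;

// Register the custom signer under a name known to the factory...
SignerFactory.registerSigner("MyCustomSigner", MyCustomSigner.class);
// ...after which fs.s3a.signing-algorithm=MyCustomSigner can resolve to it.
{code}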

> Add ability to register custom signer with AWS SignerFactory
> 
>
> Key: HADOOP-16505
> URL: https://issues.apache.org/jira/browse/HADOOP-16505
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, hadoop-aws
>Affects Versions: 3.3.0
>Reporter: Saurav Verma
>Assignee: Saurav Verma
>Priority: Major
> Attachments: HADOOP-16505.patch, hadoop-16505-1.patch
>
>
> Currently, the AWS SignerFactory restricts the class of Signer algorithms 
> that can be used. 
> We require the ability to register a custom Signer. The SignerFactory supports 
> this functionality through its {{registerSigner}} method. 
> By providing a fully qualified classname to the existing parameter 
> {{fs.s3a.signing-algorithm}}, the custom signer can be registered.






[jira] [Commented] (HADOOP-16505) Add ability to register custom signer with AWS SignerFactory

2019-08-14 Thread Saurav Verma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907379#comment-16907379
 ] 

Saurav Verma commented on HADOOP-16505:
---

Thanks [~jojochuang] and [~gabor.bota] for checking it.
https://github.com/apache/hadoop/pull/1280 is indeed the PR for the same issue. 
I have written the unit test and was able to test it for the eu-central-1 
region; please check the PR.
I am just running {{dev-support/bin/test-patch}} locally once.

> Add ability to register custom signer with AWS SignerFactory
> 
>
> Key: HADOOP-16505
> URL: https://issues.apache.org/jira/browse/HADOOP-16505
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, hadoop-aws
>Affects Versions: 3.3.0
>Reporter: Saurav Verma
>Assignee: Saurav Verma
>Priority: Major
> Attachments: HADOOP-16505.patch, hadoop-16505-1.patch
>
>
> Currently, the AWS SignerFactory restricts the class of Signer algorithms 
> that can be used. 
> We require an ability to register a custom Signer. The SignerFactory supports 
> this functionality through its {{registerSigner}} method. 
> By providing a fully qualified classname to the existing parameter 
> {{fs.s3a.signing-algorithm}}, the custom signer can be registered.






[GitHub] [hadoop] hadoop-yetus commented on issue #1292: HDDS-1964. TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread GitBox
hadoop-yetus commented on issue #1292: HDDS-1964. TestOzoneClientProducer fails 
with ConnectException
URL: https://github.com/apache/hadoop/pull/1292#issuecomment-521304482
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 595 | trunk passed |
   | +1 | compile | 368 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 935 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 434 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 633 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 559 | the patch passed |
   | +1 | compile | 375 | the patch passed |
   | +1 | javac | 375 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | the patch passed |
   | +1 | findbugs | 726 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 367 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2560 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 8616 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1292 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 47d0c0356561 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 83e452e |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/2/testReport/ |
   | Max. process+thread count | 4873 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] szilard-nemeth merged pull request #1261: YARN-9676. Add DEBUG and TRACE level messages to AppLogAggregatorImpl…

2019-08-14 Thread GitBox
szilard-nemeth merged pull request #1261: YARN-9676. Add DEBUG and TRACE level 
messages to AppLogAggregatorImpl…
URL: https://github.com/apache/hadoop/pull/1261
 
 
   





[GitHub] [hadoop] szilard-nemeth commented on issue #1261: YARN-9676. Add DEBUG and TRACE level messages to AppLogAggregatorImpl…

2019-08-14 Thread GitBox
szilard-nemeth commented on issue #1261: YARN-9676. Add DEBUG and TRACE level 
messages to AppLogAggregatorImpl…
URL: https://github.com/apache/hadoop/pull/1261#issuecomment-521298262
 
 
   Hi @adamantal !
   Thanks for this PR!
   Looks good to me, +1!
   Merging it in.
   Will try to backport to branch-3.2 and branch-3.1, see updates in jira.
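
As a rough illustration of the kind of change merged here (class name and 
messages are placeholders, not the actual YARN-9676 patch): SLF4J 
parameterized logging at DEBUG and TRACE, guarded only where building the 
argument is itself expensive.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class LogLevelDemo {
  private static final Logger LOG = LoggerFactory.getLogger(LogLevelDemo.class);

  void aggregate(String appId, int fileCount) {
    // Parameterized messages need no isDebugEnabled() guard: the string is
    // only rendered if DEBUG is enabled.
    LOG.debug("Aggregating logs for {} ({} files)", appId, fileCount);

    // Guard when computing the argument itself is costly.
    if (LOG.isTraceEnabled()) {
      LOG.trace("Detailed aggregator state: {}", expensiveStateDump());
    }
  }

  private String expensiveStateDump() {
    return "state"; // placeholder for an expensive computation
  }
}
```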





[GitHub] [hadoop] sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-14 Thread GitBox
sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r313941659
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -217,33 +273,151 @@ void loadINodeDirectorySection(InputStream in) throws 
IOException {
 INodeDirectory p = dir.getInode(e.getParent()).asDirectory();
 for (long id : e.getChildrenList()) {
   INode child = dir.getInode(id);
-  addToParent(p, child);
+  if (addToParent(p, child)) {
+if (child.isFile()) {
+  inodeList.add(child);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
+
 }
+
 for (int refId : e.getRefChildrenList()) {
   INodeReference ref = refList.get(refId);
-  addToParent(p, ref);
+  if (addToParent(p, ref)) {
+if (ref.isFile()) {
+  inodeList.add(ref);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
 }
   }
+  addToCacheAndBlockMap(inodeList);
+}
+
+private void addToCacheAndBlockMap(ArrayList<INode> inodeList) {
+  try {
+cacheNameMapLock.lock();
+for (INode i : inodeList) {
+  dir.cacheName(i);
+}
+  } finally {
+cacheNameMapLock.unlock();
+  }
+
+  try {
+blockMapLock.lock();
+for (INode i : inodeList) {
+  updateBlocksMap(i.asFile(), fsn.getBlockManager());
+}
+  } finally {
+blockMapLock.unlock();
+  }
 }
 
 void loadINodeSection(InputStream in, StartupProgress prog,
 Step currentStep) throws IOException {
-  INodeSection s = INodeSection.parseDelimitedFrom(in);
-  fsn.dir.resetLastInodeId(s.getLastInodeId());
-  long numInodes = s.getNumInodes();
-  LOG.info("Loading " + numInodes + " INodes.");
-  prog.setTotal(Phase.LOADING_FSIMAGE, currentStep, numInodes);
+  loadINodeSectionHeader(in, prog, currentStep);
   Counter counter = prog.getCounter(Phase.LOADING_FSIMAGE, currentStep);
-  for (int i = 0; i < numInodes; ++i) {
+  int totalLoaded = loadINodesInSection(in, counter);
+  LOG.info("Successfully loaded {} inodes", totalLoaded);
+}
+
+private int loadINodesInSection(InputStream in, Counter counter)
+throws IOException {
+  // As the input stream is a LimitInputStream, the reading will stop when
+  // EOF is encountered at the end of the stream.
+  int cntr = 0;
+  while (true) {
 INodeSection.INode p = INodeSection.INode.parseDelimitedFrom(in);
+if (p == null) {
+  break;
+}
 if (p.getId() == INodeId.ROOT_INODE_ID) {
-  loadRootINode(p);
+  synchronized(this) {
+loadRootINode(p);
+  }
 } else {
   INode n = loadINode(p);
-  dir.addToInodeMap(n);
+  synchronized(this) {
+dir.addToInodeMap(n);
+  }
+}
+cntr ++;
+if (counter != null) {
+  counter.increment();
 }
-counter.increment();
   }
+  return cntr;
+}
+
+
+private void loadINodeSectionHeader(InputStream in, StartupProgress prog,
+Step currentStep) throws IOException {
+  INodeSection s = INodeSection.parseDelimitedFrom(in);
+  fsn.dir.resetLastInodeId(s.getLastInodeId());
+  long numInodes = s.getNumInodes();
+  LOG.info("Loading " + numInodes + " INodes.");
+  prog.setTotal(Phase.LOADING_FSIMAGE, currentStep, numInodes);
+}
+
+void loadINodeSectionInParallel(ExecutorService service,
+ArrayList<FileSummary.Section> sections,
+String compressionCodec, StartupProgress prog,
+Step currentStep) throws IOException {
+  LOG.info("Loading the INode section in parallel with {} sub-sections",
+  sections.size());
+  CountDownLatch latch = new CountDownLatch(sections.size());
+  AtomicInteger totalLoaded = new AtomicInteger(0);
+  final CopyOnWriteArrayList<IOException> exceptions =
+  new CopyOnWriteArrayList<>();
+
+  for (int i=0; i < sections.size(); i++) {
+FileSummary.Section s = sections.get(i);
+InputStream ins = parent.getInputStreamForSection(s, compressionCodec);
+if (i == 0) {
+  // The first inode section has a header which must be processed first
+  loadINodeSectionHeader(ins, prog, currentStep);
+}
+
+service.submit(new Runnable() {
+   public void run() {
+try {
+   totalLoaded.addAndGet(loadINodesInSection(ins, null));
+   
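
The hunk is truncated here by the digest. The idiom it is built around 
(submit one task per sub-section, collect worker exceptions, count down a 
latch) can be sketched self-contained as follows; the names and workload are 
illustrative, not the actual HDFS code.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public final class ParallelSectionLoadDemo {
  public static void main(String[] args) throws Exception {
    List<String> sections = Arrays.asList("sub-section-1", "sub-section-2");
    ExecutorService service = Executors.newFixedThreadPool(2);
    CountDownLatch latch = new CountDownLatch(sections.size());
    AtomicInteger totalLoaded = new AtomicInteger(0);
    CopyOnWriteArrayList<IOException> exceptions = new CopyOnWriteArrayList<>();

    for (String s : sections) {
      service.submit(() -> {
        try {
          totalLoaded.addAndGet(s.length()); // stand-in for loading one sub-section
        } catch (RuntimeException e) {
          exceptions.add(new IOException(e)); // record failures, do not swallow them
        } finally {
          latch.countDown(); // always count down so the waiter cannot hang
        }
      });
    }

    latch.await();            // block until every sub-section is done
    service.shutdown();
    if (!exceptions.isEmpty()) {
      throw exceptions.get(0); // surface the first recorded failure
    }
    System.out.println("Loaded " + totalLoaded.get() + " units");
  }
}
```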

[GitHub] [hadoop] hadoop-yetus commented on issue #1295: HDDS-1966. Wrong expected key ACL in acceptance test

2019-08-14 Thread GitBox
hadoop-yetus commented on issue #1295: HDDS-1966. Wrong expected key ACL in 
acceptance test
URL: https://github.com/apache/hadoop/pull/1295#issuecomment-521289037
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 69 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 605 | trunk passed |
   | +1 | compile | 370 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1836 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 553 | the patch passed |
   | +1 | compile | 374 | the patch passed |
   | +1 | javac | 374 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 362 | hadoop-hdds in the patch passed. |
   | -1 | unit | 656 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 5172 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.s3.TestOzoneClientProducer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1295/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1295 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient |
   | uname | Linux f6466b75cfe5 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 83e452e |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1295/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1295/1/testReport/ |
   | Max. process+thread count | 1263 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1295/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] nandakumar131 closed pull request #1281: HDDS-1955. TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of assertion error. Contributed by Mukul Kumar Singh.

2019-08-14 Thread GitBox
nandakumar131 closed pull request #1281: HDDS-1955. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
assertion error. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1281
 
 
   





[jira] [Commented] (HADOOP-16505) Add ability to register custom signer with AWS SignerFactory

2019-08-14 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907281#comment-16907281
 ] 

Gabor Bota commented on HADOOP-16505:
-

Thanks for working on this [~viczsaurav], and thanks [~jojochuang] for 
notifying us!

I think the PR must be the current version, because it includes at least one 
test for this.

[~viczsaurav], could you include an integration test for this change? Also, 
please run all the integration tests against an AWS endpoint with at least 
these parameters: {{mvn clean verify -Dparallel-tests -DtestsThreadCount=8 
-Ds3guard -Ddynamo}}, and tell us if the run was successful. It would be nice 
to show that the tests won't fail with the signer changed.


> Add ability to register custom signer with AWS SignerFactory
> 
>
> Key: HADOOP-16505
> URL: https://issues.apache.org/jira/browse/HADOOP-16505
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, hadoop-aws
>Affects Versions: 3.3.0
>Reporter: Saurav Verma
>Assignee: Saurav Verma
>Priority: Major
> Attachments: HADOOP-16505.patch, hadoop-16505-1.patch
>
>
> Currently, the AWS SignerFactory restricts the class of Signer algorithms 
> that can be used. 
> We require an ability to register a custom Signer. The SignerFactory supports 
> this functionality through its {{registerSigner}} method. 
> By providing a fully qualified classname to the existing parameter 
> {{fs.s3a.signing-algorithm}}, the custom signer can be registered.






[GitHub] [hadoop] adoroszlai commented on issue #1295: HDDS-1966. Wrong expected key ACL in acceptance test

2019-08-14 Thread GitBox
adoroszlai commented on issue #1295: HDDS-1966. Wrong expected key ACL in 
acceptance test
URL: https://github.com/apache/hadoop/pull/1295#issuecomment-521252124
 
 
   /label ozone





[GitHub] [hadoop] adoroszlai opened a new pull request #1295: HDDS-1966. Wrong expected key ACL in acceptance test

2019-08-14 Thread GitBox
adoroszlai opened a new pull request #1295: HDDS-1966. Wrong expected key ACL 
in acceptance test
URL: https://github.com/apache/hadoop/pull/1295
 
 
   ## What changes were proposed in this pull request?
   
   Acceptance test [fails at ACL 
checks](https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2):
   
   ```
   [ {
 "type" : "USER",
 "name" : "testuser/s...@example.com",
 "aclScope" : "ACCESS",
 "aclList" : [ "ALL" ]
   }, {
 "type" : "GROUP",
 "name" : "root",
 "aclScope" : "ACCESS",
 "aclList" : [ "ALL" ]
   }, {
 "type" : "GROUP",
 "name" : "superuser1",
 "aclScope" : "ACCESS",
 "aclList" : [ "ALL" ]
   }, {
 "type" : "USER",
 "name" : "superuser1",
 "aclScope" : "ACCESS",
 "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
   } ]' does not match '"type" : "GROUP",
   .*"name" : "superuser1*",
   .*"aclScope" : "ACCESS",
   .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
   ```
   
   The test [sets user 
ACL](https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123),
 but [checks group 
ACL](https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125).
  I think this passed previously due to a bug that was 
[fixed](https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31)
 by [HDDS-1917](https://issues.apache.org/jira/browse/HDDS-1917).
   
   https://issues.apache.org/jira/browse/HDDS-1966
   
   ## How was this patch tested?
   
   Ran `ozonesecure` acceptance test, verified that key ACL checks were passing.
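
To make the mismatch concrete, a tiny sketch (with the JSON abbreviated from 
the output above) of why a GROUP-scoped pattern cannot match rights that were 
only granted to the USER entry:

```java
import java.util.regex.Pattern;

public final class AclCheckDemo {
  public static void main(String[] args) {
    // Abbreviated from the failing acceptance-test output above.
    String output = "{ \"type\" : \"USER\", \"name\" : \"superuser1\", "
        + "\"aclList\" : [ \"READ\", \"WRITE\", \"READ_ACL\", \"WRITE_ACL\" ] }";

    Pattern groupCheck = Pattern.compile(
        "\"type\" : \"GROUP\".*\"name\" : \"superuser1\"", Pattern.DOTALL);
    Pattern userCheck = Pattern.compile(
        "\"type\" : \"USER\".*\"name\" : \"superuser1\"", Pattern.DOTALL);

    System.out.println(groupCheck.matcher(output).find()); // false: no such GROUP entry
    System.out.println(userCheck.matcher(output).find());  // true: the test set a USER ACL
  }
}
```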





[GitHub] [hadoop] nandakumar131 commented on issue #1281: HDDS-1955. TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of assertion error. Contributed by Mukul Kumar Singh.

2019-08-14 Thread GitBox
nandakumar131 commented on issue #1281: HDDS-1955. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
assertion error. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1281#issuecomment-521247411
 
 
   Failures are not related. They are because of #1293





[GitHub] [hadoop] adoroszlai commented on issue #1293: HDDS-1965. Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread GitBox
adoroszlai commented on issue #1293: HDDS-1965. Compile error due to leftover 
ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293#issuecomment-521244832
 
 
   Thanks @nandakumar131 for the quick review and commit.





[GitHub] [hadoop] nandakumar131 merged pull request #1293: HDDS-1965. Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread GitBox
nandakumar131 merged pull request #1293: HDDS-1965. Compile error due to 
leftover ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1281: HDDS-1955. TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of assertion error. Contributed by Mukul Kumar Singh.

2019-08-14 Thread GitBox
hadoop-yetus commented on issue #1281: HDDS-1955. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
assertion error. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1281#issuecomment-521226293
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1457 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 136 | hadoop-ozone in trunk failed. |
   | -1 | compile | 50 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 838 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 196 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 100 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 133 | hadoop-ozone in the patch failed. |
   | -1 | compile | 51 | hadoop-ozone in the patch failed. |
   | -1 | javac | 51 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 671 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | the patch passed |
   | -1 | findbugs | 102 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 275 | hadoop-hdds in the patch passed. |
   | -1 | unit | 323 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 5512 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.s3.TestOzoneClientProducer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1281 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 759520ee2bf7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0e4b757 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/testReport/ |
   | Max. process+thread count | 515 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] adoroszlai commented on issue #1293: HDDS-1965. Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread GitBox
adoroszlai commented on issue #1293: HDDS-1965. Compile error due to leftover 
ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293#issuecomment-521222860
 
 
   @nandakumar131 please review





[jira] [Commented] (HADOOP-16505) Add ability to register custom signer with AWS SignerFactory

2019-08-14 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907217#comment-16907217
 ] 

Wei-Chiu Chuang commented on HADOOP-16505:
--

[~gabor.bota] [~ste...@apache.org] would you like to review this one?
Note there's a PR https://github.com/apache/hadoop/pull/1280 open for the same. 
I'm not sure which one is current.

> Add ability to register custom signer with AWS SignerFactory
> 
>
> Key: HADOOP-16505
> URL: https://issues.apache.org/jira/browse/HADOOP-16505
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, hadoop-aws
>Affects Versions: 3.3.0
>Reporter: Saurav Verma
>Assignee: Saurav Verma
>Priority: Major
> Attachments: HADOOP-16505.patch, hadoop-16505-1.patch
>
>
> Currently, the AWS SignerFactory restricts the class of Signer algorithms 
> that can be used. 
> We require an ability to register a custom Signer. The SignerFactory supports 
> this functionality through its {{registerSigner}} method. 
> By providing a fully qualified classname to the existing parameter 
> {{fs.s3a.signing-algorithm}}, the custom signer can be registered.






[GitHub] [hadoop] jojochuang opened a new pull request #1294: HDFS-14665. HttpFS: LISTSTATUS response is missing HDFS-specific fields

2019-08-14 Thread GitBox
jojochuang opened a new pull request #1294: HDFS-14665. HttpFS: LISTSTATUS 
response is missing HDFS-specific fields
URL: https://github.com/apache/hadoop/pull/1294
 
 
   Forked from PR #1291





[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-08-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907191#comment-16907191
 ] 

Hadoop QA commented on HADOOP-15565:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 21m  4s{color} 
| {color:red} root generated 15 new + 1457 unchanged - 15 fixed = 1472 total 
(was 1472) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 4s{color} | {color:green} root: The patch generated 0 new + 294 unchanged - 3 
fixed = 294 total (was 297) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}244m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.TestFileCorruption |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-15565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977561/HADOOP-15565.0006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| 

[GitHub] [hadoop] adoroszlai commented on issue #1293: HDDS-1965. Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread GitBox
adoroszlai commented on issue #1293: HDDS-1965. Compile error due to leftover 
ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293#issuecomment-521206661
 
 
   /label ozone





[GitHub] [hadoop] hadoop-yetus commented on issue #1293: HDDS-1965. Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread GitBox
hadoop-yetus commented on issue #1293: HDDS-1965. Compile error due to leftover 
ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293#issuecomment-521205005
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 67 | Docker mode activated. |
   ||| _ Prechecks _ |
   | -1 | dupname | 0 | The patch has 1  duplicated filenames that differ only 
in case. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1293/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1293 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f65e4a4ae90b 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0e4b757 |
   | dupname | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1293/1/artifact/out/dupnames.txt
 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1293/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] adoroszlai opened a new pull request #1293: HDDS-1965. Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread GitBox
adoroszlai opened a new pull request #1293: HDDS-1965. Compile error due to 
leftover ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293
 
 
   ## What changes were proposed in this pull request?
   
   A typo in the class name of `ScmBlockLocationTestingClient` was fixed in 
5a248de5115, but the original file is still present in the repo, causing a 
compile error.
   
   https://issues.apache.org/jira/browse/HDDS-1965





[jira] [Commented] (HADOOP-16505) Add ability to register custom signer with AWS SignerFactory

2019-08-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907175#comment-16907175
 ] 

Hadoop QA commented on HADOOP-16505:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 17 
new + 7 unchanged - 1 fixed = 24 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16505 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977573/HADOOP-16505.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 47c118e0a439 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0e4b757 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16480/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16480/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16480/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 308 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 

[GitHub] [hadoop] hadoop-yetus commented on issue #1292: HDDS-1964. TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread GitBox
hadoop-yetus commented on issue #1292: HDDS-1964. TestOzoneClientProducer fails 
with ConnectException
URL: https://github.com/apache/hadoop/pull/1292#issuecomment-521201591
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 50 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 135 | hadoop-ozone in trunk failed. |
   | -1 | compile | 48 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 837 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | trunk passed |
   | 0 | spotbugs | 192 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 99 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 136 | hadoop-ozone in the patch failed. |
   | -1 | compile | 50 | hadoop-ozone in the patch failed. |
   | -1 | javac | 50 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 61 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 632 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   | -1 | findbugs | 100 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 295 | hadoop-hdds in the patch passed. |
   | -1 | unit | 104 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 3856 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1292 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ebf668536fa9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0e4b757 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/testReport/ |
   | Max. process+thread count | 520 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16416) mark DynamoDBMetadataStore.deleteTrackingValueMap as final

2019-08-14 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907171#comment-16907171
 ] 

Gabor Bota commented on HADOOP-16416:
-

Thanks for working on this [~pingsutw]!

Please note that we use underscores in our constant names instead of 
camelCase, while still separating the words from each other.
For reference, check e.g. {{org.apache.hadoop.fs.s3a.Constants}}.
In this case, deleteTrackingValueMap would be DELETE_TRACKING_VALUE_MAP.
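
In code, the requested change amounts to something like the following; the 
field type and initializer are illustrative, not the actual 
DynamoDBMetadataStore declaration.

```java
import java.util.Collections;
import java.util.Map;

final class ConstantNamingDemo {
  // Before: private static Map<String, String> deleteTrackingValueMap = ...;
  // After: static final, renamed to upper snake case per Hadoop conventions.
  private static final Map<String, String> DELETE_TRACKING_VALUE_MAP =
      Collections.singletonMap("deleted", "true");

  private ConstantNamingDemo() {
  }
}
```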

> mark DynamoDBMetadataStore.deleteTrackingValueMap as final
> --
>
> Key: HADOOP-16416
> URL: https://issues.apache.org/jira/browse/HADOOP-16416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: kevin su
>Priority: Trivial
> Attachments: HADOOP-16416.001.patch, HADOOP-16416.002.patch
>
>
> S3Guard's {{DynamoDBMetadataStore.deleteTrackingValueMap}} field is static 
> and can/should be marked as final; its name changed to upper case to match 
> the coding conventions.






[GitHub] [hadoop] hadoop-yetus commented on issue #1226: HDDS-1610. applyTransaction failure should not be lost on restart.

2019-08-14 Thread GitBox
hadoop-yetus commented on issue #1226: HDDS-1610. applyTransaction failure 
should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#issuecomment-521192456
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | -1 | mvninstall | 141 | hadoop-ozone in trunk failed. |
   | -1 | compile | 49 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 915 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 201 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 101 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 137 | hadoop-ozone in the patch failed. |
   | -1 | compile | 52 | hadoop-ozone in the patch failed. |
   | -1 | cc | 52 | hadoop-ozone in the patch failed. |
   | -1 | javac | 52 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | -1 | findbugs | 101 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 339 | hadoop-hdds in the patch passed. |
   | -1 | unit | 329 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 4445 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.s3.TestOzoneClientProducer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1226 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 960f8c74122e 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0e4b757 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/testReport/ |
   | Max. process+thread count | 427 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-14 Thread GitBox
sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r313806350
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
 ##
 @@ -525,6 +689,59 @@ long save(File file, FSImageCompression compression) 
throws IOException {
   }
 }
 
+    private void enableSubSectionsIfRequired() {
+      boolean parallelEnabled = conf.getBoolean(
+          DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_KEY,
+          DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_DEFAULT);
+      int inodeThreshold = conf.getInt(
+          DFSConfigKeys.DFS_IMAGE_PARALLEL_INODE_THRESHOLD_KEY,
+          DFSConfigKeys.DFS_IMAGE_PARALLEL_INODE_THRESHOLD_DEFAULT);
+      int targetSections = conf.getInt(
+          DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_KEY,
+          DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_DEFAULT);
+      boolean compressionEnabled = conf.getBoolean(
+          DFSConfigKeys.DFS_IMAGE_COMPRESS_KEY,
+          DFSConfigKeys.DFS_IMAGE_COMPRESS_DEFAULT);
+
+      if (parallelEnabled) {
+        if (compressionEnabled) {
+          LOG.warn("Parallel Image loading is not supported when {} is set to" +
+              " true. Parallel loading will be disabled.",
+              DFSConfigKeys.DFS_IMAGE_COMPRESS_KEY);
+          writeSubSections = false;
+          return;
+        }
+        if (targetSections <= 0) {
+          LOG.warn("{} is set to {}. It must be greater than zero. Setting to" +
+              " default of {}",
+              DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_KEY,
+              targetSections,
+              DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_DEFAULT);
+          targetSections =
+              DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_DEFAULT;
+        }
+        if (inodeThreshold <= 0) {
+          LOG.warn("{} is set to {}. It must be greater than zero. Setting to" +
+              " default of {}",
 
 Review comment:
   Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-14 Thread GitBox
sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r313806054
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
 ##
 @@ -294,6 +368,19 @@ public int compare(FileSummary.Section s1, 
FileSummary.Section s2) {
* a particular step to be started for once.
*/
   Step currentStep = null;
+      boolean loadInParallel =
+          conf.getBoolean(DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_KEY,
+              DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_DEFAULT);
+      // TODO - check for compression and if enabled disable parallel
+
+      ExecutorService executorService = null;
+      ArrayList<FileSummary.Section> subSections =
+          getAndRemoveSubSections(sections);
+      if (loadInParallel) {
+        executorService = Executors.newFixedThreadPool(
+            conf.getInt(DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY,
+                DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_DEFAULT));
 
 Review comment:
   I added a log message here, and validated that the thread count setting is 
not less than 1; otherwise I reset it to the default (4). I also pulled the 
code that does this check and creates the executor into a private method, to 
reduce the noise in the already too long loadInternal method.
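   
   For illustration, such a helper could look roughly like this (a sketch; the 
method name `getParallelExecutorService` is made up here, only the 
`DFSConfigKeys` constants come from the patch above):
   
   ```
   private ExecutorService getParallelExecutorService() {
     // Validate the configured thread count; fall back to the default (4)
     // when it is less than 1, as described above.
     int threads = conf.getInt(DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY,
         DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_DEFAULT);
     if (threads < 1) {
       LOG.warn("{} is set to {}. It must be at least 1. Falling back to the"
           + " default of {}", DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY,
           threads, DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_DEFAULT);
       threads = DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_DEFAULT;
     }
     LOG.info("Loading the fsimage in parallel with {} threads", threads);
     return Executors.newFixedThreadPool(threads);
   }
   ```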


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11461) Namenode stdout log contains IllegalAccessException

2019-08-14 Thread xinzhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907143#comment-16907143
 ] 

xinzhang commented on HADOOP-11461:
---

[~gtCarrera9]

Thanks. (y)

Good tips.

> Namenode stdout log contains IllegalAccessException
> ---
>
> Key: HADOOP-11461
> URL: https://issues.apache.org/jira/browse/HADOOP-11461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>Priority: Major
>
> We frequently see the following exception in namenode out log file.
> {noformat}
> Nov 19, 2014 8:11:19 PM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator 
> attachTypes
> INFO: Couldn't find JAX-B element for class javax.ws.rs.core.Response
> Nov 19, 2014 8:11:19 PM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 
> resolve
> SEVERE: null
> java.lang.IllegalAccessException: Class 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 can 
> not access a member of class javax.ws.rs.core.Response with modifiers 
> "protected"
> at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
> at java.lang.Class.newInstance(Class.java:368)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8.resolve(WadlGeneratorJAXBGrammarGenerator.java:467)
> at 
> com.sun.jersey.server.wadl.WadlGenerator$ExternalGrammarDefinition.resolve(WadlGenerator.java:181)
> at 
> com.sun.jersey.server.wadl.ApplicationDescription.resolve(ApplicationDescription.java:81)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.attachTypes(WadlGeneratorJAXBGrammarGenerator.java:518)
> at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:124)
> at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:104)
> at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:120)
> at 
> com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:98)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> 

[GitHub] [hadoop] sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-14 Thread GitBox
sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r313799031
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -217,33 +272,147 @@ void loadINodeDirectorySection(InputStream in) throws 
IOException {
 INodeDirectory p = dir.getInode(e.getParent()).asDirectory();
 for (long id : e.getChildrenList()) {
   INode child = dir.getInode(id);
-  addToParent(p, child);
+  if (addToParent(p, child)) {
+if (child.isFile()) {
+  inodeList.add(child);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
 
 Review comment:
   I have added a message like this to both adding the inode and inode 
references to the directory:
   ```
   LOG.warn("Failed to add the inode reference {} to the directory {}",
   ref.getId(), p.getId());
   ```
   I opted to log only the inode and directory "inode id", as I am not sure 
the system can resolve the full path of an inode or directory at this stage, 
while it is still loading the image. Also, this "should never happen", so 
hopefully we will not see these messages in practice; if we do, it will likely 
require manual investigation of image corruption, and the ID numbers should be 
enough to start with.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16505) Add ability to register custom signer with AWS SignerFactory

2019-08-14 Thread Saurav Verma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saurav Verma updated HADOOP-16505:
--
Attachment: HADOOP-16505.patch
Status: Patch Available  (was: Open)

> Add ability to register custom signer with AWS SignerFactory
> 
>
> Key: HADOOP-16505
> URL: https://issues.apache.org/jira/browse/HADOOP-16505
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, hadoop-aws
>Affects Versions: 3.3.0
>Reporter: Saurav Verma
>Assignee: Saurav Verma
>Priority: Major
> Attachments: HADOOP-16505.patch, hadoop-16505-1.patch
>
>
> Currently, the AWS SignerFactory restricts the class of Signer algorithms 
> that can be used. 
> We require an ability to register a custom Signer. The SignerFactory supports 
> this functionality through its {{registerSigner}} method. 
> By providing a fully qualified classname to the existing parameter 
> {{fs.s3a.signing-algorithm}}, the custom signer can be registered.
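
For illustration, registration along the proposed lines could look like this (a 
sketch; {{com.example.MySigner}} is a hypothetical signer class, only 
{{SignerFactory.registerSigner}} and {{fs.s3a.signing-algorithm}} come from the 
description above):

```
import com.amazonaws.auth.SignerFactory;

// Register a hypothetical custom signer implementation under a name.
// com.example.MySigner would implement com.amazonaws.auth.Signer.
SignerFactory.registerSigner("MySigner", com.example.MySigner.class);

// The signer can then be selected for S3A, for example in core-site.xml:
//   fs.s3a.signing-algorithm = MySigner
```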



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16505) Add ability to register custom signer with AWS SignerFactory

2019-08-14 Thread Saurav Verma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saurav Verma updated HADOOP-16505:
--
Status: Open  (was: Patch Available)

Submitting another patch

> Add ability to register custom signer with AWS SignerFactory
> 
>
> Key: HADOOP-16505
> URL: https://issues.apache.org/jira/browse/HADOOP-16505
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, hadoop-aws
>Affects Versions: 3.3.0
>Reporter: Saurav Verma
>Assignee: Saurav Verma
>Priority: Major
> Attachments: hadoop-16505-1.patch
>
>
> Currently, the AWS SignerFactory restricts the class of Signer algorithms 
> that can be used. 
> We require an ability to register a custom Signer. The SignerFactory supports 
> this functionality through its {{registerSigner}} method. 
> By providing a fully qualified classname to the existing parameter 
> {{fs.s3a.signing-algorithm}}, the custom signer can be registered.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1292: HDDS-1964. TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread GitBox
adoroszlai commented on issue #1292: HDDS-1964. TestOzoneClientProducer fails 
with ConnectException
URL: https://github.com/apache/hadoop/pull/1292#issuecomment-521183613
 
 
   /label ozone


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai opened a new pull request #1292: HDDS-1964. TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread GitBox
adoroszlai opened a new pull request #1292: HDDS-1964. TestOzoneClientProducer 
fails with ConnectException
URL: https://github.com/apache/hadoop/pull/1292
 
 
   ## What changes were proposed in this pull request?
   
   `TestOzoneClientProducer` verifies that `RpcClient` cannot be created 
because OM address is not configured.  The call to `producer.createClient()` is 
expected to fail with the message `Couldn't create protocol`, which is 
triggered by `IllegalArgumentException: Could not find any configured addresses 
for OM. Please configure the system with ozone.om.address`.  
bf457797f607f3aeeb2292e63f440cb13e15a2d9 added the default address as an 
explicitly configured value, so client creation now progresses further and 
fails when it cannot connect to OM (which is not started by the unit test).
   
   This change simply sets the OM address back to its previously empty value for this test.
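   
   For reference, the test-side change is essentially along these lines (a 
sketch; the exact setup code in the test may differ):
   
   ```
   import org.apache.hadoop.hdds.conf.OzoneConfiguration;
   import org.apache.hadoop.ozone.om.OMConfigKeys;
   
   // Explicitly configure an empty OM address so that client creation fails
   // early with "Could not find any configured addresses for OM", which is
   // what the test expects.
   OzoneConfiguration conf = new OzoneConfiguration();
   conf.set(OMConfigKeys.OZONE_OM_ADDRESS_KEY, "");
   ```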
   
   It also adds log4j config for `s3gateway` tests to produce better output 
next time, because 
[currently](https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer-output.txt)
 it is not very helpful.
   
   https://issues.apache.org/jira/browse/HDDS-1964
   
   ## How was this patch tested?
   
   ```
   $ mvn -Phdds -pl :hadoop-ozone-s3gateway test
   ...
   [INFO] Tests run: 77, Failures: 0, Errors: 0, Skipped: 0
   ...
   [INFO] BUILD SUCCESS
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-14 Thread GitBox
sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r313794899
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -217,33 +272,147 @@ void loadINodeDirectorySection(InputStream in) throws 
IOException {
 INodeDirectory p = dir.getInode(e.getParent()).asDirectory();
 for (long id : e.getChildrenList()) {
   INode child = dir.getInode(id);
-  addToParent(p, child);
+  if (addToParent(p, child)) {
+if (child.isFile()) {
+  inodeList.add(child);
+}
+if (inodeList.size() >= 1000) {
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on issue #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-14 Thread GitBox
sodonnel commented on issue #1028: HDFS-14617 - Improve fsimage load time by 
writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#issuecomment-521180160
 
 
   > And we should make sure oiv tool works with this change. We can file 
another jira to address the oiv issue.
   
   I checked OIV, and it can load images that have parallel sections in the 
image index with no problems, and it does not produce any warnings. The reason 
is that this change simply adds additional sections to the image index, so we 
still have:
   
   ```
   INODE START_OFFSET LENGTH
 INODE_SUB START_OFFSET LENGTH
 INODE_SUB START_OFFSET LENGTH
 INODE_SUB START_OFFSET LENGTH
 ...
   INODE_DIR START_OFFSET LENGTH
 INODE_DIR_SUB START_OFFSET LENGTH
 INODE_DIR_SUB START_OFFSET LENGTH
 INODE_DIR_SUB START_OFFSET LENGTH
 ...
   ```
   
   This means that if a loader looks for certain sections, it does not matter 
which other sections are there, provided it ignores them. The OIV's 
"delimited" processor, for example, uses this pattern:
   
   ```
   for (FileSummary.Section section : sections) {
     if (SectionName.fromString(section.getName()) == SectionName.INODE) {
       fin.getChannel().position(section.getOffset());
       is = FSImageUtil.wrapInputStreamForCompression(conf,
           summary.getCodec(), new BufferedInputStream(new LimitInputStream(
           fin, section.getLength())));
       outputINodes(is);
     }
   }
   ```
   
   It loops over all the sections in the "FileSummary Index" looking for the 
ones it wants (INODE in the above example) and then ignores all the others.
   
   In the case of the XML processor, which is probably the most important, it 
works in a very similar way to how the namenode loads the image. It loops over 
all sections and uses a case statement to process the sections it is interested 
in, and skips others:
   
   ```
   for (FileSummary.Section s : sections) {
     fin.getChannel().position(s.getOffset());
     InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
         summary.getCodec(), new BufferedInputStream(new LimitInputStream(
         fin, s.getLength())));

     SectionName sectionName = SectionName.fromString(s.getName());
     if (sectionName == null) {
       throw new IOException("Unrecognized section " + s.getName());
     }
     switch (sectionName) {
     case NS_INFO:
       dumpNameSection(is);
       break;
     case STRING_TABLE:
       loadStringTable(is);
       break;
     case ERASURE_CODING:
       dumpErasureCodingSection(is);
       break;
     case INODE:
       dumpINodeSection(is);
       break;
     case INODE_REFERENCE:
       dumpINodeReferenceSection(is);
       break;
     // ... further cases elided ...
     default:
       break;
     }
   }
   out.print("\n");
   ```
   
   Note the default clause, where it does nothing if it encounters a section 
name it does not expect.
   
   I tested running the other processors (File Distribution, DetectCorruption 
and Web) and they all worked with no issues.
   
   Two future improvements we could do in new Jiras are:
   
   1. Make the ReverseXML processor write out the sub-section headers so it 
creates a parallel enabled image (if the relevant settings are enabled)
   
   2. Investigate allowing OIV to process the image in parallel if it has the 
sub-sections in the index and parallel is enabled.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart.

2019-08-14 Thread GitBox
bshashikant commented on a change in pull request #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#discussion_r313780023
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -265,6 +269,13 @@ public void persistContainerSet(OutputStream out) throws 
IOException {
   public long takeSnapshot() throws IOException {
 TermIndex ti = getLastAppliedTermIndex();
 long startTime = Time.monotonicNow();
+if (!isStateMachineHealthy.get()) {
+  String msg =
+  "Failed to take snapshot " + " for " + gid + " as the stateMachine"
+  + " is unhealthy. The last applied index is at " + ti;
 
 Review comment:
   Addressed in the latest patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart.

2019-08-14 Thread GitBox
bshashikant commented on a change in pull request #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#discussion_r313780014
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -674,30 +681,60 @@ public void notifyIndexUpdate(long term, long index) {
   if (cmdType == Type.WriteChunk || cmdType == Type.PutSmallFile) {
 builder.setCreateContainerSet(createContainerSet);
   }
+  CompletableFuture<Message> applyTransactionFuture =
+  new CompletableFuture<>();
   // Ensure the command gets executed in a separate thread than
   // stateMachineUpdater thread which is calling applyTransaction here.
-  CompletableFuture future = CompletableFuture
-  .supplyAsync(() -> runCommand(requestProto, builder.build()),
+  CompletableFuture future =
+  CompletableFuture.supplyAsync(
+  () -> runCommand(requestProto, builder.build()),
   getCommandExecutor(requestProto));
-
-  future.thenAccept(m -> {
+  future.thenApply(r -> {
 if (trx.getServerRole() == RaftPeerRole.LEADER) {
   long startTime = (long) trx.getStateMachineContext();
   metrics.incPipelineLatency(cmdType,
   Time.monotonicNowNanos() - startTime);
 }
-
-final Long previous =
-applyTransactionCompletionMap
-.put(index, trx.getLogEntry().getTerm());
-Preconditions.checkState(previous == null);
-if (cmdType == Type.WriteChunk || cmdType == Type.PutSmallFile) {
-  metrics.incNumBytesCommittedCount(
+if (r.getResult() != ContainerProtos.Result.SUCCESS) {
+  StorageContainerException sce =
+  new StorageContainerException(r.getMessage(), r.getResult());
+  LOG.error(
+  "gid {} : ApplyTransaction failed. cmd {} logIndex {} msg : "
+  + "{} Container Result: {}", gid, r.getCmdType(), index,
+  r.getMessage(), r.getResult());
+  metrics.incNumApplyTransactionsFails();
+  ratisServer.handleApplyTransactionFailure(gid, trx.getServerRole());
+  // Since the applyTransaction now is completed exceptionally,
+  // before any further snapshot is taken , the exception will be
+  // caught in stateMachineUpdater in Ratis and ratis server will
+  // shutdown.
+  applyTransactionFuture.completeExceptionally(sce);
 
 Review comment:
   Addressed in the latest patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16500) S3ADelegationTokens to only log at debug on startup

2019-08-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907050#comment-16907050
 ] 

Hudson commented on HADOOP-16500:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17117 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17117/])
HADOOP-16500 S3ADelegationTokens to only log at debug on startup (gabor.bota: 
rev 0e4b757955ae8da1651b870c12458e3344c0b613)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/SessionTokenBinding.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3ADelegationTokens.java


> S3ADelegationTokens to only log at debug on startup
> ---
>
> Key: HADOOP-16500
> URL: https://issues.apache.org/jira/browse/HADOOP-16500
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> downgrade the log at info to log at debug when S3A comes up with DT support. 
> Otherwise it's too noisy.
> Things still get printed when tokens are created.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart.

2019-08-14 Thread GitBox
mukul1987 commented on a change in pull request #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#discussion_r313772527
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -674,30 +681,60 @@ public void notifyIndexUpdate(long term, long index) {
   if (cmdType == Type.WriteChunk || cmdType == Type.PutSmallFile) {
 builder.setCreateContainerSet(createContainerSet);
   }
+  CompletableFuture<Message> applyTransactionFuture =
+  new CompletableFuture<>();
   // Ensure the command gets executed in a separate thread than
   // stateMachineUpdater thread which is calling applyTransaction here.
-  CompletableFuture future = CompletableFuture
-  .supplyAsync(() -> runCommand(requestProto, builder.build()),
+  CompletableFuture future =
+  CompletableFuture.supplyAsync(
+  () -> runCommand(requestProto, builder.build()),
   getCommandExecutor(requestProto));
-
-  future.thenAccept(m -> {
+  future.thenApply(r -> {
 if (trx.getServerRole() == RaftPeerRole.LEADER) {
   long startTime = (long) trx.getStateMachineContext();
   metrics.incPipelineLatency(cmdType,
   Time.monotonicNowNanos() - startTime);
 }
-
-final Long previous =
-applyTransactionCompletionMap
-.put(index, trx.getLogEntry().getTerm());
-Preconditions.checkState(previous == null);
-if (cmdType == Type.WriteChunk || cmdType == Type.PutSmallFile) {
-  metrics.incNumBytesCommittedCount(
+if (r.getResult() != ContainerProtos.Result.SUCCESS) {
+  StorageContainerException sce =
+  new StorageContainerException(r.getMessage(), r.getResult());
+  LOG.error(
+  "gid {} : ApplyTransaction failed. cmd {} logIndex {} msg : "
+  + "{} Container Result: {}", gid, r.getCmdType(), index,
+  r.getMessage(), r.getResult());
+  metrics.incNumApplyTransactionsFails();
+  ratisServer.handleApplyTransactionFailure(gid, trx.getServerRole());
+  // Since the applyTransaction now is completed exceptionally,
+  // before any further snapshot is taken , the exception will be
+  // caught in stateMachineUpdater in Ratis and ratis server will
+  // shutdown.
+  applyTransactionFuture.completeExceptionally(sce);
 
 Review comment:
   Let's move the ratisServer.handleApplyTransactionFailure(gid, 
trx.getServerRole()) call so that it is the last line in the if block.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart.

2019-08-14 Thread GitBox
mukul1987 commented on a change in pull request #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#discussion_r313771945
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -265,6 +269,13 @@ public void persistContainerSet(OutputStream out) throws 
IOException {
   public long takeSnapshot() throws IOException {
 TermIndex ti = getLastAppliedTermIndex();
 long startTime = Time.monotonicNow();
+if (!isStateMachineHealthy.get()) {
+  String msg =
+  "Failed to take snapshot " + " for " + gid + " as the stateMachine"
+  + " is unhealthy. The last applied index is at " + ti;
 
 Review comment:
   Let's log this message as well.
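   
   For illustration, the logged variant could look like this (a sketch, not 
the exact patch):
   
   ```
   if (!isStateMachineHealthy.get()) {
     String msg =
         "Failed to take snapshot for " + gid + " as the stateMachine"
             + " is unhealthy. The last applied index is at " + ti;
     // Log the failure before surfacing it to the caller.
     LOG.error(msg);
     throw new IOException(msg);
   }
   ```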


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16500) S3ADelegationTokens to only log at debug on startup

2019-08-14 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16500.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

Committed to trunk.

> S3ADelegationTokens to only log at debug on startup
> ---
>
> Key: HADOOP-16500
> URL: https://issues.apache.org/jira/browse/HADOOP-16500
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> downgrade the log at info to log at debug when S3A comes up with DT support. 
> Otherwise it's too noisy.
> Things still get printed when tokens are created.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16391) Duplicate values in rpcDetailedMetrics

2019-08-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907036#comment-16907036
 ] 

Hadoop QA commented on HADOOP-16391:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 17 unchanged - 0 fixed = 19 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977557/HADOOP-16391-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 795de952d7b5 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 846848a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16478/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16478/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16478/testReport/ |
| Max. process+thread count | 1480 (vs. ulimit of 5500) |
| modules | C: 

[GitHub] [hadoop] bgaborg merged pull request #1269: HADOOP-16500 S3ADelegationTokens to only log at debug on startup

2019-08-14 Thread GitBox
bgaborg merged pull request #1269: HADOOP-16500 S3ADelegationTokens to only log 
at debug on startup
URL: https://github.com/apache/hadoop/pull/1269
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16500) S3ADelegationTokens to only log at debug on startup

2019-08-14 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907034#comment-16907034
 ] 

Gabor Bota commented on HADOOP-16500:
-

+1 for GitHub Pull Request #1269. Committing this.

> S3ADelegationTokens to only log at debug on startup
> ---
>
> Key: HADOOP-16500
> URL: https://issues.apache.org/jira/browse/HADOOP-16500
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> downgrade the log at info to log at debug when S3A comes up with DT support. 
> Otherwise it's too noisy.
> Things still get printed when tokens are created.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-08-14 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HADOOP-15565:
-
Attachment: HADOOP-15565.0006.patch

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, 
> HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, 
> HADOOP-15565.0006.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the child 
> filesystems because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance and the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-08-14 Thread Jinglun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906972#comment-16906972
 ] 

Jinglun commented on HADOOP-15565:
--

Thanks [~xkrogen] for your nice comments.
{quote}I don't think publicly exposing cacheSize() on FileSystem is a great 
idea. Can we make it package-private, and if it is needed in non-package-local 
tests, use a test utility to export it publicly?
{quote}
Reasonable! I don't want to start a new file so I'll add the test utility to 
TestFileUtil.java.
{quote}Is there a chance the cache will be accessed in a multi-threaded way? If 
so we need to harden it for concurrent access. Looks like it will only work in 
a single-threaded fashion currently. If the FS instances are actually all 
created on startup, then I think we should explicitly populate the cache on 
startup.
{quote}
The fs instances are all created on startup, so I'll make the cache 
unmodifiable; that way we know it is only created on startup and won't be 
modified afterwards.
{quote}I agree that swallowing exceptions on child FS close is the right move, 
but probably we should at least put them at INFO level?
{quote}
Right! I'll change it.
{quote}This seems less strict than FileSystem.CACHE when checking for equality; 
it doesn't use the UserGroupInformation at all. I think this is safe because 
the cache is local to a single ViewFileSystem, so all of the inner cached 
instances must share the same UGI, but please help me to confirm.
{quote}
Yes, it's safe. As all the instances share the same UGI we can make the Key 
simple.
{quote}We can use Objects.hash() for the hashCode() method of Key.
{quote}
Right! That's a good practice! I'll update it.
{quote}On ViewFileSystem L257, you shouldn't initialize fs – you can just 
declare it: FileSystem fs; (this allows the compiler to help ensure that you 
remember to initialize it later)
{quote}
Right! I'll update it.

 

Upload patch-006.
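
For illustration, the inner cache Key and the close behaviour discussed above 
could look roughly like this (a sketch; class, field and message names are 
made up here, assuming java.util.Objects and an innerCache map field):

```
private static class Key {
  private final String scheme;
  private final String authority;

  Key(URI uri) {
    scheme = uri.getScheme();
    authority = uri.getAuthority();
  }

  @Override
  public int hashCode() {
    // Objects.hash(), as suggested in the review comments.
    return Objects.hash(scheme, authority);
  }

  @Override
  public boolean equals(Object obj) {
    if (!(obj instanceof Key)) {
      return false;
    }
    Key that = (Key) obj;
    return Objects.equals(scheme, that.scheme)
        && Objects.equals(authority, that.authority);
  }
}

@Override
public void close() throws IOException {
  super.close();
  // Close every child filesystem; swallow failures but log them at INFO.
  for (FileSystem fs : innerCache.values()) {
    try {
      fs.close();
    } catch (IOException e) {
      LOG.info("Failed to close child file system " + fs.getUri(), e);
    }
  }
}
```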

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, 
> HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, HADOOP-15565.0005.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the child 
> filesystems because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance and the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1250: HDDS-1929. OM started on recon host in ozonesecure compose

2019-08-14 Thread GitBox
adoroszlai commented on issue #1250: HDDS-1929. OM started on recon host in 
ozonesecure compose
URL: https://github.com/apache/hadoop/pull/1250#issuecomment-521129973
 
 
   Thanks @anuengineer for committing it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl opened a new pull request #1291: HDFS-14665. HttpFS: LISTSTATUS response is missing HDFS-specific fields

2019-08-14 Thread GitBox
smengcl opened a new pull request #1291: HDFS-14665. HttpFS: LISTSTATUS 
response is missing HDFS-specific fields
URL: https://github.com/apache/hadoop/pull/1291
 
 
   Rebased on branch-3.1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #1232: HDDS-1914. Ozonescript example docker-compose cluster can't be started

2019-08-14 Thread GitBox
anuengineer closed pull request #1232: HDDS-1914. Ozonescript example 
docker-compose cluster can't be started
URL: https://github.com/apache/hadoop/pull/1232
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #1232: HDDS-1914. Ozonescript example docker-compose cluster can't be started

2019-08-14 Thread GitBox
anuengineer commented on issue #1232: HDDS-1914. Ozonescript example 
docker-compose cluster can't be started
URL: https://github.com/apache/hadoop/pull/1232#issuecomment-521122020
 
 
   Awesome, works well. I have also independently tested it on my setup. 
@adoroszlai  Thanks for the review and confirmation. I will commit this soon. 
   
   Just a minor detail that I noticed:
   ```
   + docker-compose exec scm /opt/hadoop/sbin/start-ozone.sh
   Starting datanodes
   82245dc7cd2f: Warning: Permanently added '82245dc7cd2f,172.20.0.4' (ECDSA) to the list of known hosts.
   82245dc7cd2f: datanode is running as process 47.  Stop it first.
   Starting Ozone Manager nodes [om]
   om: Warning: Permanently added 'om,172.20.0.2' (ECDSA) to the list of known hosts.
   Starting storage container manager nodes [scm]
   scm: Warning: Permanently added 'scm,172.20.0.3' (ECDSA) to the list of known hosts.
   scm: scm is running as process 398.  Stop it first.
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1253: HDFS-8631. WebHDFS : Support setQuota

2019-08-14 Thread GitBox
hadoop-yetus commented on issue #1253: HDFS-8631. WebHDFS : Support setQuota
URL: https://github.com/apache/hadoop/pull/1253#issuecomment-521120553
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 15 | HDFS-8631 does not apply to trunk. Rebase required? 
Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | JIRA Issue | HDFS-8631 |
   | JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860576/HDFS-8631-006.patch |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1253/8/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16391) Duplicate values in rpcDetailedMetrics

2019-08-14 Thread Bilwa S T (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906927#comment-16906927
 ] 

Bilwa S T commented on HADOOP-16391:


Thanks  [~vinayakumarb] [~xkrogen] for reviewing patch. I have updated patch. 
Please review

> Duplicate values in rpcDetailedMetrics
> --
>
> Key: HADOOP-16391
> URL: https://issues.apache.org/jira/browse/HADOOP-16391
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: HADOOP-16391-001.patch, HADOOP-16391-002.patch, 
> image-2019-06-25-20-30-15-395.png, screenshot-1.png, screenshot-2.png
>
>
> In RpcDetailedMetrics, init is called twice: once for the deferredRpcRates 
> metrics and once for the rates metrics, which causes duplicate values in RM 
> and NM metrics.
>  !image-2019-06-25-20-30-15-395.png! 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #1250: HDDS-1929. OM started on recon host in ozonesecure compose

2019-08-14 Thread GitBox
anuengineer commented on issue #1250: HDDS-1929. OM started on recon host in 
ozonesecure compose
URL: https://github.com/apache/hadoop/pull/1250#issuecomment-521117447
 
 
   @adoroszlai Thanks for fixing this issue. @vivekratnavel  and @xiaoyuyao  
Thanks for the reviews. I have committed this to the trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #1250: HDDS-1929. OM started on recon host in ozonesecure compose

2019-08-14 Thread GitBox
anuengineer closed pull request #1250: HDDS-1929. OM started on recon host in 
ozonesecure compose
URL: https://github.com/apache/hadoop/pull/1250
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16391) Duplicate values in rpcDetailedMetrics

2019-08-14 Thread Bilwa S T (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated HADOOP-16391:
---
Attachment: HADOOP-16391-002.patch

> Duplicate values in rpcDetailedMetrics
> --
>
> Key: HADOOP-16391
> URL: https://issues.apache.org/jira/browse/HADOOP-16391
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: HADOOP-16391-001.patch, HADOOP-16391-002.patch, 
> image-2019-06-25-20-30-15-395.png, screenshot-1.png, screenshot-2.png
>
>
> In RpcDetailedMetrics, init is called twice: once for the deferredRpcRates 
> metrics and once for the rates metrics, which causes duplicate values in RM 
> and NM metrics.
>  !image-2019-06-25-20-30-15-395.png! 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org