[jira] [Updated] (HDFS-14008) NN should log snapshotdiff report

2018-10-29 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14008:

Status: Patch Available  (was: In Progress)

> NN should log snapshotdiff report
> -
>
> Key: HDFS-14008
> URL: https://issues.apache.org/jira/browse/HDFS-14008
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.3, 3.1.1, 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-14008.001.patch, HDFS-14008.002.patch
>
>
> It will be helpful to log a message for snapshotdiff so that snapshotdiff 
> operations can be correlated against memory spikes in the NN heap. It would be 
> good to log the details below at the end of a snapshot diff operation; this will 
> tell us the time spent in the snapshotdiff operation and the number of 
> files/directories processed and compared.
> a) Total dirs processed
> b) Total dirs compared
> c) Total files processed
> d) Total files compared
> e) Total children listing time
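For illustration, a minimal sketch of the kind of summary line this asks for, emitted once at the end of the diff computation; the class and counter names here are hypothetical, not the attached patch:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical holder for the per-operation counters listed above.
class SnapshotDiffStats {
  private static final Logger LOG =
      LoggerFactory.getLogger(SnapshotDiffStats.class);

  long dirsProcessed;
  long dirsCompared;
  long filesProcessed;
  long filesCompared;
  long childrenListingTimeMs;

  // Called once when the snapshot diff computation finishes.
  void logSummary(String snapshotRoot, String from, String to, long elapsedMs) {
    LOG.info("SnapshotDiffReport for {} from {} to {}: elapsedTime={} ms, "
            + "dirsProcessed={}, dirsCompared={}, filesProcessed={}, "
            + "filesCompared={}, childrenListingTime={} ms",
        snapshotRoot, from, to, elapsedMs, dirsProcessed, dirsCompared,
        filesProcessed, filesCompared, childrenListingTimeMs);
  }
}
{code}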






[jira] [Updated] (HDFS-14008) NN should log snapshotdiff report

2018-10-29 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14008:

Attachment: HDFS-14008.002.patch

> NN should log snapshotdiff report
> -
>
> Key: HDFS-14008
> URL: https://issues.apache.org/jira/browse/HDFS-14008
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0, 3.1.1, 3.0.3
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-14008.001.patch, HDFS-14008.002.patch
>
>
> It will be helpful to log a message for snapshotdiff so that snapshotdiff 
> operations can be correlated against memory spikes in the NN heap. It would be 
> good to log the details below at the end of a snapshot diff operation; this will 
> tell us the time spent in the snapshotdiff operation and the number of 
> files/directories processed and compared.
> a) Total dirs processed
> b) Total dirs compared
> c) Total files processed
> d) Total files compared
> e) Total children listing time






[jira] [Updated] (HDFS-14008) NN should log snapshotdiff report

2018-10-29 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14008:

Status: In Progress  (was: Patch Available)

> NN should log snapshotdiff report
> -
>
> Key: HDFS-14008
> URL: https://issues.apache.org/jira/browse/HDFS-14008
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.3, 3.1.1, 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-14008.001.patch
>
>
> It will be helpful to log a message for snapshotdiff so that snapshotdiff 
> operations can be correlated against memory spikes in the NN heap. It would be 
> good to log the details below at the end of a snapshot diff operation; this will 
> tell us the time spent in the snapshotdiff operation and the number of 
> files/directories processed and compared.
> a) Total dirs processed
> b) Total dirs compared
> c) Total files processed
> d) Total files compared
> e) Total children listing time






[jira] [Created] (HDFS-14037) Fix SSLFactory truststore reloader thread leak in URLConnectionFactory

2018-10-29 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-14037:
---

 Summary: Fix SSLFactory truststore reloader thread leak in 
URLConnectionFactory
 Key: HDFS-14037
 URL: https://issues.apache.org/jira/browse/HDFS-14037
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, webhdfs
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


This is reported by [~yoshiata]. It is a similar issue to HADOOP-11368 and 
YARN-5309, but in URLConnectionFactory.
{quote}The SSLFactory created in newSslConnConfigurator creates a 
ReloadingX509TrustManager instance, which in turn starts a trust store reloader 
thread.
However, the SSLFactory is never destroyed, and hence the trust store reloader 
threads are not killed.
{quote}

We observed many leaked threads when we used swebhdfs from a NiFi cluster.
{noformat}
"Truststore reloader thread" Id=221 TIMED_WAITING  on null
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run(ReloadingX509TrustManager.java:189)
at java.lang.Thread.run(Thread.java:748)
{noformat}
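For illustration, a minimal sketch of the direction a fix could take: hold on to the SSLFactory that newSslConnConfigurator creates and destroy it when the connection factory is torn down. The close() wiring below is an assumption for illustration, not the committed HDFS-14037 patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.SSLFactory;

// Sketch only: a wrapper that owns the SSLFactory it initializes.
class SslConnConfiguratorSketch implements AutoCloseable {
  private final SSLFactory sslFactory;

  SslConnConfiguratorSketch(Configuration conf) throws Exception {
    sslFactory = new SSLFactory(SSLFactory.Mode.CLIENT, conf);
    sslFactory.init(); // this is what starts the "Truststore reloader thread"
  }

  @Override
  public void close() {
    // Without this call the reloader thread keeps running, which is the leak
    // seen in the NiFi/swebhdfs thread dump above.
    sslFactory.destroy();
  }
}
{code}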






[jira] [Updated] (HDFS-14036) RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by default

2018-10-29 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14036:
---
Attachment: HDFS-14036.001.patch

> RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by default
> -
>
> Key: HDFS-14036
> URL: https://issues.apache.org/jira/browse/HDFS-14036
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14036.001.patch
>
>
> Currently, the default values from hdfs-rbf-default.xml are not being set by 
> default.
> We should add them to HdfsConfiguration by default.
> This may break some unit tests, so we would need to tune some RBF unit tests.
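For illustration, a minimal sketch of the proposed registration, assuming it simply lands next to the existing default resources in HdfsConfiguration's static initializer; the resource ordering and the sample key are assumptions, not the attached patch:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: shows where the RBF defaults would be registered.
public class HdfsConfigurationSketch extends Configuration {
  static {
    Configuration.addDefaultResource("hdfs-default.xml");
    Configuration.addDefaultResource("hdfs-rbf-default.xml"); // proposed addition
    Configuration.addDefaultResource("hdfs-site.xml");
  }

  public static void main(String[] args) {
    // With the default resource registered, router keys resolve to their
    // documented defaults instead of null.
    Configuration conf = new HdfsConfigurationSketch();
    System.out.println(conf.get("dfs.federation.router.rpc.enable"));
  }
}
{code}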






[jira] [Commented] (HDFS-14036) RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by default

2018-10-29 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668111#comment-16668111
 ] 

CR Hota commented on HDFS-14036:


[~elgoiri]

My local unit-test setup doesn't work well. I will rely on Yetus to help review 
the patches for tests.

> RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by default
> -
>
> Key: HDFS-14036
> URL: https://issues.apache.org/jira/browse/HDFS-14036
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
>
> Currently, the default values from hdfs-rbf-default.xml are not being set by 
> default.
> We should add them to HdfsConfiguration by default.
> This may break some unit tests, so we would need to tune some RBF unit tests.






[jira] [Commented] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668082#comment-16668082
 ] 

Hadoop QA commented on HDDS-580:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
40s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
11s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
50s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
30s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
20s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 21s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 46s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 46s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
43s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (HDDS-749) Restructure BlockId class in Ozone

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668080#comment-16668080
 ] 

Hadoop QA commented on HDDS-749:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  

[jira] [Comment Edited] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668076#comment-16668076
 ] 

Yiqun Lin edited comment on HDDS-759 at 10/30/18 3:43 AM:
--

+1 for the idea, [~arpitagarwal]. The patch looks almost good to me. Some minor 
comments from me:

*ServerUtils#getScmDbDir*
{noformat}
LOG.warn("{} is not configured. We recommend adding this setting. " + 
+"Falling back to {} instead.",
+HddsConfigKeys.OZONE_METADATA_DIRS, ScmConfigKeys.OZONE_SCM_DB_DIRS);
{noformat}
The parameters should be {{OZONE_SCM_DB_DIRS}} first, then 
{{OZONE_METADATA_DIRS}}.
 *The same problem exists in {{OmUtils#getOmDbDir}}*; please take a look.

*OmUtils.java*
 The Logger instance is obtained incorrectly in this class.

 *Some common problems in the UTs (TestHddsServerUtils and TestOmUtils)*:
 * We should delete the meta dir after each test.
 * It would be better to add a corner case where no meta dir (including the 
om/scm dir) is configured, and to verify the thrown exception.
 * The argument order in {{assertEquals(...getOm/ScmDbDir(conf), metaDir);}} 
seems incorrect: metaDir is the expected value and getOm/ScmDbDir(conf) is the 
actual value.


was (Author: linyiqun):
+1 for the idea, [~arpitagarwal]. The patch looks almost good to me. Some minor 
comments from me:

*ServerUtils#getScmDbDir*
{noformat}
LOG.warn("{} is not configured. We recommend adding this setting. " + 
+"Falling back to {} instead.",
+HddsConfigKeys.OZONE_METADATA_DIRS, ScmConfigKeys.OZONE_SCM_DB_DIRS);
{noformat}
The parameters should be {{OZONE_SCM_DB_DIRS}} first, then 
{{OZONE_METADATA_DIRS}}.
 *The same problem exists in {{OmUtils#getOmDbDir}}*; please take a look.

*OmUtils.java*
 The Logger instance is obtained incorrectly in this class.

 *Some common problems in the UTs (TestHddsServerUtils and TestOmUtils)*:
 * We should delete the meta dir after each test.
 * It would be better to add a corner case where no meta dir (including the 
om/scm dir) is configured, and to verify the thrown exception.

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fall back to {{ozone.metadata.dirs}}.






[jira] [Commented] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668076#comment-16668076
 ] 

Yiqun Lin commented on HDDS-759:


+1 for the idea, [~arpitagarwal]. The patch looks almost good to me. Some minor 
comments from me:

*ServerUtils#getScmDbDir*
{noformat}
LOG.warn("{} is not configured. We recommend adding this setting. " + 
+"Falling back to {} instead.",
+HddsConfigKeys.OZONE_METADATA_DIRS, ScmConfigKeys.OZONE_SCM_DB_DIRS);
{noformat}
The parameters should be {{OZONE_SCM_DB_DIRS}} first, then 
{{OZONE_METADATA_DIRS}}.
 *The same problem exists in {{OmUtils#getOmDbDir}}*; please take a look.

*OmUtils.java*
 The Logger instance is obtained incorrectly in this class.

 *Some common problems in the UTs (TestHddsServerUtils and TestOmUtils)*:
 * We should delete the meta dir after each test.
 * It would be better to add a corner case where no meta dir (including the 
om/scm dir) is configured, and to verify the thrown exception.
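Restated as a sketch for clarity, using the constants quoted above; this reflects the review suggestions, not the committed change, and is a fragment rather than a complete test:

{code:java}
// Log the missing key first, then the key we fall back to.
LOG.warn("{} is not configured. We recommend adding this setting. "
        + "Falling back to {} instead.",
    ScmConfigKeys.OZONE_SCM_DB_DIRS,
    HddsConfigKeys.OZONE_METADATA_DIRS);

// JUnit's assertEquals takes (expected, actual), so the configured metaDir
// should come first.
assertEquals(metaDir, ServerUtils.getScmDbDir(conf));
{code}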

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fall back to {{ozone.metadata.dirs}}.






[jira] [Commented] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668070#comment-16668070
 ] 

Hudson commented on HDFS-14027:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15337 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15337/])
HDFS-14027. DFSStripedOutputStream should implement both hsync methods. (xiao: 
rev db7e636824a36b90ba1c8e9b2fba1162771700fe)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
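For illustration, a sketch of the shape of the change, assuming the EnumSet variant of hsync should share the striped stream's existing no-op behaviour; the method bodies are illustrative, not the committed patch:

{code:java}
import java.util.EnumSet;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;

// Sketch only: stands in for DFSStripedOutputStream.
public class StripedHsyncSketch {

  public void hsync() {
    // Striped (EC) output does not support sync; already a no-op.
  }

  public void hsync(EnumSet<SyncFlag> syncFlags) {
    // Previously only the no-arg variant was overridden, so callers using the
    // EnumSet variant fell through to the replicated-stream implementation and
    // could produce the block-length mismatch shown in the report below.
    hsync();
  }
}
{code}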


> DFSStripedOutputStream should implement both hsync methods
> --
>
> Key: HDFS-14027
> URL: https://issues.apache.org/jira/browse/HDFS-14027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14027.01.patch, HDFS-14027.02.patch, 
> HDFS-14027.03.patch
>
>
> In an internal Spark investigation, it appears that when 
> [EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
>  writes to an EC file, one may get exceptions when reading, or get odd output. A 
> sample exception is:
> {noformat}
> hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
> head -1
> 18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 don't match block 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
> 110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
> file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
> BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
>   at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
>   at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>   at 
> org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>   at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at 

[jira] [Updated] (HDDS-761) Create S3 subcommand to run S3 related operations

2018-10-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-761:

Issue Type: Sub-task  (was: Task)
Parent: HDDS-434

> Create S3 subcommand to run S3 related operations
> -
>
> Key: HDDS-761
> URL: https://issues.apache.org/jira/browse/HDDS-761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is added to create an S3 subcommand, which will be used for all 
> S3-related operations.






[jira] [Created] (HDDS-761) Create S3 subcommand to run S3 related operations

2018-10-29 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-761:
---

 Summary: Create S3 subcommand to run S3 related operations
 Key: HDDS-761
 URL: https://issues.apache.org/jira/browse/HDDS-761
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is added to create an S3 subcommand, which will be used for all 
S3-related operations.






[jira] [Commented] (HDDS-760) Add asf license to TestCertificateSignRequest

2018-10-29 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668066#comment-16668066
 ] 

Akira Ajisaka commented on HDDS-760:


+1, thanks [~ajayydv].

> Add asf license to TestCertificateSignRequest
> -
>
> Key: HDDS-760
> URL: https://issues.apache.org/jira/browse/HDDS-760
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-760-HDDS-4.00.patch
>
>
> {code}Lines that start with ? in the ASF License  report indicate files 
> that do not have an Apache license header:
>  !? 
> /testptch/hadoop/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java{code}






[jira] [Comment Edited] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-29 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667985#comment-16667985
 ] 

Bharat Viswanadham edited comment on HDDS-659 at 10/30/18 3:09 AM:
---

Modified the code and added test cases for continuation-token.

The patch is dependent on HDDS-659.


was (Author: bharatviswa):
Modified the code, added test cases for continuation-token.

 

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-659.00-WIP.patch, HDDS-659.01.patch, 
> HDDS-659.02.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging by supporting the following headers:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
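For illustration, a minimal client-side sketch of paging with these parameters once they are supported, assuming the standard V2 ListObjects query names; the gateway address, bucket name, and parsing helper are hypothetical:

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ListBucketPagerSketch {
  public static void main(String[] args) throws IOException {
    String endpoint = "http://localhost:9878/example-bucket"; // hypothetical s3g address
    String continuationToken = null;
    do {
      StringBuilder query = new StringBuilder("?list-type=2&max-keys=1000");
      if (continuationToken != null) {
        query.append("&continuation-token=").append(
            URLEncoder.encode(continuationToken, StandardCharsets.UTF_8.name()));
      } else if (args.length > 0) {
        // On the first request, start-after can position the listing instead.
        query.append("&start-after=").append(
            URLEncoder.encode(args[0], StandardCharsets.UTF_8.name()));
      }
      HttpURLConnection conn =
          (HttpURLConnection) new URL(endpoint + query).openConnection();
      conn.setRequestMethod("GET");
      // A real client would parse NextContinuationToken / IsTruncated out of
      // the XML body; that parsing is omitted to keep the sketch short.
      continuationToken = readNextContinuationToken(conn); // hypothetical helper
    } while (continuationToken != null);
  }

  private static String readNextContinuationToken(HttpURLConnection conn)
      throws IOException {
    conn.getInputStream().close(); // placeholder for real XML parsing
    return null;
  }
}
{code}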






[jira] [Assigned] (HDDS-751) Replace usage of Guava Optional with Java Optional

2018-10-29 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HDDS-751:
--

Assignee: Yiqun Lin

> Replace usage of Guava Optional with Java Optional
> --
>
> Key: HDDS-751
> URL: https://issues.apache.org/jira/browse/HDDS-751
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Critical
>  Labels: newbie
>
> Ozone and HDDS code uses {{com.google.common.base.Optional}} in multiple 
> places.
> Let's replace it with the Java Optional since we only target JDK 8+.
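For illustration, a before/after sketch of the mechanical conversion; the method and variable names are made up:

{code:java}
// Before (Guava):
//   com.google.common.base.Optional<String> leader =
//       com.google.common.base.Optional.fromNullable(getLeaderId());
//   String id = leader.or("unknown");
// After (JDK 8):
import java.util.Optional;

class OptionalMigrationSketch {
  String leaderOrUnknown(String leaderId) {
    Optional<String> leader = Optional.ofNullable(leaderId);
    return leader.orElse("unknown"); // Guava's or(...) becomes orElse(...)
  }
}
{code}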






[jira] [Issue Comment Deleted] (HDDS-742) Handle object list requests (GET bucket) without prefix parameter

2018-10-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-742:

Comment: was deleted

(was: Handled this issue also in HDDS-659 (since making continuation-token work 
requires the complete changes), but HDDS-659 should be applied on top of this 
change.)

> Handle object list requests (GET bucket) without prefix parameter
> -
>
> Key: HDDS-742
> URL: https://issues.apache.org/jira/browse/HDDS-742
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-742.001.patch, HDDS-742.02.patch
>
>
> In the s3 gateway the GET bucket endpoint is already implemented. It can return 
> the available objects based on a given prefix.
> (https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html)
> As defined there, the Delimiter parameter is used to reduce the response by 
> returning only the first-level keys and prefixes (aka directories):
> {code:xml}
> <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>   <Name>example-bucket</Name>
>   <Prefix></Prefix>
>   <KeyCount>2</KeyCount>
>   <MaxKeys>1000</MaxKeys>
>   <Delimiter>/</Delimiter>
>   <IsTruncated>false</IsTruncated>
>   <Contents>
>     <Key>sample.jpg</Key>
>     <LastModified>2011-02-26T01:56:20.000Z</LastModified>
>     <ETag>bf1d737a4d46a19f3bced6905cc8b902</ETag>
>     <Size>142863</Size>
>     <StorageClass>STANDARD</StorageClass>
>   </Contents>
>   <CommonPrefixes>
>     <Prefix>photos/</Prefix>
>   </CommonPrefixes>
> </ListBucketResult>
> {code}
> Here we can have multiple additional objects with the photos/ prefix, but they 
> are not added to the response.
>  
> The main problem in the ozone s3 implementation is that the Delimiter 
> parameter *should be optional.* If the delimiter is missing we should 
> always return all the keys without any common-prefix simplification.
> This is required for the recursive directory listing used by the s3a adapter.






[jira] [Commented] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668050#comment-16668050
 ] 

Hadoop QA commented on HDDS-659:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDDS-659 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-659 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946147/HDDS-659.02.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1556/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-659.00-WIP.patch, HDDS-659.01.patch, 
> HDDS-659.02.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging by supporting the following headers:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}






[jira] [Commented] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668029#comment-16668029
 ] 

Hadoop QA commented on HDDS-754:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-754 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946133/HDDS-754.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 239093c5ebf3 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 496f0ff |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1555/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1555/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> VolumeInfo#getScmUsed throws NPE
> 

[jira] [Commented] (HDDS-760) Add asf license to TestCertificateSignRequest

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668026#comment-16668026
 ] 

Hadoop QA commented on HDDS-760:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
33s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-760 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946139/HDDS-760-HDDS-4.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cdf2a78bc899 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / 9c79c55 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1553/testReport/ |
| Max. process+thread count | 309 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1553/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add asf license to TestCertificateSignRequest
> -
>
> Key: HDDS-760
> URL: https://issues.apache.org/jira/browse/HDDS-760
> 

[jira] [Commented] (HDDS-749) Restructure BlockId class in Ozone

2018-10-29 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668014#comment-16668014
 ] 

Jitendra Nath Pandey commented on HDDS-749:
---

In {{ChunkOutputStream}}, instead of explicitly setting BCS, please update the 
{{BlockID#getFromProtobuf}} method to set it up completely.

> Restructure BlockId class in Ozone
> --
>
> Key: HDDS-749
> URL: https://issues.apache.org/jira/browse/HDDS-749
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-749.000.patch, HDDS-749.001.patch
>
>
> As a part of block allocation in SCM, SCM will return a containerBlockId 
> which consists of a containerId and a localId. Once OM gets the allocated 
> blocks from SCM, it will create a BlockId object which consists of a 
> containerID, a localId and a BlockCommitSequenceId.






[jira] [Updated] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-29 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-14027:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk through branch-3.0. Manually resolved a trivial import 
conflict in the test class in 3.1.

Thanks Imran for investigating this from the Spark side, and Daniel for the review!

> DFSStripedOutputStream should implement both hsync methods
> --
>
> Key: HDFS-14027
> URL: https://issues.apache.org/jira/browse/HDFS-14027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-14027.01.patch, HDFS-14027.02.patch, 
> HDFS-14027.03.patch
>
>
> In an internal Spark investigation, it appears that when 
> [EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
>  writes to an EC file, one may get exceptions when reading, or get odd output. A 
> sample exception is:
> {noformat}
> hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
> head -1
> 18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 don't match block 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
> 110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
> file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
> BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
>   at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
>   at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>   at 
> org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>   at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> 18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 
> for blockBP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085
> 

[jira] [Updated] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-29 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-14027:
-
Fix Version/s: 3.2.1
   3.3.0
   3.1.2
   3.0.4

> DFSStripedOutputStream should implement both hsync methods
> --
>
> Key: HDFS-14027
> URL: https://issues.apache.org/jira/browse/HDFS-14027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14027.01.patch, HDFS-14027.02.patch, 
> HDFS-14027.03.patch
>
>
> In an internal Spark investigation, it appears that when 
> [EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
>  writes to an EC file, one may get exceptions when reading, or get odd output. A 
> sample exception is:
> {noformat}
> hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
> head -1
> 18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 don't match block 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
> 110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
> file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
> BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
>   at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
>   at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>   at 
> org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>   at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> 18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 
> for blockBP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> 

[jira] [Updated] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-759:
---
Attachment: HDDS-759.01.patch

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-759:
---
Attachment: (was: HDDS-759.01.patch)

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-759:
---
Attachment: HDDS-759.01.patch

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-759:
---
Attachment: (was: HDDS-759.01.patch)

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-759:
---
Comment: was deleted

(was: v02 patch:
- Adds targeted unit tests.
- Improves log messages.)

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-759:
---
Attachment: (was: HDDS-759.02.patch)

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667988#comment-16667988
 ] 

Hadoop QA commented on HDDS-754:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-754 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946133/HDDS-754.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b2ed036b164b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 496f0ff |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1551/testReport/ |
| Max. process+thread count | 308 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1551/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> VolumeInfo#getScmUsed throws NPE
> 

[jira] [Commented] (HDDS-742) Handle object list requests (GET bucket) without prefix parameter

2018-10-29 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667986#comment-16667986
 ] 

Bharat Viswanadham commented on HDDS-742:
-

Handled this issue in HDDS-659 as well (making continuation-token work requires the 
complete set of changes), but HDDS-659 should be applied on top of this change. 

 

> Handle object list requests (GET bucket) without prefix parameter
> -
>
> Key: HDDS-742
> URL: https://issues.apache.org/jira/browse/HDDS-742
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-742.001.patch, HDDS-742.02.patch
>
>
> In the s3 gateway the GET bucket endpoint is already implemented. It can return 
> the available objects based on a given prefix 
> ([https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html]).
> As defined there, the Delimiter parameter is used to reduce the response by 
> returning only the first-level keys and prefixes (aka directories):
> {code:java}
> <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>   <Name>example-bucket</Name>
>   <Prefix></Prefix>
>   <KeyCount>2</KeyCount>
>   <MaxKeys>1000</MaxKeys>
>   <Delimiter>/</Delimiter>
>   <IsTruncated>false</IsTruncated>
>   <Contents>
>     <Key>sample.jpg</Key>
>     <LastModified>2011-02-26T01:56:20.000Z</LastModified>
>     <ETag>"bf1d737a4d46a19f3bced6905cc8b902"</ETag>
>     <Size>142863</Size>
>     <StorageClass>STANDARD</StorageClass>
>   </Contents>
>   <CommonPrefixes>
>     <Prefix>photos/</Prefix>
>   </CommonPrefixes>
> </ListBucketResult>
> {code}
> Here we can have multiple additional objects with photos/ prefix but they are 
> not added to the response.
>  
> The main problem in the ozone s3 implementation is that the Delimiter 
> parameter *should be optional*. If the delimiter is missing we should 
> always return all the keys without any common-prefix simplification (sketched below).
> This is required for the recursive directory listing used by the s3a adapter.
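
A minimal, illustrative sketch of treating the delimiter as optional (not the Ozone s3gateway code; the class and variable names are invented): with a delimiter, keys sharing a first-level prefix collapse into common prefixes, and without one every key is returned directly.

{code:java}
// Illustrative only: groups keys into Contents vs. CommonPrefixes.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

public class DelimiterGrouping {
  public static void main(String[] args) {
    List<String> keys = Arrays.asList("sample.jpg", "photos/2011/jan.jpg", "photos/2011/feb.jpg");
    String delimiter = null;  // optional: may be absent from the request

    List<String> contents = new ArrayList<>();
    SortedSet<String> commonPrefixes = new TreeSet<>();
    for (String key : keys) {
      int idx = (delimiter == null) ? -1 : key.indexOf(delimiter);
      if (idx >= 0) {
        // Collapse everything below the first delimiter into one common prefix.
        commonPrefixes.add(key.substring(0, idx + delimiter.length()));
      } else {
        // No delimiter (or no match): the key is returned as-is.
        contents.add(key);
      }
    }
    System.out.println("Contents: " + contents);
    System.out.println("CommonPrefixes: " + commonPrefixes);
  }
}
{code}

With delimiter = "/" this prints the grouped form shown in the XML above; with delimiter = null all three keys land in Contents, which is the behaviour the recursive s3a listing needs.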



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-29 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667985#comment-16667985
 ] 

Bharat Viswanadham commented on HDDS-659:
-

Modified the code, added test cases for continuation-token.

 

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-659.00-WIP.patch, HDDS-659.01.patch, 
> HDDS-659.02.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging via the following request parameters (a paging sketch follows below):
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  
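
A minimal sketch of the paging semantics described above, under the assumption that the continuation token is simply the last key of the previous page; the names and the tie-breaking between start-after and continuation-token are illustrative, not the actual s3gateway implementation.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.NavigableSet;
import java.util.TreeSet;

public class ListPaging {
  // Returns at most maxKeys keys strictly after the marker.
  static List<String> page(NavigableSet<String> allKeys, String startAfter,
                           String continuationToken, int maxKeys) {
    // Assumption: when both are present, continuation-token wins (as in S3 ListObjectsV2).
    String marker = (continuationToken != null) ? continuationToken : startAfter;
    Iterable<String> candidates =
        (marker == null) ? allKeys : allKeys.tailSet(marker, /* inclusive = */ false);
    List<String> result = new ArrayList<>(maxKeys);
    for (String key : candidates) {
      if (result.size() == maxKeys) {
        break;  // a real response would also set IsTruncated and NextContinuationToken
      }
      result.add(key);
    }
    return result;
  }

  public static void main(String[] args) {
    NavigableSet<String> keys = new TreeSet<>(Arrays.asList("a.txt", "b.txt", "c.txt", "d.txt"));
    System.out.println(page(keys, null, null, 2));     // [a.txt, b.txt]
    System.out.println(page(keys, null, "b.txt", 2));  // [c.txt, d.txt]
  }
}
{code}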



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-659:

Attachment: HDDS-659.02.patch

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-659.00-WIP.patch, HDDS-659.01.patch, 
> HDDS-659.02.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging via the following request parameters:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-742) Handle object list requests (GET bucket) without prefix parameter

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667978#comment-16667978
 ] 

Hadoop QA commented on HDDS-742:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-ozone/s3gateway: The patch generated 1 
new + 1 unchanged - 1 fixed = 2 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-742 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946127/HDDS-742.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 480d3d0db043 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 496f0ff |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1550/artifact/out/diff-checkstyle-hadoop-ozone_s3gateway.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1550/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1550/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Updated] (HDFS-13719) Docs around dfs.image.transfer.timeout are misleading

2018-10-29 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13719:
-
   Labels:   (was: hdfs)
Fix Version/s: 2.9.2

Cherry-picked this to branch-2.9.

> Docs around dfs.image.transfer.timeout are misleading
> -
>
> Key: HDFS-13719
> URL: https://issues.apache.org/jira/browse/HDFS-13719
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13719-branch-2.001.patch, HDFS-13719.001.patch, 
> HDFS-13719.002.patch
>
>
> The Jira https://issues.apache.org/jira/browse/HDFS-1490 added the parameter 
> dfs.image.transfer.timeout to HDFS. From the patch (and checking the current 
> code), we can see this parameter governs a socket timeout on a 
> java.net.HttpURLConnection object:
> {code:java}
> +if (timeout <= 0) {
> +  // Set the ping interval as timeout
> +  Configuration conf = new HdfsConfiguration();
> +  timeout = conf.getInt(DFSConfigKeys.DFS_IMAGE_TRANSFER_TIMEOUT_KEY,
> +  DFSConfigKeys.DFS_IMAGE_TRANSFER_TIMEOUT_DEFAULT);
> +}
> +
> +if (timeout > 0) {
> +  connection.setConnectTimeout(timeout);
> +  connection.setReadTimeout(timeout);
> +}
> +
> {code}
> In the above 'connection' is a java.net.HttpURLConnection.
> There is a general belief in the community that dfs.image.transfer.timeout 
> is the time within which the entire image must transfer; however, that does not 
> appear to be the case. The timeout is actually the max time the client will 
> block on the socket before giving up if it cannot get data to read. I guess 
> the idea here is to protect the client from hanging forever if the server 
> hangs.
> The docs in hdfs-site.xml are partly what causes this confusion, as they are 
> very misleading:
> {code:xml}
> <property>
>   <name>dfs.image.transfer.timeout</name>
>   <value>60000</value>
>   <description>
>         Socket timeout for image transfer in milliseconds. This timeout and the related
>         dfs.image.transfer.bandwidthPerSec parameter should be configured such
>         that normal image transfer can complete successfully.
>         This timeout prevents client hangs when the sender fails during
>         image transfer. This is socket timeout during image tranfer.
>   </description>
> </property>
> {code}
> The start and end of the statement is accurate, but the part "This timeout 
> and the related dfs.image.transfer.bandwidthPerSec parameter should be 
> configured such that normal image transfer can complete successfully." is 
> misleading. There is almost never a reason to change the above in conjunction 
> with the bandwidth setting.
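
To make the distinction concrete, here is a small standalone sketch (not HDFS code; the URL and the 60000 ms value are placeholders) showing what setReadTimeout on a java.net.HttpURLConnection actually bounds: each individual read, not the whole download.

{code:java}
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ReadTimeoutDemo {
  public static void main(String[] args) throws Exception {
    HttpURLConnection conn =
        (HttpURLConnection) new URL("http://namenode.example:50070/imagetransfer").openConnection();
    conn.setConnectTimeout(60000);  // max wait to establish the TCP connection
    conn.setReadTimeout(60000);     // max wait for any single read() to return data

    long total = 0;
    byte[] buf = new byte[8192];
    try (InputStream in = conn.getInputStream()) {
      int n;
      // A multi-minute image transfer still succeeds as long as no single read
      // stalls for more than 60 seconds; only a hung sender trips the timeout.
      while ((n = in.read(buf)) != -1) {
        total += n;
      }
    }
    System.out.println("Transferred " + total + " bytes");
  }
}
{code}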



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13719) Docs around dfs.image.transfer.timeout are misleading

2018-10-29 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13719:
-
Component/s: documentation

> Docs around dfs.image.transfer.timeout are misleading
> -
>
> Key: HDFS-13719
> URL: https://issues.apache.org/jira/browse/HDFS-13719
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
>  Labels: hdfs
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13719-branch-2.001.patch, HDFS-13719.001.patch, 
> HDFS-13719.002.patch
>
>
> The Jira https://issues.apache.org/jira/browse/HDFS-1490 added the parameter 
> dfs.image.transfer.timeout to HDFS. From the patch (and checking the current 
> code), we can see this parameter governs a socket timeout on a 
> java.net.HttpURLConnection object:
> {code:java}
> +if (timeout <= 0) {
> +  // Set the ping interval as timeout
> +  Configuration conf = new HdfsConfiguration();
> +  timeout = conf.getInt(DFSConfigKeys.DFS_IMAGE_TRANSFER_TIMEOUT_KEY,
> +  DFSConfigKeys.DFS_IMAGE_TRANSFER_TIMEOUT_DEFAULT);
> +}
> +
> +if (timeout > 0) {
> +  connection.setConnectTimeout(timeout);
> +  connection.setReadTimeout(timeout);
> +}
> +
> {code}
> In the above 'connection' is a java.net.HttpURLConnection.
> There is a general belief in the community that dfs.image.transfer.timeout 
> is the time within which the entire image must transfer; however, that does not 
> appear to be the case. The timeout is actually the max time the client will 
> block on the socket before giving up if it cannot get data to read. I guess 
> the idea here is to protect the client from hanging forever if the server 
> hangs.
> The docs in hdfs-site.xml are partly what causes this confusion, as they are 
> very misleading:
> {code:xml}
> <property>
>   <name>dfs.image.transfer.timeout</name>
>   <value>60000</value>
>   <description>
>         Socket timeout for image transfer in milliseconds. This timeout and the related
>         dfs.image.transfer.bandwidthPerSec parameter should be configured such
>         that normal image transfer can complete successfully.
>         This timeout prevents client hangs when the sender fails during
>         image transfer. This is socket timeout during image tranfer.
>   </description>
> </property>
> {code}
> The start and end of the statement is accurate, but the part "This timeout 
> and the related dfs.image.transfer.bandwidthPerSec parameter should be 
> configured such that normal image transfer can complete successfully." is 
> misleading. There is almost never a reason to change the above in conjunction 
> with the bandwidth setting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-753) Fix failure in TestSecureOzoneCluster

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667971#comment-16667971
 ] 

Hadoop QA commented on HDDS-753:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
41s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
42s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
17s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 30s{color} | {color:orange} root: The patch generated 2 new + 7 unchanged - 
0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 57s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
40s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-753 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946113/HDDS-753-HDDS-4.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  

[jira] [Commented] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667961#comment-16667961
 ] 

Arpit Agarwal commented on HDDS-759:


v02 patch:
- Adds targeted unit tests.
- Improves log messages.

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch, HDDS-759.02.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-759:
---
Attachment: HDDS-759.02.patch

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch, HDDS-759.02.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-620) ozone.scm.client.address should be an optional setting

2018-10-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667959#comment-16667959
 ] 

Hudson commented on HDDS-620:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15336 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15336/])
HDDS-620. ozone.scm.client.address should be an optional setting. (arp: rev 
496f0ffe9017b11d0d7c071bad259d132687c656)
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestHddsServerUtils.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
* (edit) 
hadoop-ozone/client/src/test/java/org/apache/hadoop/ozone/client/TestHddsClientUtils.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java


> ozone.scm.client.address should be an optional setting
> --
>
> Key: HDDS-620
> URL: https://issues.apache.org/jira/browse/HDDS-620
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-620.001.patch, HDDS-620.002.patch, 
> HDDS-620.003.patch
>
>
> {{ozone.scm.client.address}} should be an optional setting. Clients can 
> fallback to {{ozone.scm.names}} if the former is unspecified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667953#comment-16667953
 ] 

Hadoop QA commented on HDFS-14035:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
50s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14035 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946126/HDFS-14035-HDFS-12943.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ea666b9e9d70 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / ddca0cf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25389/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25389/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: 

[jira] [Comment Edited] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-29 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667938#comment-16667938
 ] 

Ajay Kumar edited comment on HDDS-580 at 10/30/18 12:55 AM:


[~anu] thanks for offline discussion. Patch v11 moves component related code to 
HddsKeyHandler. Also cleaned code related to keypair rotation.


was (Author: ajayydv):
[~anu] thanks for offline discussion. Patch v11 moves component related code to 
HddsKeyHandler.

> Bootstrap OM/SCM with private/public key pair
> -
>
> Key: HDDS-580
> URL: https://issues.apache.org/jira/browse/HDDS-580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-580.00.patch, HDDS-580-HDDS-4.00.patch, 
> HDDS-580-HDDS-4.01.patch, HDDS-580-HDDS-4.02.patch, HDDS-580-HDDS-4.03.patch, 
> HDDS-580-HDDS-4.04.patch, HDDS-580-HDDS-4.05.patch, HDDS-580-HDDS-4.06.patch, 
> HDDS-580-HDDS-4.07.patch, HDDS-580-HDDS-4.08.patch, HDDS-580-HDDS-4.09.patch, 
> HDDS-580-HDDS-4.10.patch, HDDS-580-HDDS-4.11.patch
>
>
> We will need to add an API that leverages the key generator from HDDS-100 to 
> generate a public/private key pair for OM/SCM; this will be called by the 
> scm/om admin cli with the "-init" cmd (a minimal sketch follows below).
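
For context, a minimal sketch of what such an init-time bootstrap has to do, using only the JDK; the directory, file names and the RSA/2048 choice are assumptions for illustration, not the HDDS-100/HDDS-580 implementation.

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;

public class BootstrapKeys {
  public static void main(String[] args) throws Exception {
    // Generate the component's key pair (e.g. during an "scm --init" / "om --init" run).
    KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
    gen.initialize(2048, new SecureRandom());
    KeyPair pair = gen.generateKeyPair();

    // Persist both halves under the component's metadata directory
    // (path and file names are illustrative).
    Path dir = Files.createDirectories(Paths.get("/var/lib/ozone/scm/keys"));
    Files.write(dir.resolve("private.key"), pair.getPrivate().getEncoded());  // PKCS#8, DER
    Files.write(dir.resolve("public.key"), pair.getPublic().getEncoded());    // X.509 SubjectPublicKeyInfo, DER
    System.out.println("Wrote key pair to " + dir);
  }
}
{code}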



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-754:

Attachment: (was: HDDS-754.003.patch)

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-754:

Attachment: HDDS-754.003.patch

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch, 
> HDDS-754.003.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-760) Add asf license to TestCertificateSignRequest

2018-10-29 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-760:

Description: 
{code}Lines that start with ? in the ASF License  report indicate files 
that do not have an Apache license header:
 !? 
/testptch/hadoop/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java{code}

> Add asf license to TestCertificateSignRequest
> -
>
> Key: HDDS-760
> URL: https://issues.apache.org/jira/browse/HDDS-760
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-760-HDDS-4.00.patch
>
>
> {code}Lines that start with ? in the ASF License  report indicate files 
> that do not have an Apache license header:
>  !? 
> /testptch/hadoop/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java{code}
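
For reference, the header the ASF license check looks for is the standard block carried at the top of Hadoop Java sources; presumably the patch prepends something equivalent to TestCertificateSignRequest.java.

{code:java}
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
{code}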



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-760) Add asf license to TestCertificateSignRequest

2018-10-29 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-760:

Status: Patch Available  (was: Open)

> Add asf license to TestCertificateSignRequest
> -
>
> Key: HDDS-760
> URL: https://issues.apache.org/jira/browse/HDDS-760
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-760-HDDS-4.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-760) Add asf license to TestCertificateSignRequest

2018-10-29 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-760:

Attachment: HDDS-760-HDDS-4.00.patch

> Add asf license to TestCertificateSignRequest
> -
>
> Key: HDDS-760
> URL: https://issues.apache.org/jira/browse/HDDS-760
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-760-HDDS-4.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-760) Add asf license to TestCertificateSignRequest

2018-10-29 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-760:
---

 Summary: Add asf license to TestCertificateSignRequest
 Key: HDDS-760
 URL: https://issues.apache.org/jira/browse/HDDS-760
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar
Assignee: Ajay Kumar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-29 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667938#comment-16667938
 ] 

Ajay Kumar commented on HDDS-580:
-

[~anu] thanks for offline discussion. Patch v11 moves component related code to 
HddsKeyHandler.

> Bootstrap OM/SCM with private/public key pair
> -
>
> Key: HDDS-580
> URL: https://issues.apache.org/jira/browse/HDDS-580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-580.00.patch, HDDS-580-HDDS-4.00.patch, 
> HDDS-580-HDDS-4.01.patch, HDDS-580-HDDS-4.02.patch, HDDS-580-HDDS-4.03.patch, 
> HDDS-580-HDDS-4.04.patch, HDDS-580-HDDS-4.05.patch, HDDS-580-HDDS-4.06.patch, 
> HDDS-580-HDDS-4.07.patch, HDDS-580-HDDS-4.08.patch, HDDS-580-HDDS-4.09.patch, 
> HDDS-580-HDDS-4.10.patch, HDDS-580-HDDS-4.11.patch
>
>
> We will need to add an API that leverages the key generator from HDDS-100 to 
> generate a public/private key pair for OM/SCM; this will be called by the 
> scm/om admin cli with the "-init" cmd.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-759:
---
Description: 
Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing metadata. 

Looking more closely, it appears that SCM and OM have no option to choose 
separate locations. We should provide custom config settings. For most 
production clusters, admins will want to carefully choose where they place OM 
and SCM metadata, similar to how they choose locations for NN metadata.

To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.

  was:
Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing metadata.

Looking more closely, it appears that SCM and OM have no option to choose 
separate locations. We should provide custom config settings.

To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.


> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fallback to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667927#comment-16667927
 ] 

Arpit Agarwal edited comment on HDDS-759 at 10/30/18 12:33 AM:
---

I strongly dislike adding more config settings but in this case it is necessary 
to allow customization for performance reasons.




was (Author: arpitagarwal):
I strongly dislike adding more config settings but in this case it is necessary 
to allow customization for performance reasons.

For most production clusters, admins will want to carefully choose where they 
place OM and SCM metadata, similar to how they choose locations for NN metadata.

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing metadata.
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings.
> To avoid disruption, we can have them fall back to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667934#comment-16667934
 ] 

Arpit Agarwal commented on HDDS-759:


Preliminary v01 patch:
- Moved the OZONE_METADATA_DIRS setting to HddsConfigKeys, since it is also read 
by HDDS services. This triggered cascading changes to numerous unit tests, hence 
the large patch size.
- Added ozone.scm.db.dirs and ozone.om.db.dirs settings for SCM and OM 
respectively. Both fall back to OZONE_METADATA_DIRS (see the sketch below).
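To make the fallback behavior concrete, a small sketch of how such a lookup 
could work follows. Only the key names mentioned in this comment are real; the 
class and method names are illustrative, not code from the attached patch.

{code:java}
// Sketch (assumption): resolve ozone.scm.db.dirs, falling back to
// ozone.metadata.dirs when the new key is unset. Class and method names
// are illustrative.
import org.apache.hadoop.conf.Configuration;

public final class ScmDbDirResolver {
  public static final String OZONE_SCM_DB_DIRS = "ozone.scm.db.dirs";
  public static final String OZONE_METADATA_DIRS = "ozone.metadata.dirs";

  public static String getScmDbDirs(Configuration conf) {
    String dirs = conf.getTrimmed(OZONE_SCM_DB_DIRS);
    if (dirs == null || dirs.isEmpty()) {
      // Fall back to the shared metadata location so existing clusters that
      // only configure ozone.metadata.dirs keep working unchanged.
      dirs = conf.getTrimmed(OZONE_METADATA_DIRS);
    }
    return dirs;
  }
}
{code}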

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing metadata.
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings.
> To avoid disruption, we can have them fall back to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-29 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-580:

Attachment: HDDS-580-HDDS-4.11.patch

> Bootstrap OM/SCM with private/public key pair
> -
>
> Key: HDDS-580
> URL: https://issues.apache.org/jira/browse/HDDS-580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-580.00.patch, HDDS-580-HDDS-4.00.patch, 
> HDDS-580-HDDS-4.01.patch, HDDS-580-HDDS-4.02.patch, HDDS-580-HDDS-4.03.patch, 
> HDDS-580-HDDS-4.04.patch, HDDS-580-HDDS-4.05.patch, HDDS-580-HDDS-4.06.patch, 
> HDDS-580-HDDS-4.07.patch, HDDS-580-HDDS-4.08.patch, HDDS-580-HDDS-4.09.patch, 
> HDDS-580-HDDS-4.10.patch, HDDS-580-HDDS-4.11.patch
>
>
> We will need to add an API that leverages the key generator from HDDS-100 to 
> generate a public/private key pair for OM/SCM; this will be called by the 
> scm/om admin cli with the "-init" cmd.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-759:
---
Attachment: HDDS-759.01.patch

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing metadata.
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings.
> To avoid disruption, we can have them fall back to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667933#comment-16667933
 ] 

Arpit Agarwal commented on HDDS-754:


Thanks [~hanishakoneru]. Can you also add the same check to 
VolumeInfo.getAvailable?

+1 with that added.

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667930#comment-16667930
 ] 

Hanisha Koneru commented on HDDS-754:
-

Thanks [~anu].
We might still end up with a NullPointerException with patch v01 if the datanode 
shuts down between the "isRunning()" check on the state machine and the 
"getReport()" call. Thanks [~arpitagarwal] for pointing this out.

Posted patch v02 to throw an IOException if getReport() is called after the 
usage thread has been shut down.
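A rough sketch of the kind of guard described here, under the assumption that 
the usage object is cleared when the usage thread shuts down; the field, type 
and message names are illustrative rather than copied from patch v02.

{code:java}
// Sketch (assumption): fail with an IOException instead of an NPE when the
// usage thread has already been shut down. Names are illustrative.
import java.io.IOException;

class VolumeInfoSketch {
  private volatile SpaceUsageSource usage;  // set to null by shutdownUsageThread()

  long getScmUsed() throws IOException {
    SpaceUsageSource current = usage;
    if (current == null) {
      throw new IOException("Volume usage thread has been shut down; "
          + "usage information is no longer available.");
    }
    return current.getUsedSpace();
  }

  interface SpaceUsageSource {
    long getUsedSpace();
  }
}
{code}

The same guard would apply to getAvailable(), as requested earlier in this 
thread.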

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-749) Restructure BlockId class in Ozone

2018-10-29 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667926#comment-16667926
 ] 

Shashikant Banerjee commented on HDDS-749:
--

Patch v1 fixes the related test failure and checkstyle issue.

> Restructure BlockId class in Ozone
> --
>
> Key: HDDS-749
> URL: https://issues.apache.org/jira/browse/HDDS-749
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-749.000.patch, HDDS-749.001.patch
>
>
> As a part of block allocation in SCM, SCM will return a containerBlockId, 
> which consists of a containerId and a localId. Once OM gets the allocated 
> blocks from SCM, it will create a BlockId object, which consists of a 
> containerID, a localId and a BlockCommitSequenceId.
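Read literally, the restructuring could look roughly like the simplified value 
classes below; they are illustrative only and not the actual Ozone classes in 
the attached patches.

{code:java}
// Sketch (assumption): the two-level block identity described above.
// SCM hands out (containerId, localId); OM adds the commit sequence id.
final class ContainerBlockIdSketch {
  private final long containerId;
  private final long localId;

  ContainerBlockIdSketch(long containerId, long localId) {
    this.containerId = containerId;
    this.localId = localId;
  }

  long getContainerId() { return containerId; }
  long getLocalId()     { return localId; }
}

final class BlockIdSketch {
  private final ContainerBlockIdSketch containerBlockId;
  private final long blockCommitSequenceId;

  BlockIdSketch(ContainerBlockIdSketch containerBlockId, long bcsId) {
    this.containerBlockId = containerBlockId;
    this.blockCommitSequenceId = bcsId;
  }

  ContainerBlockIdSketch getContainerBlockId() { return containerBlockId; }
  long getBlockCommitSequenceId()              { return blockCommitSequenceId; }
}
{code}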



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-759:
--

Assignee: Arpit Agarwal

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing metadata.
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings.
> To avoid disruption, we can have them fall back to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667927#comment-16667927
 ] 

Arpit Agarwal commented on HDDS-759:


I strongly dislike adding more config settings but in this case it is necessary 
to allow customization for performance reasons.

For most production clusters, admins will want to carefully choose where they 
place OM and SCM metadata, similar to how they choose locations for NN metadata.

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing metadata.
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings.
> To avoid disruption, we can have them fall back to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-759 started by Arpit Agarwal.
--
> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing metadata.
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings.
> To avoid disruption, we can have them fall back to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-749) Restructure BlockId class in Ozone

2018-10-29 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-749:
-
Attachment: HDDS-749.001.patch

> Restructure BlockId class in Ozone
> --
>
> Key: HDDS-749
> URL: https://issues.apache.org/jira/browse/HDDS-749
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-749.000.patch, HDDS-749.001.patch
>
>
> As a part of block allocation in SCM, SCM will return a containerBlockId, 
> which consists of a containerId and a localId. Once OM gets the allocated 
> blocks from SCM, it will create a BlockId object, which consists of a 
> containerID, a localId and a BlockCommitSequenceId.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-754:

Attachment: HDDS-754.002.patch

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-29 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-759:
--

 Summary: Create config settings for SCM and OM DB directories
 Key: HDDS-759
 URL: https://issues.apache.org/jira/browse/HDDS-759
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing metadata.

Looking more closely, it appears that SCM and OM have no option to choose 
separate locations. We should provide custom config settings.

To avoid disruption, we can have them fall back to {{ozone.metadata.dirs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-620) ozone.scm.client.address should be an optional setting

2018-10-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-620:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~jnp] for the review. Thanks [~candychencan] for the initial patch.

I've committed this to trunk and ozone-0.3.

> ozone.scm.client.address should be an optional setting
> --
>
> Key: HDDS-620
> URL: https://issues.apache.org/jira/browse/HDDS-620
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-620.001.patch, HDDS-620.002.patch, 
> HDDS-620.003.patch
>
>
> {{ozone.scm.client.address}} should be an optional setting. Clients can 
> fall back to {{ozone.scm.names}} if the former is unspecified.
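As an illustration of the fallback just described, a sketch of the client-side 
resolution is below. Only the two configuration keys come from the issue; the 
helper class and the default port constant are assumptions, not the committed 
code.

{code:java}
// Sketch (assumption): resolve the SCM client address, falling back to the
// first entry of ozone.scm.names when ozone.scm.client.address is unset.
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetUtils;

public final class ScmClientAddressSketch {
  static final String OZONE_SCM_CLIENT_ADDRESS_KEY = "ozone.scm.client.address";
  static final String OZONE_SCM_NAMES = "ozone.scm.names";
  static final int DEFAULT_CLIENT_PORT = 9860;  // illustrative default

  static InetSocketAddress resolve(Configuration conf) {
    String addr = conf.getTrimmed(OZONE_SCM_CLIENT_ADDRESS_KEY);
    if (addr == null || addr.isEmpty()) {
      // Fall back to ozone.scm.names.
      String[] names = conf.getTrimmedStrings(OZONE_SCM_NAMES);
      if (names.length == 0) {
        throw new IllegalArgumentException(
            "Neither ozone.scm.client.address nor ozone.scm.names is set.");
      }
      addr = names[0];
    }
    return NetUtils.createSocketAddr(addr, DEFAULT_CLIENT_PORT);
  }
}
{code}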



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-29 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667921#comment-16667921
 ] 

Daniel Templeton commented on HDFS-14027:
-

LGTM +1
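For context, DFSOutputStream exposes both hsync() and hsync(EnumSet<SyncFlag>), 
and the title implies the striped stream overrides only one of them. A minimal 
sketch of overriding both is below; the chosen behavior (log and skip the sync) 
is an assumption for illustration, not necessarily what the committed patch 
does.

{code:java}
// Sketch (assumption): override both hsync variants so neither falls through
// to the replicated-stream implementation, which does not apply to
// erasure-coded writes. The logging behavior is illustrative only.
import java.util.EnumSet;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class StripedOutputStreamSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(StripedOutputStreamSketch.class);

  public void hsync() {
    LOG.debug("Enclosing driver does not support hsync for striped streams");
  }

  public void hsync(EnumSet<SyncFlag> syncFlags) {
    // If only hsync() is overridden, callers passing flags reach the
    // base-class behavior, which can corrupt the reported block lengths.
    LOG.debug("Enclosing driver does not support hsync with flags {}", syncFlags);
  }
}
{code}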

> DFSStripedOutputStream should implement both hsync methods
> --
>
> Key: HDFS-14027
> URL: https://issues.apache.org/jira/browse/HDFS-14027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-14027.01.patch, HDFS-14027.02.patch, 
> HDFS-14027.03.patch
>
>
> In an internal Spark investigation, it appears that when 
> [EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
>  writes to an EC file, one may get exceptions when reading, or odd outputs. A 
> sample exception is:
> {noformat}
> hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
> head -1
> 18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 don't match block 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
> 110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
> file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
> BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
>   at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
>   at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>   at 
> org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>   at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> 18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 
> for blockBP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 

[jira] [Commented] (HDFS-13404) RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails

2018-10-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667920#comment-16667920
 ] 

Íñigo Goiri commented on HDFS-13404:


As [~ayushtkn] pointed out in HDFS-13964, this is different.
I see it once in a while; I'm not sure what the reason is, and the log is not 
very descriptive:
https://builds.apache.org/job/PreCommit-HDFS-Build/25388/testReport/org.apache.hadoop.fs.contract.router.web/TestRouterWebHDFSContractAppend/testRenameFileBeingAppended/

> RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails
> --
>
> Key: HDFS-13404
> URL: https://issues.apache.org/jira/browse/HDFS-13404
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: detailed_error.log
>
>
> This is reported by [~elgoiri].
> {noformat}
> java.io.FileNotFoundException: 
> Failed to append to non-existent file /test/test/target for client 127.0.0.1
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:104)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2621)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> ...
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:527)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:1013)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractAppendTest.testRenameFileBeingAppended(AbstractContractAppendTest.java:139)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667919#comment-16667919
 ] 

Íñigo Goiri commented on HDFS-12284:


{{TestRouterWebHDFSContractAppend}} runs locally for me.
I think this is one of the flaky tests we have, as reported in HDFS-13404.

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, 
> HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, 
> HDFS-12284-HDFS-13532.013.patch, HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667910#comment-16667910
 ] 

Anu Engineer commented on HDDS-754:
---

Thanks for fixing this issue. +1, pending Jenkins.

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-754:

Status: Patch Available  (was: Open)

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-620) ozone.scm.client.address should be an optional setting

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667892#comment-16667892
 ] 

Hadoop QA commented on HDDS-620:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m 10s{color} | {color:orange} root: The patch generated 1 new + 1 unchanged - 
0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 9 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-620 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946103/HDDS-620.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e4538f23f278 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 

[jira] [Commented] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-29 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667891#comment-16667891
 ] 

Xiao Chen commented on HDFS-14027:
--

Failed test is HDFS-13975, unrelated to the change here.

> DFSStripedOutputStream should implement both hsync methods
> --
>
> Key: HDFS-14027
> URL: https://issues.apache.org/jira/browse/HDFS-14027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-14027.01.patch, HDFS-14027.02.patch, 
> HDFS-14027.03.patch
>
>
> In an internal Spark investigation, it appears that when 
> [EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
>  writes to an EC file, one may get exceptions when reading, or odd outputs. A 
> sample exception is:
> {noformat}
> hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
> head -1
> 18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 don't match block 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
> 110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
> file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
> BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
>   at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
>   at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>   at 
> org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>   at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> 18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 
> for blockBP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> 

[jira] [Commented] (HDDS-743) S3 multi delete request should return XML header in quiet mode

2018-10-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667889#comment-16667889
 ] 

Hudson commented on HDDS-743:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15335 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15335/])
HDDS-743. S3 multi delete request should return XML header in quiet (bharat: 
rev 3655e573e28eea79e46936d348a852158b2fc48a)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestObjectMultiDelete.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java


> S3 multi delete request should return XML header in quiet mode
> --
>
> Key: HDDS-743
> URL: https://issues.apache.org/jira/browse/HDDS-743
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-743.001.patch
>
>
> Deleting multiple objects by sending an XML message to the bucket?delete 
> endpoint is implemented in HDDS-701 according to the AWS documentation at 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html]
> As the documentation says:
> {quote}By default, the operation uses verbose mode in which the response 
> includes the result of deletion of each key in your request. In quiet mode 
> the response includes only keys where the delete operation encountered an 
> error.{quote}
> Based on this paragraph, in quiet mode (which is an XML element in the input 
> body) we return the XML only in case of errors. Without any error we 
> returned with an *empty body*.
> But while running the s3a unit tests I found that the right response is an 
> empty XML document instead of an empty body (in quiet mode, without any 
> error):
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <DeleteResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"/>{code}
> Some of the s3a unit tests failed because, without an XML response, the 
> parsing was unsuccessful.
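To make the expected quiet-mode behavior concrete, a rough sketch follows; the 
class names stand in for the real s3gateway response classes and are 
assumptions, not the code in the attached patch.

{code:java}
// Sketch (assumption): in quiet mode, return an (almost) empty result
// document rather than an empty body, so XML-parsing clients still succeed.
// DeleteResultSketch stands in for the real JAXB response class.
import java.util.ArrayList;
import java.util.List;

final class MultiDeleteSketch {

  static DeleteResultSketch multiDelete(List<String> keys, boolean quiet) {
    DeleteResultSketch result = new DeleteResultSketch();
    for (String key : keys) {
      boolean deleted = tryDelete(key);
      if (deleted && !quiet) {
        result.deleted.add(key);   // verbose mode echoes every deleted key
      } else if (!deleted) {
        result.errors.add(key);    // errors are reported in both modes
      }
    }
    // Crucially, the result object is returned (and serialized to XML with
    // the S3 namespace) even when both lists are empty in quiet mode.
    return result;
  }

  private static boolean tryDelete(String key) {
    return true;  // placeholder for the real delete call
  }

  static final class DeleteResultSketch {
    final List<String> deleted = new ArrayList<>();
    final List<String> errors = new ArrayList<>();
  }
}
{code}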



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-742) Handle object list requests (GET bucket) without prefix parameter

2018-10-29 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667886#comment-16667886
 ] 

Bharat Viswanadham edited comment on HDDS-742 at 10/29/18 11:38 PM:


Thank You [~elek] for the patch.

 

I have a few comments:
 # Why do we need the below code?
 # 
{code:java}
if (prefix.length() > 0 && !prefix.endsWith(delimiter)
    && relativeKeyName.length() > 0) {
  response.addPrefix(prefix + "/");
  break;
}{code}

I have reused your code and just modified it to avoid calling the length() 
function multiple times. I also added comments for each block, as it is not 
clear from reading the code alone why each condition is needed. (This is just 
for readability.)

Let me know your thoughts on the code. In the latest patch I have not removed 
the above code, as I was not sure why we need the change.


was (Author: bharatviswa):
Thank You [~elek] for the patch.

 

I have a few comments:
 # Why do we need this?
 # 
{code:java}
if (prefix.length() > 0 && !prefix.endsWith(delimiter)
    && relativeKeyName.length() > 0) {
  response.addPrefix(prefix + "/");
  break;
}{code}

I have reused your code and just modified it to avoid calling the length() 
function multiple times. I also added comments for each block, as it is not 
clear from reading the code alone why each condition is needed. (This is just 
for readability.)

Let me know your thoughts on the code. In the latest patch I have not removed 
the above code, as I was not sure why we need the change.

> Handle object list requests (GET bucket) without prefix parameter
> -
>
> Key: HDDS-742
> URL: https://issues.apache.org/jira/browse/HDDS-742
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-742.001.patch, HDDS-742.02.patch
>
>
> In the s3 gateway the GET bucket endpoint is already implemented. It can 
> return the available objects based on a given prefix.
> ([https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html])
> As defined there, the Delimiter parameter is used to reduce the response by 
> returning only the first-level keys and prefixes (i.e. directories):
> {code:java}
> <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>   <Name>example-bucket</Name>
>   <Prefix></Prefix>
>   <KeyCount>2</KeyCount>
>   <MaxKeys>1000</MaxKeys>
>   <Delimiter>/</Delimiter>
>   <IsTruncated>false</IsTruncated>
>   <Contents>
>     <Key>sample.jpg</Key>
>     <LastModified>2011-02-26T01:56:20.000Z</LastModified>
>     <ETag>bf1d737a4d46a19f3bced6905cc8b902</ETag>
>     <Size>142863</Size>
>     <StorageClass>STANDARD</StorageClass>
>   </Contents>
>   <CommonPrefixes>
>     <Prefix>photos/</Prefix>
>   </CommonPrefixes>
> </ListBucketResult>
> {code}
> Here we can have multiple additional objects with the photos/ prefix, but 
> they are not added to the response.
>  
> The main problem in the Ozone s3 implementation is that the Delimiter 
> parameter *should be optional.* If the delimiter is missing, we should 
> always return all the keys without any common-prefix simplification.
> This is required for the recursive directory listing used by the s3a adapter.
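A rough sketch of the listing rule being requested, with hypothetical names 
(the real change is in the attached patches): group keys under a common prefix 
only when a delimiter is supplied, otherwise return every key as-is.

{code:java}
// Sketch (assumption): distribute matching keys between Contents and
// CommonPrefixes; with no delimiter there is no grouping at all.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

final class BucketListingSketch {

  static void list(List<String> keys, String prefix, String delimiter,
      List<String> contents, Set<String> commonPrefixes) {
    for (String key : keys) {
      if (!key.startsWith(prefix)) {
        continue;
      }
      String relative = key.substring(prefix.length());
      if (delimiter == null || delimiter.isEmpty()) {
        contents.add(key);  // no delimiter: return every key, no grouping
        continue;
      }
      int idx = relative.indexOf(delimiter);
      if (idx >= 0) {
        // Everything past the first delimiter collapses into one common prefix.
        commonPrefixes.add(prefix + relative.substring(0, idx + delimiter.length()));
      } else {
        contents.add(key);
      }
    }
  }

  public static void main(String[] args) {
    List<String> contents = new ArrayList<>();
    Set<String> prefixes = new LinkedHashSet<>();
    list(Arrays.asList("sample.jpg", "photos/2006/January/sample.jpg"),
        "", "/", contents, prefixes);
    System.out.println(contents + " " + prefixes);  // [sample.jpg] [photos/]
  }
}
{code}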



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-742) Handle object list requests (GET bucket) without prefix parameter

2018-10-29 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667886#comment-16667886
 ] 

Bharat Viswanadham commented on HDDS-742:
-

Thank You [~elek] for the patch.

 

I have a few comments:
 # Why do we need this?
 # if (prefix.length() > 0 && !prefix.endsWith(delimiter)
    && relativeKeyName.length() > 0) {
      response.addPrefix(prefix + "/");
      break;
    }

I have reused your code and just modified it to avoid calling the length() 
function multiple times. I also added comments for each block, as it is not 
clear from reading the code alone why each condition is needed. (This is just 
for readability.)

Let me know your thoughts on the code. In the latest patch I have not removed 
the above code, as I was not sure why we need the change.

> Handle object list requests (GET bucket) without prefix parameter
> -
>
> Key: HDDS-742
> URL: https://issues.apache.org/jira/browse/HDDS-742
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-742.001.patch, HDDS-742.02.patch
>
>
> In the s3 gateway the GET bucket endpoint is already implemented. It can 
> return the available objects based on a given prefix.
> ([https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html])
> As defined there, the Delimiter parameter is used to reduce the response by 
> returning only the first-level keys and prefixes (i.e. directories):
> {code:java}
> <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>   <Name>example-bucket</Name>
>   <Prefix></Prefix>
>   <KeyCount>2</KeyCount>
>   <MaxKeys>1000</MaxKeys>
>   <Delimiter>/</Delimiter>
>   <IsTruncated>false</IsTruncated>
>   <Contents>
>     <Key>sample.jpg</Key>
>     <LastModified>2011-02-26T01:56:20.000Z</LastModified>
>     <ETag>bf1d737a4d46a19f3bced6905cc8b902</ETag>
>     <Size>142863</Size>
>     <StorageClass>STANDARD</StorageClass>
>   </Contents>
>   <CommonPrefixes>
>     <Prefix>photos/</Prefix>
>   </CommonPrefixes>
> </ListBucketResult>
> {code}
> Here we can have multiple additional objects with the photos/ prefix, but 
> they are not added to the response.
>  
> The main problem in the Ozone s3 implementation is that the Delimiter 
> parameter *should be optional.* If the delimiter is missing, we should 
> always return all the keys without any common-prefix simplification.
> This is required for the recursive directory listing used by the s3a adapter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-742) Handle object list requests (GET bucket) without prefix parameter

2018-10-29 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667886#comment-16667886
 ] 

Bharat Viswanadham edited comment on HDDS-742 at 10/29/18 11:37 PM:


Thank You [~elek] for the patch.

 

I have a few comments:
 # Why do we need this?
 # 
{code:java}
if (prefix.length() > 0 && !prefix.endsWith(delimiter)
    && relativeKeyName.length() > 0) {
  response.addPrefix(prefix + "/");
  break;
}{code}

I have reused your code and just modified it to avoid calling the length() 
function multiple times. I also added comments for each block, as it is not 
clear from reading the code alone why each condition is needed. (This is just 
for readability.)

Let me know your thoughts on the code. In the latest patch I have not removed 
the above code, as I was not sure why we need the change.


was (Author: bharatviswa):
Thank you [~elek] for the patch.

 

I have a few comments:
 # Why do we need this?
 if (prefix.length() > 0 && !prefix.endsWith(delimiter)
     && relativeKeyName.length() > 0) {
   response.addPrefix(prefix + "/");
   break;
 }

I have reused your code and just modified it to avoid calling the length() 
function multiple times. I also added comments for each block, because it is 
not clear from reading the code alone why each condition is needed (this is 
just for readability).

Let me know your thoughts on the code; in the latest patch I have not removed 
the above code, as I was not sure why we need the change.

> Handle object list requests (GET bucket) without prefix parameter
> -
>
> Key: HDDS-742
> URL: https://issues.apache.org/jira/browse/HDDS-742
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-742.001.patch, HDDS-742.02.patch
>
>
> In the s3 gateway the GET bucket endpoint is already implemented. It can 
> return the available objects based on a given prefix.
> ([https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html])
> As defined there, the Delimiter parameter is used to reduce the response by 
> returning only the first-level keys and prefixes (aka directories):
> {code:java}
> <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>   <Name>example-bucket</Name>
>   <Prefix></Prefix>
>   <KeyCount>2</KeyCount>
>   <MaxKeys>1000</MaxKeys>
>   <Delimiter>/</Delimiter>
>   <IsTruncated>false</IsTruncated>
>   <Contents>
>     <Key>sample.jpg</Key>
>     <LastModified>2011-02-26T01:56:20.000Z</LastModified>
>     <ETag>&quot;bf1d737a4d46a19f3bced6905cc8b902&quot;</ETag>
>     <Size>142863</Size>
>     <StorageClass>STANDARD</StorageClass>
>   </Contents>
>   <CommonPrefixes>
>     <Prefix>photos/</Prefix>
>   </CommonPrefixes>
> </ListBucketResult>
> {code}
> Here we can have multiple additional objects with the photos/ prefix, but 
> they are not added to the response.
>  
> The main problem in the ozone s3 implementation is that the Delimiter 
> parameter *should be optional.* In case the delimiter is missing we should 
> always return all the keys without any common-prefix simplification.
> This is required for the recursive directory listing used by the s3a adapter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-620) ozone.scm.client.address should be an optional setting

2018-10-29 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667883#comment-16667883
 ] 

Jitendra Nath Pandey commented on HDDS-620:
---

+1 LGTM.

> ozone.scm.client.address should be an optional setting
> --
>
> Key: HDDS-620
> URL: https://issues.apache.org/jira/browse/HDDS-620
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-620.001.patch, HDDS-620.002.patch, 
> HDDS-620.003.patch
>
>
> {{ozone.scm.client.address}} should be an optional setting. Clients can 
> fall back to {{ozone.scm.names}} if the former is unspecified.
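
For illustration only, a minimal sketch of the proposed fallback (this is not 
the attached patch; where the lookup would actually live is an assumption):
{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class ScmClientAddressFallbackSketch {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Prefer the explicit client address; fall back to ozone.scm.names when
    // ozone.scm.client.address is unspecified.
    String address = conf.getTrimmed("ozone.scm.client.address",
        conf.getTrimmed("ozone.scm.names", ""));
    System.out.println("SCM address used by the client: " + address);
  }
}
{code}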



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-742) Handle object list requests (GET bucket) without prefix parameter

2018-10-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-742:

Attachment: HDDS-742.02.patch

> Handle object list requests (GET bucket) without prefix parameter
> -
>
> Key: HDDS-742
> URL: https://issues.apache.org/jira/browse/HDDS-742
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-742.001.patch, HDDS-742.02.patch
>
>
> In the s3 gateway the GET bucket endpoint is already implemented. It can 
> return the available objects based on a given prefix.
> ([https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html])
> As defined there, the Delimiter parameter is used to reduce the response by 
> returning only the first-level keys and prefixes (aka directories):
> {code:java}
> <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>   <Name>example-bucket</Name>
>   <Prefix></Prefix>
>   <KeyCount>2</KeyCount>
>   <MaxKeys>1000</MaxKeys>
>   <Delimiter>/</Delimiter>
>   <IsTruncated>false</IsTruncated>
>   <Contents>
>     <Key>sample.jpg</Key>
>     <LastModified>2011-02-26T01:56:20.000Z</LastModified>
>     <ETag>&quot;bf1d737a4d46a19f3bced6905cc8b902&quot;</ETag>
>     <Size>142863</Size>
>     <StorageClass>STANDARD</StorageClass>
>   </Contents>
>   <CommonPrefixes>
>     <Prefix>photos/</Prefix>
>   </CommonPrefixes>
> </ListBucketResult>
> {code}
> Here we can have multiple additional objects with the photos/ prefix, but 
> they are not added to the response.
>  
> The main problem in the ozone s3 implementation is that the Delimiter 
> parameter *should be optional.* In case the delimiter is missing we should 
> always return all the keys without any common-prefix simplification.
> This is required for the recursive directory listing used by the s3a adapter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-29 Thread Yongjun Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-14015:
-
Comment: was deleted

(was: I don't see an API provided by JNI to get the current thread id, but I 
saw one here: 

[https://stackoverflow.com/questions/11224394/obtaining-the-thread-id-for-java-threads-in-linux]

If it's too much hassle to include in this jira, please feel free to postpone 
that to a new jira.

Thanks.

 

 )

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch, HDFS-14015.005.patch, 
> HDFS-14015.006.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-10-29 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Attachment: HDFS-14035-HDFS-12943.001.patch

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-10-29 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667874#comment-16667874
 ] 

Chen Liang commented on HDFS-14035:
---

Attached the wrong patch file...reattached

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-29 Thread Yongjun Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667873#comment-16667873
 ] 

Yongjun Zhang commented on HDFS-14015:
--

I don't see an API provided by JNI to get the current thread id, but I saw one 
here: 

[https://stackoverflow.com/questions/11224394/obtaining-the-thread-id-for-java-threads-in-linux]

If it's too much hassle to include in this jira, please feel free to postpone 
that to a new jira.

Thanks.

 

 

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch, HDFS-14015.005.patch, 
> HDFS-14015.006.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-10-29 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Attachment: (was: HDFS-14035-HDFS-12943.001.patch)

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14012) Add diag info in RetryInvocationHandler

2018-10-29 Thread Yongjun Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667870#comment-16667870
 ] 

Yongjun Zhang commented on HDFS-14012:
--

Thanks [~dineshchitlangia], good to see that it's fixed in trunk. I must have 
been on an old branch.

Sorry for getting back late.

 

 

> Add diag info in RetryInvocationHandler
> ---
>
> Key: HDFS-14012
> URL: https://issues.apache.org/jira/browse/HDFS-14012
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Yongjun Zhang
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> RetryInvocationHandler does the following logging:
> {code:java}
> } else { 
>   LOG.warn("A failover has occurred since the start of this method" + " 
> invocation attempt."); 
> }{code}
> Would be helpful to report the method name, and call stack in this message.
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-10-29 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Status: Patch Available  (was: Open)

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-758) Separate client/server configuration settings

2018-10-29 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667871#comment-16667871
 ] 

Arpit Agarwal commented on HDDS-758:


Proposed offline by [~jnpandey].

> Separate client/server configuration settings
> -
>
> Key: HDDS-758
> URL: https://issues.apache.org/jira/browse/HDDS-758
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Priority: Major
>
> Ozone should have separate config files for client and server. E.g.
> - ozone-site-client.xml
> - ozone-site-server.xml
> Clients should never load ozone-site-server.xml. And vice versa, i.e. servers 
> should never load ozone-site-client.xml.
> This may require duplicating a very small number of settings like OM and SCM 
> address. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-10-29 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667872#comment-16667872
 ] 

Chen Liang commented on HDFS-14035:
---

Posted the v001 patch, which enables delegation tokens for the HAService 
protocol. The way it works is that SaslRpcClient#getTokenInfo currently goes 
through all the security info providers and returns the first non-null token 
info it finds. The security info providers are listed in the 
{{org.apache.hadoop.security.SecurityInfo}} file in the META-INF/services 
directory of each package. The v001 patch introduces a new HDFS-specific 
security info provider; the only thing it does is return a delegation token 
selector when the protocol is the HAService protocol. One good thing about 
this approach is that the configuration is passed around, so we can choose to 
disable this when it is not the observer read case (yet to be implemented). 
This is also very similar to how the existing LocalizerSecurityInfo works.

Still missing a unit test, but I have tried a simple word count job, and the 
job succeeded with this change.
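
For reference, a minimal sketch of the mechanism described above (this is not 
the actual HDFS-14035 patch; the class name and exact wiring are assumptions 
made for illustration). The provider answers only for HAServiceProtocol and 
points the RPC client at the HDFS delegation token selector; it would be 
registered through a META-INF/services/org.apache.hadoop.security.SecurityInfo 
entry so that SaslRpcClient#getTokenInfo can find it.
{code:java}
import java.lang.annotation.Annotation;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector;
import org.apache.hadoop.security.KerberosInfo;
import org.apache.hadoop.security.SecurityInfo;
import org.apache.hadoop.security.token.TokenIdentifier;
import org.apache.hadoop.security.token.TokenInfo;
import org.apache.hadoop.security.token.TokenSelector;

public class HAServiceDelegationTokenSecurityInfo extends SecurityInfo {

  @Override
  public KerberosInfo getKerberosInfo(Class<?> protocol, Configuration conf) {
    // Defer to the other registered providers for Kerberos metadata.
    return null;
  }

  @Override
  public TokenInfo getTokenInfo(Class<?> protocol, Configuration conf) {
    if (!HAServiceProtocol.class.equals(protocol)) {
      // Only answer for HAServiceProtocol; other providers handle the rest.
      return null;
    }
    // Tell the RPC client to select an HDFS delegation token for this protocol.
    return new TokenInfo() {
      @Override
      public Class<? extends Annotation> annotationType() {
        return TokenInfo.class;
      }

      @Override
      public Class<? extends TokenSelector<? extends TokenIdentifier>> value() {
        return DelegationTokenSelector.class;
      }
    };
  }
}
{code}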

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-758) Separate client/server configuration settings

2018-10-29 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-758:
--

 Summary: Separate client/server configuration settings
 Key: HDDS-758
 URL: https://issues.apache.org/jira/browse/HDDS-758
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


Ozone should have separate config files for client and server. E.g.
- ozone-site-client.xml
- ozone-site-server.xml

Clients should never load ozone-site-server.xml. And vice versa, i.e. servers 
should never load ozone-site-client.xml.

This may require duplicating a very small number of settings like OM and SCM 
address. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-10-29 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Attachment: HDFS-14035-HDFS-12943.001.patch

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-755) ContainerInfo and ContainerReplica protobuf changes

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667857#comment-16667857
 ] 

Hadoop QA commented on HDDS-755:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 20m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 42s{color} | {color:orange} root: The patch generated 1 new + 4 unchanged - 
1 fixed = 5 total (was 5) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
34s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 57s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 34s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 54s{color} 
| {color:red} integration-test in the patch failed. {color} |
| 

[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667850#comment-16667850
 ] 

Hadoop QA commented on HDFS-12284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13532 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
21s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13532 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 30s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-12284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946101/HDFS-12284-HDFS-13532.013.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux bbe6fd853f9a 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13532 / 96ae4ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25388/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25388/testReport/ |
| Max. process+thread count | 961 (vs. ulimit of 1) |
| modules 

[jira] [Commented] (HDDS-749) Restructure BlockId class in Ozone

2018-10-29 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667842#comment-16667842
 ] 

Jitendra Nath Pandey commented on HDDS-749:
---

I think the TestBCSID failure is related to the patch.

> Restructure BlockId class in Ozone
> --
>
> Key: HDDS-749
> URL: https://issues.apache.org/jira/browse/HDDS-749
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-749.000.patch
>
>
> As a part of block allocation in SCM, SCM will return a containerBlockId 
> which consists of containerId and localId. Once OM gets the allocated 
> blocks from SCM, it will create a BlockId object which consists of 
> containerID, localId and BlockCommitSequenceId.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667836#comment-16667836
 ] 

Hanisha Koneru commented on HDDS-754:
-

This NullPointerException occurs because the NodeReportPublisher tries to 
generate a report after the DN has been shut down.
Added a check to NodeReportPublisher to verify that the Datanode State Machine 
is running before getting the node report.
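
A minimal illustration of the described guard (names like ReportSource and 
isRunning() are hypothetical, made up for this sketch; they are not the actual 
DatanodeStateMachine / NodeReportPublisher API):
{code:java}
/** Hypothetical stand-in for the datanode state queried by the publisher. */
interface ReportSource {
  boolean isRunning();
  String getNodeReport();
}

public class GuardedReportPublisherSketch {
  private final ReportSource datanode;

  public GuardedReportPublisherSketch(ReportSource datanode) {
    this.datanode = datanode;
  }

  public void publishReport() {
    // Skip report generation once the datanode has been shut down, instead of
    // touching state that has already been cleaned up (the NPE in this issue).
    if (!datanode.isRunning()) {
      return;
    }
    System.out.println(datanode.getNodeReport());
  }
}
{code}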

 

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-29 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-754:

Attachment: HDDS-754.001.patch

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14036) RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by default

2018-10-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667834#comment-16667834
 ] 

Íñigo Goiri commented on HDFS-14036:


This was introduced when we moved RBF to its own module in HDFS-13215.

> RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by default
> -
>
> Key: HDFS-14036
> URL: https://issues.apache.org/jira/browse/HDFS-14036
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
>
> Currently, the default values from hdfs-rbf-default.xml are not being set by 
> default.
> We should add them to HdfsConfiguration by default.
> This may break some unit tests, so we would need to tune some RBF unit tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667833#comment-16667833
 ] 

Íñigo Goiri commented on HDFS-12284:


[~crh], I created HDFS-14036 to handle the hdfs-rbf-default.xml issue and 
assigned it to you.

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, 
> HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, 
> HDFS-12284-HDFS-13532.013.patch, HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14036) RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by default

2018-10-29 Thread JIRA
Íñigo Goiri created HDFS-14036:
--

 Summary: RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by 
default
 Key: HDFS-14036
 URL: https://issues.apache.org/jira/browse/HDFS-14036
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Íñigo Goiri
Assignee: CR Hota


Currently, the default values from hdfs-rbf-default.xml are not being set by 
default.
We should add them to HdfsConfiguration by default.
This may break some unit tests, so we would need to tune some RBF unit tests.
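
A minimal sketch of the idea (not the eventual patch; exactly where in 
HdfsConfiguration the calls would go, and the key used for the check, are 
assumptions for illustration):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class RbfDefaultsSketch {
  public static void main(String[] args) {
    // The same mechanism HdfsConfiguration already uses for hdfs-default.xml
    // and hdfs-site.xml: register the RBF resources so they load by default.
    Configuration.addDefaultResource("hdfs-rbf-default.xml");
    Configuration.addDefaultResource("hdfs-rbf-site.xml");

    Configuration conf = new Configuration();
    // Keys defined in hdfs-rbf-default.xml would now resolve without callers
    // adding the resource explicitly (example key, assumed to be defined there).
    System.out.println(conf.get("dfs.federation.router.rpc.enable"));
  }
}
{code}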



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9243) TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout

2018-10-29 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667819#comment-16667819
 ] 

Hrishikesh Gadre commented on HDFS-9243:


[~jojochuang] let me take a look. [~walter.k.su] thanks for initial triage!

> TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout
> -
>
> Key: HDFS-9243
> URL: https://issues.apache.org/jira/browse/HDFS-9243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
> sometimes times out.
> This is happening on trunk, as can be observed in several recent jenkins jobs 
> (e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/472/)
> On my local Linux machine, this test case times out 6 out of 10 times. When 
> it does not time out, this test takes about 20 seconds; otherwise it takes 
> more than 60 seconds and then times out.
> I suspect it's a deadlock issue, as a deadlock occurred in this test case 
> in HDFS-5527 before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-9243) TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout

2018-10-29 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre reassigned HDFS-9243:
--

Assignee: Hrishikesh Gadre

> TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout
> -
>
> Key: HDFS-9243
> URL: https://issues.apache.org/jira/browse/HDFS-9243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Wei-Chiu Chuang
>Assignee: Hrishikesh Gadre
>Priority: Minor
>
> org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
> sometimes times out.
> This is happening on trunk, as can be observed in several recent jenkins jobs 
> (e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/472/)
> On my local Linux machine, this test case times out 6 out of 10 times. When 
> it does not time out, this test takes about 20 seconds; otherwise it takes 
> more than 60 seconds and then times out.
> I suspect it's a deadlock issue, as a deadlock occurred in this test case 
> in HDFS-5527 before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667816#comment-16667816
 ] 

Hadoop QA commented on HDDS-580:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
40s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
17s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
42s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
25s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
17s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 10s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 37s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 49s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 53s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
59s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Updated] (HDDS-743) S3 multi delete request should return XML header in quiet mode

2018-10-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-743:

   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

Thank you, [~elek], for fixing the issue.

I have committed it to trunk and ozone-0.3.

 

> S3 multi delete request should return XML header in quiet mode
> --
>
> Key: HDDS-743
> URL: https://issues.apache.org/jira/browse/HDDS-743
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-743.001.patch
>
>
> Deleting multiple objects by sending an XML message to the bucket?delete 
> endpoint is implemented in HDDS-701 according to the aws documentation at 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html]
> As the documentation writes:
> {quote}{{By default, the operation uses verbose mode in which the response 
> includes the result of deletion of each key in your request. In quiet mode 
> the response includes only keys where the delete operation encountered an 
> error}}
> {quote}
> In quiet mode (which is an XML element in the input body) we return the 
> XML only in case of errors, based on this paragraph. Without any error we 
> returned an *empty body*.
> But while running the s3a unit tests I found that the right response 
> is an empty XML document instead of an empty body (in case of quiet mode 
> without any error):
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <DeleteResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"/>{code}
> Some of the s3a unit tests failed, as without an XML response the parsing was 
> unsuccessful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


