[jira] [Commented] (HDDS-631) Ozone classpath shell command is not working

2019-01-30 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755812#comment-16755812
 ] 

Sandeep Nemuri commented on HDDS-631:
-

+1 for {{ozone classpath clients}}. Without this we have to manually find the 
lib location and add it to the client (Hadoop) classpath.

When HDFS tries to access o3fs and we only add 
{{hadoop-ozone-filesystem.jar}} to HADOOP_CLASSPATH, the client fails with
{noformat}
Caused by: java.lang.ClassNotFoundException: 
org.apache.ratis.thirdparty.com.google.protobuf.ByteString
{noformat}
Apart from {{hadoop-ozone-filesystem.jar}}, we had to add the Ozone lib 
directory to access o3fs from HDFS.
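
For reference, the manual workaround looks roughly like this (a sketch only; 
the install path is illustrative, assuming a default tarball layout):
{code}
# Hypothetical example: add the o3fs filesystem jar plus the Ozone lib
# directory wildcard to the Hadoop client's classpath.
export HADOOP_CLASSPATH="/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem.jar:/opt/ozone/share/ozone/lib/*:${HADOOP_CLASSPATH}"
{code}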

> Ozone classpath shell command is not working
> 
>
> Key: HDDS-631
> URL: https://issues.apache.org/jira/browse/HDDS-631
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Priority: Blocker
> Attachments: HDDS-631.00.patch
>
>
> In the ozone package (tar), Ozone and its dependency jars are copied to an 
> incorrect location. We used to have the jars in {{share/hadoop/<module>}} for 
> each module. Those directories are empty now. All the jars are placed in the 
> {{share/ozone/lib}} directory.
> With this structure, when we run the {{ozone classpath}} command, we get 
> incorrect output.
> {code}
> $ bin/ozone classpath
> /Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/etc/hadoop:/Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/share/hadoop/common/*
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-30 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1032:

Attachment: HDDS-1032.001.patch

> Package builds are failing with missing org.mockito:mockito-core dependency 
> version
> ---
>
> Key: HDDS-1032
> URL: https://issues.apache.org/jira/browse/HDDS-1032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1032.001.patch
>
>
> Package builds using "mvn package -Pdist -DskipTests -Dtar 
> -Dmaven.javadoc.skip=true -Phdds" are failing with the following error.
> {code}
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 36, column 17
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 89, column 17
>  @ 
> [ERROR] The build could not read 2 projects -> [Help 1]
> [ERROR]   
> [ERROR]   The project 
> org.apache.hadoop:hadoop-hdds-server-framework:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/framework/pom.xml) has 1 
> error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 36, column 17
> [ERROR]   
> [ERROR]   The project org.apache.hadoop:hadoop-hdds-server-scm:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/server-scm/pom.xml) has 
> 1 error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 89, column 17
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-30 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1032:

Status: Patch Available  (was: Open)

Hi [~msingh],

This was broken by HADOOP-14178.  Since {{hadoop-hdds-\*}} and 
{{hadoop-ozone-\*}} modules are descendants of Hadoop 3.2, not 3.3.0-SNAPSHOT, 
they do not inherit the version for {{mockito-core}} recently added to the 
{{hadoop-project}} POM.  The patch simply reverts to using {{mockito-all}}.
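
For illustration, the change amounts to something like this in the affected 
module POMs (a sketch, not the literal patch; {{mockito-all}} is managed in 
the Hadoop 3.2 parent, so no version needs to be specified here):
{code:xml}
<!-- sketch: version inherited from the parent's dependencyManagement -->
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <scope>test</scope>
</dependency>
{code}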

> Package builds are failing with missing org.mockito:mockito-core dependency 
> version
> ---
>
> Key: HDDS-1032
> URL: https://issues.apache.org/jira/browse/HDDS-1032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1032.001.patch
>
>
> Package builds using "mvn package -Pdist -DskipTests -Dtar 
> -Dmaven.javadoc.skip=true -Phdds" are failing with the following error.
> {code}
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 36, column 17
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 89, column 17
>  @ 
> [ERROR] The build could not read 2 projects -> [Help 1]
> [ERROR]   
> [ERROR]   The project 
> org.apache.hadoop:hadoop-hdds-server-framework:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/framework/pom.xml) has 1 
> error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 36, column 17
> [ERROR]   
> [ERROR]   The project org.apache.hadoop:hadoop-hdds-server-scm:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/server-scm/pom.xml) has 
> 1 error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 89, column 17
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1027) Add blockade Tests for datanode isolation and scm failures

2019-01-30 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-1027:
-
Attachment: HDDS-1027.002.patch

> Add blockade Tests for datanode isolation and scm failures
> --
>
> Key: HDDS-1027
> URL: https://issues.apache.org/jira/browse/HDDS-1027
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-1027.001.patch, HDDS-1027.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-631) Ozone classpath shell command is not working

2019-01-30 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648161#comment-16648161
 ] 

Nanda kumar edited comment on HDDS-631 at 1/30/19 8:30 AM:
---

Sounds good; then we can have different commands to get the classpath:
{{ozone classpath om}}
{{ozone classpath scm}}
{{ozone classpath client}}

Still, we need to have a proper directory structure for the jars.




was (Author: nandakumar131):
Sounds good; then we can have different commands to get the classpath:
{{ozone classpath om}}
{{ozone classpath scm}}
{{ozone classpath clients}}

Still, we need to have a proper directory structure for the jars.



> Ozone classpath shell command is not working
> 
>
> Key: HDDS-631
> URL: https://issues.apache.org/jira/browse/HDDS-631
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Priority: Blocker
> Attachments: HDDS-631.00.patch
>
>
> In the ozone package (tar), Ozone and its dependency jars are copied to an 
> incorrect location. We used to have the jars in {{share/hadoop/<module>}} for 
> each module. Those directories are empty now. All the jars are placed in the 
> {{share/ozone/lib}} directory.
> With this structure, when we run the {{ozone classpath}} command, we get 
> incorrect output.
> {code}
> $ bin/ozone classpath
> /Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/etc/hadoop:/Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/share/hadoop/common/*
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14158) Checkpointer ignores configured time period > 5 minutes

2019-01-30 Thread Timo Walter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walter updated HDFS-14158:
---
Attachment: (was: HDFS-14158-trunk-002.patch)

> Checkpointer ignores configured time period > 5 minutes
> ---
>
> Key: HDFS-14158
> URL: https://issues.apache.org/jira/browse/HDFS-14158
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.1
>Reporter: Timo Walter
>Assignee: Timo Walter
>Priority: Minor
>  Labels: checkpoint, hdfs, namenode
> Attachments: 449.patch, HDFS-14158-trunk-001.patch, 
> HDFS-14158-trunk-002.patch
>
>
> The checkpointer always triggers a checkpoint every 5 minutes and ignores the 
> flag "*dfs.namenode.checkpoint.period*" if it's greater than 5 minutes.
> See the code below (in Checkpointer.java):
> {code:java}
> //Main work loop of the Checkpointer
> public void run() {
>   // Check the size of the edit log once every 5 minutes.
>   long periodMSec = 5 * 60;   // 5 minutes
>   if(checkpointConf.getPeriod() < periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
> If the configured period ("*dfs.namenode.checkpoint.period*") is lower than 5 
> minutes, the configured one is used. But it is always ignored if it's greater 
> than 5 minutes, so the effective period is capped at 5 minutes.
>  
> In my opinion, the if-expression should be:
> {code:java}
> if(checkpointConf.getPeriod() > periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
>  
> Then "*dfs.namenode.checkpoint.period*" won't get ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14158) Checkpointer ignores configured time period > 5 minutes

2019-01-30 Thread Timo Walter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walter updated HDFS-14158:
---
Attachment: (was: HDFS-14158-trunk-003.patch)

> Checkpointer ignores configured time period > 5 minutes
> ---
>
> Key: HDFS-14158
> URL: https://issues.apache.org/jira/browse/HDFS-14158
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.1
>Reporter: Timo Walter
>Assignee: Timo Walter
>Priority: Minor
>  Labels: checkpoint, hdfs, namenode
> Attachments: 449.patch, HDFS-14158-trunk-001.patch, 
> HDFS-14158-trunk-002.patch
>
>
> The checkpointer always triggers a checkpoint every 5 minutes and ignores the 
> flag "*dfs.namenode.checkpoint.period*" if it's greater than 5 minutes.
> See the code below (in Checkpointer.java):
> {code:java}
> //Main work loop of the Checkpointer
> public void run() {
>   // Check the size of the edit log once every 5 minutes.
>   long periodMSec = 5 * 60;   // 5 minutes
>   if(checkpointConf.getPeriod() < periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
> If the configured period ("*dfs.namenode.checkpoint.period*") is lower than 5 
> minutes, the configured one is used. But it is always ignored if it's greater 
> than 5 minutes, so the effective period is capped at 5 minutes.
>  
> In my opinion, the if-expression should be:
> {code:java}
> if(checkpointConf.getPeriod() > periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
>  
> Then "*dfs.namenode.checkpoint.period*" won't get ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14158) Checkpointer ignores configured time period > 5 minutes

2019-01-30 Thread Timo Walter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walter updated HDFS-14158:
---
Attachment: HDFS-14158-trunk-002.patch

> Checkpointer ignores configured time period > 5 minutes
> ---
>
> Key: HDFS-14158
> URL: https://issues.apache.org/jira/browse/HDFS-14158
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.1
>Reporter: Timo Walter
>Assignee: Timo Walter
>Priority: Minor
>  Labels: checkpoint, hdfs, namenode
> Attachments: 449.patch, HDFS-14158-trunk-001.patch, 
> HDFS-14158-trunk-002.patch
>
>
> The checkpointer always triggers a checkpoint every 5 minutes and ignores the 
> flag "*dfs.namenode.checkpoint.period*" if it's greater than 5 minutes.
> See the code below (in Checkpointer.java):
> {code:java}
> //Main work loop of the Checkpointer
> public void run() {
>   // Check the size of the edit log once every 5 minutes.
>   long periodMSec = 5 * 60;   // 5 minutes
>   if(checkpointConf.getPeriod() < periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
> If the configured period ("*dfs.namenode.checkpoint.period*") is lower than 5 
> minutes, the configured one is used. But it is always ignored if it's greater 
> than 5 minutes, so the effective period is capped at 5 minutes.
>  
> In my opinion, the if-expression should be:
> {code:java}
> if(checkpointConf.getPeriod() > periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
>  
> Then "*dfs.namenode.checkpoint.period*" won't get ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-30 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-997:

Attachment: HDDS-997.003.patch

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch, HDDS-997.002.patch, 
> HDDS-997.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755964#comment-16755964
 ] 

Hadoop QA commented on HDDS-1032:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} root in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
34s{color} | {color:red} root generated 20 new + 0 unchanged - 0 fixed = 20 
total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 28s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
0s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.TestContainerReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956865/HDDS-1032.001.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 77d80e29bd72 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / d583cc4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2136/artifact/out/branch-mvninstall-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2136/artifact/out/branch-javadoc-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2136/artifact/out/diff-javadoc-javadoc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2136/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2136/testReport/ |
| Max. process+thread count | 1114 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/framework hadoop-hdds/server-scm U: hadoop-hdds |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2136/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Package builds are failing with missing org.mockito:mockito-core dependency 
> version
> ---
>
> Key: HDDS-1032
> URL: https://issues.apache.org/jira/browse/HDDS-1032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1032.001.patch
>
>
> Package builds using "mvn package -Pdist -DskipTests -Dtar 
> -Dmaven.javadoc.skip=true -Phdds" are failing with the following error.
> {code}
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 36, column 17
> [ERROR] 'dependencies.dependency.version' for 

[jira] [Comment Edited] (HDDS-631) Ozone classpath shell command is not working

2019-01-30 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755812#comment-16755812
 ] 

Sandeep Nemuri edited comment on HDDS-631 at 1/30/19 9:15 AM:
--

+1 for {{ozone classpath client}}. Without this we have to manually find the 
lib location and add it to the client (Hadoop) classpath.

When HDFS tries to access o3fs and we only add 
{{hadoop-ozone-filesystem.jar}} to HADOOP_CLASSPATH, the client fails with
{noformat}
Caused by: java.lang.ClassNotFoundException: 
org.apache.ratis.thirdparty.com.google.protobuf.ByteString
{noformat}
Apart from {{hadoop-ozone-filesystem.jar}}, we had to add the Ozone lib 
directory to access o3fs from HDFS.


was (Author: sandeep nemuri):
+1 for {{ozone classpath clients}}. Without this we have to manually find the 
lib location and add it to the client (Hadoop) classpath.

When HDFS tries to access o3fs and we only add 
{{hadoop-ozone-filesystem.jar}} to HADOOP_CLASSPATH, the client fails with
{noformat}
Caused by: java.lang.ClassNotFoundException: 
org.apache.ratis.thirdparty.com.google.protobuf.ByteString
{noformat}
Apart from {{hadoop-ozone-filesystem.jar}}, we had to add the Ozone lib 
directory to access o3fs from HDFS.

> Ozone classpath shell command is not working
> 
>
> Key: HDDS-631
> URL: https://issues.apache.org/jira/browse/HDDS-631
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Priority: Blocker
> Attachments: HDDS-631.00.patch
>
>
> In the ozone package (tar), Ozone and its dependency jars are copied to an 
> incorrect location. We used to have the jars in {{share/hadoop/<module>}} for 
> each module. Those directories are empty now. All the jars are placed in the 
> {{share/ozone/lib}} directory.
> With this structure, when we run the {{ozone classpath}} command, we get 
> incorrect output.
> {code}
> $ bin/ozone classpath
> /Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/etc/hadoop:/Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/share/hadoop/common/*
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-631) Ozone classpath shell command is not working

2019-01-30 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755890#comment-16755890
 ] 

Elek, Marton commented on HDDS-631:
---

bq. Still, we need to have a proper directory structure for the jars.

I am not sure. The classpath can be read from the .classpath files by the 
shell command even without that structure. I can create a PoC to show this...
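
A minimal sketch of that idea (file names and layout are hypothetical, not 
from any patch): each component ships a generated classpath descriptor, and 
the shell command only has to print it:
{code}
# Hypothetical PoC: print the classpath recorded at build time for a component.
component="$1"                 # e.g. om, scm, client
descriptor="${OZONE_HOME}/share/ozone/classpath/${component}.classpath"
[ -f "$descriptor" ] && cat "$descriptor"
{code}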

> Ozone classpath shell command is not working
> 
>
> Key: HDDS-631
> URL: https://issues.apache.org/jira/browse/HDDS-631
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Priority: Blocker
> Attachments: HDDS-631.00.patch
>
>
> In the ozone package (tar), Ozone and its dependency jars are copied to an 
> incorrect location. We used to have the jars in {{share/hadoop/<module>}} for 
> each module. Those directories are empty now. All the jars are placed in the 
> {{share/ozone/lib}} directory.
> With this structure, when we run the {{ozone classpath}} command, we get 
> incorrect output.
> {code}
> $ bin/ozone classpath
> /Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/etc/hadoop:/Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/share/hadoop/common/*
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14158) Checkpointer ignores configured time period > 5 minutes

2019-01-30 Thread Timo Walter (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755922#comment-16755922
 ] 

Timo Walter commented on HDFS-14158:


Removed the latest patch and re-added it afterwards to trigger Jenkins.

The following revert 
([https://github.com/apache/hadoop/commit/d1714c20e9309754397588c9b29818b9a74a80d8]) 
should remove the test that failed during the last builds.
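
For context, the behavior under discussion (see the snippets quoted in the 
description below) can be restated compactly; this is an illustration only, 
not code from any patch:
{code:java}
// current: effective period = min(5 minutes, configured),
// so configured periods above 5 minutes are silently capped
long effective = Math.min(5 * 60, checkpointConf.getPeriod());
// proposed: effective period = max(5 minutes, configured),
// so larger configured periods are honored
long proposed = Math.max(5 * 60, checkpointConf.getPeriod());
{code}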

> Checkpointer ignores configured time period > 5 minutes
> ---
>
> Key: HDFS-14158
> URL: https://issues.apache.org/jira/browse/HDFS-14158
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.1
>Reporter: Timo Walter
>Assignee: Timo Walter
>Priority: Minor
>  Labels: checkpoint, hdfs, namenode
> Attachments: 449.patch, HDFS-14158-trunk-001.patch, 
> HDFS-14158-trunk-002.patch
>
>
> The checkpointer always triggers a checkpoint every 5 minutes and ignores the 
> flag "*dfs.namenode.checkpoint.period*" if it's greater than 5 minutes.
> See the code below (in Checkpointer.java):
> {code:java}
> //Main work loop of the Checkpointer
> public void run() {
>   // Check the size of the edit log once every 5 minutes.
>   long periodMSec = 5 * 60;   // 5 minutes
>   if(checkpointConf.getPeriod() < periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
> If the configured period ("*dfs.namenode.checkpoint.period*") is lower than 5 
> minutes, the configured one is used. But it is always ignored if it's greater 
> than 5 minutes, so the effective period is capped at 5 minutes.
>  
> In my opinion, the if-expression should be:
> {code:java}
> if(checkpointConf.getPeriod() > periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
>  
> Then "*dfs.namenode.checkpoint.period*" won't get ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1016) Allow marking containers as unhealthy

2019-01-30 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755921#comment-16755921
 ] 

Yiqun Lin commented on HDDS-1016:
-

The unit test wasn't actually executed. Looks like this was broken by 
HADOOP-14178, which updated the version of Mockito.

Ran the test {{TestKeyValueContainerMarkUnhealthy}} locally, and this error 
was thrown:
{noformat}
java.lang.NoSuchMethodError: 
org.mockito.internal.matchers.InstanceOf.<init>(Ljava/lang/Class;Ljava/lang/String;)V
at org.mockito.ArgumentMatchers.anyList(ArgumentMatchers.java:478)
at 
org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainerMarkUnhealthy.setUp(TestKeyValueContainerMarkUnhealthy.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{noformat}
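
One way to confirm which Mockito artifact and version a module actually 
resolves is the standard Maven dependency report, e.g.:
{code}
mvn dependency:tree -Dincludes=org.mockito
{code}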

> Allow marking containers as unhealthy
> -
>
> Key: HDDS-1016
> URL: https://issues.apache.org/jira/browse/HDDS-1016
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-1016.01.patch, HDDS-1016.02.patch
>
>
> Containers support an unhealthy state but currently the Container interface 
> on the DataNodes does not expose a way to mark containers as unhealthy.
> -We can also make a few locking improvements to the KeyValueContainer class.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1016) Allow marking containers as unhealthy

2019-01-30 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755921#comment-16755921
 ] 

Yiqun Lin edited comment on HDDS-1016 at 1/30/19 9:57 AM:
--

The unit test wasn't actually executed. Looks like this was broken by 
HADOOP-14178, which updated the version of Mockito.

Ran the test {{TestKeyValueContainerMarkUnhealthy}} locally, and this error 
was thrown:
{noformat}
java.lang.NoSuchMethodError: 
org.mockito.internal.matchers.InstanceOf.<init>(Ljava/lang/Class;Ljava/lang/String;)V
at org.mockito.ArgumentMatchers.anyList(ArgumentMatchers.java:478)
at 
org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainerMarkUnhealthy.setUp(TestKeyValueContainerMarkUnhealthy.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{noformat}
Found some nitpicks that can be fixed while committing.
 Two lines with trailing whitespace:
{noformat}
./hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerMarkUnhealthy.java:107:
  }  
./hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerMarkUnhealthy.java:171:
  }  
{noformat}
Two redundant changes can be reverted:
 *KeyValueContainer.java*
{code:java}
@@ -372,7 +412,7 @@ public void update(Map<String, String> metadata, boolean 
forceUpdate)
   containerData.setMetadata(oldMetadata);
   throw ex;
 } finally {
-  writeUnlock();
+writeUnlock();
 }
{code}
*KeyValueHandler.java*
{code:java}
@@ -781,7 +855,7 @@ private void checkContainerOpen(KeyValueContainer 
kvContainer)
 throw new StorageContainerException(msg, result);
   }
 
-  public Container importContainer(long containerID, long maxSize,
+public Container importContainer(long containerID, long maxSize,
{code}

The patch seems ready to go :). Giving my +1.



was (Author: linyiqun):
The unit test wasn't actually executed. Looks like this was broken by 
HADOOP-14178, which updated the version of Mockito.

Ran the test {{TestKeyValueContainerMarkUnhealthy}} locally, and this error 
was thrown:
{noformat}
java.lang.NoSuchMethodError: 
org.mockito.internal.matchers.InstanceOf.<init>(Ljava/lang/Class;Ljava/lang/String;)V
at org.mockito.ArgumentMatchers.anyList(ArgumentMatchers.java:478)
at 
org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainerMarkUnhealthy.setUp(TestKeyValueContainerMarkUnhealthy.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{noformat}

> Allow marking containers as unhealthy
> -
>
> Key: HDDS-1016
> URL: https://issues.apache.org/jira/browse/HDDS-1016
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-1016.01.patch, HDDS-1016.02.patch
>
>
> Containers support an unhealthy state but currently the Container interface 
> on the DataNodes does not expose a way to mark containers as unhealthy.
> -We can also make a few locking improvements to the KeyValueContainer class.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-30 Thread Nilotpal Nandi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755951#comment-16755951
 ] 

Nilotpal Nandi commented on HDDS-997:
-

Thanks for reviewing the patch, [~msingh].

I have addressed the comments and uploaded a new patch.

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch, HDDS-997.002.patch, 
> HDDS-997.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755956#comment-16755956
 ] 

Hadoop QA commented on HDDS-997:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} root in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  6s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 13s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
16s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  4m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.s3.endpoint.TestRootList |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-997 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956873/HDDS-997.003.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  shellcheck  |
| uname | Linux 27b8afb81790 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / d583cc4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2137/artifact/out/branch-mvninstall-root.txt
 |
| shellcheck | v0.4.6 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2137/artifact/out/branch-javadoc-root.txt
 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2137/artifact/out/patch-mvninstall-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2137/artifact/out/patch-javadoc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2137/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2137/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2137/testReport/ |
| Max. process+thread count | 132 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2137/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch, HDDS-997.002.patch, 
> HDDS-997.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To 

[jira] [Updated] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-30 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1032:

Status: Open  (was: Patch Available)

> Package builds are failing with missing org.mockito:mockito-core dependency 
> version
> ---
>
> Key: HDDS-1032
> URL: https://issues.apache.org/jira/browse/HDDS-1032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1032.001.patch
>
>
> Package builds using "mvn package -Pdist -DskipTests -Dtar 
> -Dmaven.javadoc.skip=true -Phdds" are failing with the following error.
> {code}
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 36, column 17
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 89, column 17
>  @ 
> [ERROR] The build could not read 2 projects -> [Help 1]
> [ERROR]   
> [ERROR]   The project 
> org.apache.hadoop:hadoop-hdds-server-framework:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/framework/pom.xml) has 1 
> error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 36, column 17
> [ERROR]   
> [ERROR]   The project org.apache.hadoop:hadoop-hdds-server-scm:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/server-scm/pom.xml) has 
> 1 error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 89, column 17
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-30 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1032:

Status: Patch Available  (was: Open)

> Package builds are failing with missing org.mockito:mockito-core dependency 
> version
> ---
>
> Key: HDDS-1032
> URL: https://issues.apache.org/jira/browse/HDDS-1032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1032.001.patch
>
>
> Package builds using "mvn package -Pdist -DskipTests -Dtar 
> -Dmaven.javadoc.skip=true -Phdds" are failing with the following error.
> {code}
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 36, column 17
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 89, column 17
>  @ 
> [ERROR] The build could not read 2 projects -> [Help 1]
> [ERROR]   
> [ERROR]   The project 
> org.apache.hadoop:hadoop-hdds-server-framework:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/framework/pom.xml) has 1 
> error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 36, column 17
> [ERROR]   
> [ERROR]   The project org.apache.hadoop:hadoop-hdds-server-scm:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/server-scm/pom.xml) has 
> 1 error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 89, column 17
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756020#comment-16756020
 ] 

Hadoop QA commented on HDFS-14118:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 30s{color} | {color:orange} root: The patch generated 8 new + 108 unchanged 
- 0 fixed = 116 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
54s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}252m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14118 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956855/HDFS-14118.009.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  

[jira] [Commented] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756027#comment-16756027
 ] 

Hadoop QA commented on HDFS-14118:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 27s{color} | {color:orange} root: The patch generated 8 new + 108 unchanged 
- 0 fixed = 116 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
7s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}254m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14118 |
| JIRA Patch URL | 

[jira] [Commented] (HDDS-1027) Add blockade Tests for datanode isolation and scm failures

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755988#comment-16755988
 ] 

Hadoop QA commented on HDDS-1027:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} root in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
14s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 
39s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 10s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
10s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1027 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956876/HDDS-1027.002.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 03d0bb8064ac 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / d583cc4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2138/artifact/out/branch-mvninstall-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2138/artifact/out/branch-javadoc-root.txt
 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2138/artifact/out/patch-mvninstall-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2138/artifact/out/patch-javadoc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2138/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2138/testReport/ |
| Max. process+thread count | 1139 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2138/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add blockade Tests for datanode isolation and scm failures
> --
>
> Key: HDDS-1027
> URL: https://issues.apache.org/jira/browse/HDDS-1027
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-1027.001.patch, HDDS-1027.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-30 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14202:
-
Attachment: HDFS-14202.004.patch

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-30 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16755983#comment-16755983
 ] 

Mukul Kumar Singh commented on HDDS-1032:
-

Thanks for working on this [~adoroszlai]. I am able to build the package after 
the patch.

> Package builds are failing with missing org.mockito:mockito-core dependency 
> version
> ---
>
> Key: HDDS-1032
> URL: https://issues.apache.org/jira/browse/HDDS-1032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1032.001.patch
>
>
> Package builds using "mvn package -Pdist -DskipTests -Dtar 
> -Dmaven.javadoc.skip=true -Phdds" are failing with the following error.
> {code}
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 36, column 17
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 89, column 17
>  @ 
> [ERROR] The build could not read 2 projects -> [Help 1]
> [ERROR]   
> [ERROR]   The project 
> org.apache.hadoop:hadoop-hdds-server-framework:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/framework/pom.xml) has 1 
> error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 36, column 17
> [ERROR]   
> [ERROR]   The project org.apache.hadoop:hadoop-hdds-server-scm:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/server-scm/pom.xml) has 
> 1 error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 89, column 17
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-30 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1032:
---

Assignee: Doroszlai, Attila  (was: Mukul Kumar Singh)

> Package builds are failing with missing org.mockito:mockito-core dependency 
> version
> ---
>
> Key: HDDS-1032
> URL: https://issues.apache.org/jira/browse/HDDS-1032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1032.001.patch
>
>
> Package builds using "mvn package -Pdist -DskipTests -Dtar 
> -Dmaven.javadoc.skip=true -Phdds" are failing with the following error.
> {code}
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 36, column 17
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 89, column 17
>  @ 
> [ERROR] The build could not read 2 projects -> [Help 1]
> [ERROR]   
> [ERROR]   The project 
> org.apache.hadoop:hadoop-hdds-server-framework:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/framework/pom.xml) has 1 
> error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 36, column 17
> [ERROR]   
> [ERROR]   The project org.apache.hadoop:hadoop-hdds-server-scm:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/server-scm/pom.xml) has 
> 1 error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 89, column 17
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14158) Checkpointer ignores configured time period > 5 minutes

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756019#comment-16756019
 ] 

Hadoop QA commented on HDFS-14158:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14158 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956869/HDFS-14158-trunk-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fcb7b27c3b05 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d583cc4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26093/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Updated] (HDFS-14158) Checkpointer ignores configured time period > 5 minutes

2019-01-30 Thread Timo Walter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walter updated HDFS-14158:
---
Attachment: (was: HDFS-14158-trunk-002.patch)

> Checkpointer ignores configured time period > 5 minutes
> ---
>
> Key: HDFS-14158
> URL: https://issues.apache.org/jira/browse/HDFS-14158
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.1
>Reporter: Timo Walter
>Assignee: Timo Walter
>Priority: Minor
>  Labels: checkpoint, hdfs, namenode
> Attachments: 449.patch, HDFS-14158-trunk-001.patch, 
> HDFS-14158-trunk-002.patch
>
>
> The checkpointer always triggers a checkpoint every 5 minutes and ignores the 
> flag "*dfs.namenode.checkpoint.period*" if it's greater than 5 minutes.
> See the code below (in Checkpointer.java):
> {code:java}
> //Main work loop of the Checkpointer
> public void run() {
>   // Check the size of the edit log once every 5 minutes.
>   long periodMSec = 5 * 60;   // 5 minutes
>   if(checkpointConf.getPeriod() < periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
> If the configured period ("*dfs.namenode.checkpoint.period*") is lower than 5 
> minutes, the configured value is used. But the setting is always ignored if 
> it's greater than 5 minutes.
>  
> In my opinion, the if-expression should be:
> {code:java}
> if(checkpointConf.getPeriod() > periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
>  
> Then "*dfs.namenode.checkpoint.period*" won't get ignored.
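
As a standalone illustration of the selection bug quoted above (this is not 
the actual Checkpointer code; the constant is simplified to seconds):
{code:java}
public class PeriodSelectionSketch {
  public static void main(String[] args) {
    long configuredSec = 15 * 60; // dfs.namenode.checkpoint.period = 15 minutes
    long periodSec = 5 * 60;      // hard-coded 5-minute check interval

    // The '<' comparison can only shrink the period, never grow it,
    // so any configured value above 5 minutes is silently capped.
    if (configuredSec < periodSec) {
      periodSec = configuredSec;
    }
    System.out.println(periodSec); // prints 300 (5 min), not 900 (15 min)
  }
}
{code}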



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14158) Checkpointer ignores configured time period > 5 minutes

2019-01-30 Thread Timo Walter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walter updated HDFS-14158:
---
Attachment: HDFS-14158-trunk-002.patch

> Checkpointer ignores configured time period > 5 minutes
> ---
>
> Key: HDFS-14158
> URL: https://issues.apache.org/jira/browse/HDFS-14158
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.1
>Reporter: Timo Walter
>Assignee: Timo Walter
>Priority: Minor
>  Labels: checkpoint, hdfs, namenode
> Attachments: 449.patch, HDFS-14158-trunk-001.patch, 
> HDFS-14158-trunk-002.patch
>
>
> The checkpointer always triggers a checkpoint every 5 minutes and ignores the 
> flag "*dfs.namenode.checkpoint.period*" if it's greater than 5 minutes.
> See the code below (in Checkpointer.java):
> {code:java}
> //Main work loop of the Checkpointer
> public void run() {
>   // Check the size of the edit log once every 5 minutes.
>   long periodMSec = 5 * 60;   // 5 minutes
>   if(checkpointConf.getPeriod() < periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
> If the configured period ("*dfs.namenode.checkpoint.period*") is lower than 5 
> minutes, the configured value is used. But the setting is always ignored if 
> it's greater than 5 minutes.
>  
> In my opinion, the if-expression should be:
> {code:java}
> if(checkpointConf.getPeriod() > periodMSec) {
> periodMSec = checkpointConf.getPeriod();
>   }
> {code}
>  
> Then "*dfs.namenode.checkpoint.period*" won't get ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-30 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756044#comment-16756044
 ] 

Elek, Marton commented on HDDS-1032:


+1

We need the mockito-core -> mockito-all change only together with the 
3.2.0->3.3.0 hadoop dependency upgrade.

Will commit to the trunk soon.
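
For reference, the kind of change involved is roughly the following. This is a 
hypothetical sketch only, assuming the hadoop 3.2.0 parent pom manages a 
version for mockito-all but not for mockito-core; see the committed patch for 
the real diff:
{code:xml}
<!-- Hypothetical sketch: revert the unversioned mockito-core reference to
     the artifact whose version the parent pom still manages. -->
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <scope>test</scope>
</dependency>
{code}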

> Package builds are failing with missing org.mockito:mockito-core dependency 
> version
> ---
>
> Key: HDDS-1032
> URL: https://issues.apache.org/jira/browse/HDDS-1032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1032.001.patch
>
>
> Package builds using "mvn package -Pdist -DskipTests -Dtar 
> -Dmaven.javadoc.skip=true -Phdds" are failing with the following error.
> {code}
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 36, column 17
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 89, column 17
>  @ 
> [ERROR] The build could not read 2 projects -> [Help 1]
> [ERROR]   
> [ERROR]   The project 
> org.apache.hadoop:hadoop-hdds-server-framework:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/framework/pom.xml) has 1 
> error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 36, column 17
> [ERROR]   
> [ERROR]   The project org.apache.hadoop:hadoop-hdds-server-scm:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/server-scm/pom.xml) has 
> 1 error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 89, column 17
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-956) MultipartUpload: List Parts for a Multipart upload key

2019-01-30 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756053#comment-16756053
 ] 

Elek, Marton commented on HDDS-956:
---

Sorry for checking it only now; unfortunately it no longer applies. 

Can you please rebase it? I will check it immediately...

> MultipartUpload: List Parts for a Multipart upload key
> --
>
> Key: HDDS-956
> URL: https://issues.apache.org/jira/browse/HDDS-956
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-956.00.patch, HDDS-956.01.patch, HDDS-956.02.patch
>
>
> This Jira is to implement the backend supporting the S3 list-parts API for an 
> object.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-30 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1032:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to the trunk. Thanks, [~adoroszlai], for the quick fix.

> Package builds are failing with missing org.mockito:mockito-core dependency 
> version
> ---
>
> Key: HDDS-1032
> URL: https://issues.apache.org/jira/browse/HDDS-1032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1032.001.patch
>
>
> Package builds using "mvn package -Pdist -DskipTests -Dtar 
> -Dmaven.javadoc.skip=true -Phdds" are failing with the following error.
> {code}
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 36, column 17
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 89, column 17
>  @ 
> [ERROR] The build could not read 2 projects -> [Help 1]
> [ERROR]   
> [ERROR]   The project 
> org.apache.hadoop:hadoop-hdds-server-framework:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/framework/pom.xml) has 1 
> error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 36, column 17
> [ERROR]   
> [ERROR]   The project org.apache.hadoop:hadoop-hdds-server-scm:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/server-scm/pom.xml) has 
> 1 error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 89, column 17
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2019-01-30 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756039#comment-16756039
 ] 

Elek, Marton commented on HDDS-891:
---

bq. Yup: HDDS-146, for example, changed start-build-env.sh (and introduced a 
bug)

[~aw] Please let me know if you have information about the mentioned bug.

> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone pre-commit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality from yetus.
> Yetus personalities are bash scripts which contain personalization for 
> specific builds.
> The hadoop personality tries to identify which project should be built and 
> uses a partial build to build only the required subprojects, because the full 
> build is very time consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop-specific shading 
> test)
> 3.) We prefer to do a full build and full unit test for the hadoop-ozone and 
> hadoop-hdds subprojects (for example the hadoop-ozone integration test should 
> always be executed, as it contains many generic unit tests)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756097#comment-16756097
 ] 

Hudson commented on HDDS-1032:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15852 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15852/])
HDDS-1032. Package builds are failing with missing (elek: rev 
14441ccbc67f653d22cf40be98f3d31a054301e4)
* (edit) hadoop-hdds/framework/pom.xml
* (edit) hadoop-hdds/server-scm/pom.xml


> Package builds are failing with missing org.mockito:mockito-core dependency 
> version
> ---
>
> Key: HDDS-1032
> URL: https://issues.apache.org/jira/browse/HDDS-1032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1032.001.patch
>
>
> Package builds using "mvn package -Pdist -DskipTests -Dtar 
> -Dmaven.javadoc.skip=true -Phdds" are failing with the following error.
> {code}
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 36, column 17
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
> missing. @ line 89, column 17
>  @ 
> [ERROR] The build could not read 2 projects -> [Help 1]
> [ERROR]   
> [ERROR]   The project 
> org.apache.hadoop:hadoop-hdds-server-framework:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/framework/pom.xml) has 1 
> error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 36, column 17
> [ERROR]   
> [ERROR]   The project org.apache.hadoop:hadoop-hdds-server-scm:0.4.0-SNAPSHOT 
> (/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/server-scm/pom.xml) has 
> 1 error
> [ERROR] 'dependencies.dependency.version' for 
> org.mockito:mockito-core:jar is missing. @ line 89, column 17
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14084) Need for more stats in DFSClient

2019-01-30 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14084:

Status: In Progress  (was: Patch Available)

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch, 
> HDFS-14084.003.patch, HDFS-14084.004.patch, HDFS-14084.005.patch, 
> HDFS-14084.006.patch, HDFS-14084.007.patch, HDFS-14084.008.patch, 
> HDFS-14084.009.patch, HDFS-14084.010.patch, HDFS-14084.011.patch, 
> HDFS-14084.012.patch, HDFS-14084.013.patch, HDFS-14084.014.patch, 
> HDFS-14084.015.patch, HDFS-14084.016.patch, HDFS-14084.017.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem it is 
> becoming more like a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to know the workload or stress on the 
> Namenode.
> However, there is a need to collect more statistics for different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> or how frequent each operation is. These statistics can be exposed to the 
> users of the DFS Client, and they can periodically log them or do some sort 
> of flow control if the response is slow. This will also help to isolate HDFS 
> issues in a mixed environment where a node runs, say, Spark, HBase and Impala 
> together. We can check the throughput of different operations across clients 
> and isolate problems caused by a noisy neighbor, network congestion, or a 
> shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756362#comment-16756362
 ] 

Hadoop QA commented on HDDS-1031:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 33s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
27s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1031 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956920/HDDS-1031.00.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 27e2be5d82c5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 0e95ae4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2139/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2139/testReport/ |
| Max. process+thread count | 1143 (vs. ulimit of 1) |
| modules | C: hadoop-hdds hadoop-ozone U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2139/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1031.00.patch
>
>
> This is related to RATIS-460.
> When the datanode is restarted after ratis has taken a snapshot, we see the 
> stack trace below and the DN won't boot up. For more info, refer to RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> 

[jira] [Commented] (HDDS-956) MultipartUpload: List Parts for a Multipart upload key

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756410#comment-16756410
 ] 

Hadoop QA commented on HDDS-956:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} root: The patch generated 0 new + 2 unchanged - 36 
fixed = 2 total (was 38) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 19s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
54s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.TestContainerReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-956 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956924/HDDS-956.03.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 1870d4276991 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 0e95ae4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2140/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2140/testReport/ |
| Max. process+thread count | 1132 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2140/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: List Parts for a Multipart upload key
> --
>
> Key: HDDS-956
> URL: https://issues.apache.org/jira/browse/HDDS-956
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-956.00.patch, HDDS-956.01.patch, HDDS-956.02.patch, 

[jira] [Updated] (HDDS-1012) Add Default CertificateClient implementation

2019-01-30 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1012:
-
Attachment: HDDS-1012.03.patch

> Add Default CertificateClient implementation
> 
>
> Key: HDDS-1012
> URL: https://issues.apache.org/jira/browse/HDDS-1012
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1012.01.patch, HDDS-1012.02.patch, 
> HDDS-1012.03.patch
>
>
> Add Default CertificateClient implementation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14244) hdfs++ doesn't add necessary libraries to dynamic library link

2019-01-30 Thread Owen O'Malley (JIRA)
Owen O'Malley created HDFS-14244:


 Summary: hdfs++ doesn't add necessary libraries to dynamic library 
link
 Key: HDFS-14244
 URL: https://issues.apache.org/jira/browse/HDFS-14244
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley


When linking with shared libraries, the libhdfs++ cmake file doesn't link 
correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1030) Move auditparser robot tests under ozone basic

2019-01-30 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756302#comment-16756302
 ] 

Elek, Marton commented on HDDS-1030:


+1 

I can confirm that it works well:

{code}
Basic.Auditparser :: Smoketest ozone cluster startup  
==
Initiating freon to generate data | PASS |
--
Testing audit parser  | PASS |
--
Basic.Auditparser :: Smoketest ozone cluster startup  | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
{code}

Will commit to the trunk soon...

> Move auditparser robot tests under ozone basic
> --
>
> Key: HDDS-1030
> URL: https://issues.apache.org/jira/browse/HDDS-1030
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1030.00.patch
>
>
> Based on a [review 
> comment|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16753848&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16753848]
>  from [~elek] in HDDS-1007, this Jira aims to move the audit parser robot 
> test to the basic tests folder so that it can use the ozone env.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1030) Move auditparser robot tests under ozone basic

2019-01-30 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1030:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Committed to the trunk. Thank you, [~dineshchitlangia], for the quick fix.

> Move auditparser robot tests under ozone basic
> --
>
> Key: HDDS-1030
> URL: https://issues.apache.org/jira/browse/HDDS-1030
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1030.00.patch
>
>
> Based on a [review 
> comment|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16753848&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16753848]
>  from [~elek] in HDDS-1007, this Jira aims to move the audit parser robot 
> test to the basic tests folder so that it can use the ozone env.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-922) Create isolated classloader to use ozonefs with any older hadoop versions

2019-01-30 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-922:
--
Status: In Progress  (was: Patch Available)

> Create isolated classloader to use ozonefs with any older hadoop versions
> 
>
> Key: HDDS-922
> URL: https://issues.apache.org/jira/browse/HDDS-922
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-922.001.patch, HDDS-922.002.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As of now we create a shaded ozonefs artifact which includes all the required 
> class files to use ozonefs (the Hadoop compatible file system for Ozone).
> But the shading process of this artifact is very simple: it includes all the 
> class files, but no relocation rules (package name renaming) are configured. 
> With this approach ozonefs can be used from the compatible hadoop version 
> (this is hadoop 3.1 only, I guess) but can't be used with any older hadoop 
> version, as it requires the newer version of hadoop-common.
> I tried to configure full shading (with relocation) but it's not a simple 
> task. For example a pure (non-relocated) Configuration is required by 
> ozonefs itself, but another, newer Configuration class is required by the 
> ozone client code which is a dependency of OzoneFileSystem. So we need a 
> relocated and a non-relocated class at the same time.
> I tried a different approach: I moved all of the ozone specific classes out 
> of OzoneFileSystem into an adapter class (OzoneClientAdapter). In case of an 
> older hadoop version the adapter class itself can be loaded with an isolated 
> classloader. The isolated classloader can load all the required classes from 
> the jar file at a specific path. It doesn't require any specific package 
> relocation, as the default class loader doesn't load these classes.
> The OzoneFileSystem (in case of an older hadoop version) can load the adapter 
> with the isolated classloader, and only a few classes need to be shared 
> between the normal and isolated classloaders (the interface of the adapter 
> and the types in the method signatures). All of the other ozone classes and 
> the newer hadoop dependencies are hidden by the isolated classloader.
> This patch is more like a proof of concept; I would like to start a 
> discussion about this approach. I successfully used the generated artifact to 
> use ozonefs from the spark 2.4 default distribution (which includes hadoop 
> 2.7). For a final patch I would add a check to use ozonefs without any 
> classpath separation by default (this could be configured or chosen 
> automatically).
> For using spark (+ hadoop 2.7 + kubernetes scheduler) together with ozone, 
> you can check this screencast: 
> https://www.youtube.com/watch?v=cpRJcSHIEdM&t=8s
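
To make the isolated-classloader idea concrete, here is a small, 
self-contained sketch. The jar path and the adapter class name are 
illustrative assumptions; the actual patch additionally shares the adapter 
interface with the application classloader so the returned instance can be 
cast and used directly:
{code:java}
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class IsolatedLoaderSketch {
  public static void main(String[] args) throws Exception {
    // args[0]: jar bundling the ozone client classes + newer hadoop-common.
    URL[] jars = { new File(args[0]).toURI().toURL() };

    // Parent = null: only the bootstrap classloader is consulted, so the
    // application's (older) hadoop classes stay invisible to the adapter.
    try (URLClassLoader isolated = new URLClassLoader(jars, null)) {
      Class<?> impl =
          isolated.loadClass("org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl");
      Object adapter = impl.getDeclaredConstructor().newInstance();
      System.out.println("Adapter loaded by: "
          + adapter.getClass().getClassLoader());
    }
  }
}
{code}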



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2019-01-30 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756349#comment-16756349
 ] 

Pranay Singh commented on HDFS-14084:
-

Uploaded the new patch, HDFS-14084.017.

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch, 
> HDFS-14084.003.patch, HDFS-14084.004.patch, HDFS-14084.005.patch, 
> HDFS-14084.006.patch, HDFS-14084.007.patch, HDFS-14084.008.patch, 
> HDFS-14084.009.patch, HDFS-14084.010.patch, HDFS-14084.011.patch, 
> HDFS-14084.012.patch, HDFS-14084.013.patch, HDFS-14084.014.patch, 
> HDFS-14084.015.patch, HDFS-14084.016.patch, HDFS-14084.017.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem it is 
> becoming more like a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to know the workload or stress on the 
> Namenode.
> However, there is a need to collect more statistics for different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> or how frequent each operation is. These statistics can be exposed to the 
> users of the DFS Client, and they can periodically log them or do some sort 
> of flow control if the response is slow. This will also help to isolate HDFS 
> issues in a mixed environment where a node runs, say, Spark, HBase and Impala 
> together. We can check the throughput of different operations across clients 
> and isolate problems caused by a noisy neighbor, network congestion, or a 
> shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1016) Allow marking containers as unhealthy

2019-01-30 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1016:

Attachment: HDDS-1016.03.patch

> Allow marking containers as unhealthy
> -
>
> Key: HDDS-1016
> URL: https://issues.apache.org/jira/browse/HDDS-1016
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-1016.01.patch, HDDS-1016.02.patch, 
> HDDS-1016.03.patch
>
>
> Containers support an unhealthy state but currently the Container interface 
> on the DataNodes does not expose a way to mark containers as unhealthy.
> -We can also make a few locking improvements to the KeyValueContainer class.-
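
A minimal sketch of the missing hook described above (the method name and 
state value are assumptions for illustration, not the actual HDDS-1016 patch):
{code:java}
// Hypothetical addition to the datanode Container interface.
public interface Container {
  enum State { OPEN, CLOSED, UNHEALTHY }

  State getContainerState();

  // New hook: force the container into the UNHEALTHY state so that it is
  // excluded from normal reads/writes and can be reported as damaged.
  void markContainerUnhealthy() throws java.io.IOException;
}
{code}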



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1015) Cleanup snapshot repository settings

2019-01-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756335#comment-16756335
 ] 

Bharat Viswanadham commented on HDDS-1015:
--

Yes, [~elek].

This was added as part of HDDS-702. Even without those changes I am able to 
compile, and Jenkins gave a +1 as well. Your question made me wonder how this 
is actually working.

When I ran the build and checked the output, it was able to download the ratis 
info from the snapshot repo. I think the apache ratis snapshot repo might be a 
public link from which maven can download the jars, so there is no need to 
specify the repository. (But I was not able to find any information to confirm 
this.)

 

*Downloading: 
https://repository.apache.org/snapshots/org/apache/ratis/ratis/0.4.0-a8c4ca0-SNAPSHOT/maven-metadata.xml*
*Downloaded: 
https://repository.apache.org/snapshots/org/apache/ratis/ratis/0.4.0-a8c4ca0-SNAPSHOT/maven-metadata.xml*
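
One likely explanation (an assumption, not confirmed in this thread) is that 
the ASF parent pom already declares the apache snapshots repository, which 
would make module-level entries like the following sketch redundant:
{code:xml}
<!-- Illustrative only: the kind of snapshot repository block that
     HDDS-1015 removes from hadoop-hdds/pom.xml and hadoop-ozone/pom.xml. -->
<repositories>
  <repository>
    <id>apache.snapshots</id>
    <url>https://repository.apache.org/snapshots</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
{code}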

> Cleanup snapshot repository settings
> 
>
> Key: HDDS-1015
> URL: https://issues.apache.org/jira/browse/HDDS-1015
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1015.00.patch
>
>
> Now we can clean up the snapshot repository settings from hadoop-hdds/pom.xml 
> and hadoop-ozone/pom.xml.
> Since we have moved our dependencies from Hadoop 3.2.1-SNAPSHOT to 3.2.0 as 
> part of HDDS-993, we no longer require them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14084) Need for more stats in DFSClient

2019-01-30 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14084:

Attachment: HDFS-14084.017.patch

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch, 
> HDFS-14084.003.patch, HDFS-14084.004.patch, HDFS-14084.005.patch, 
> HDFS-14084.006.patch, HDFS-14084.007.patch, HDFS-14084.008.patch, 
> HDFS-14084.009.patch, HDFS-14084.010.patch, HDFS-14084.011.patch, 
> HDFS-14084.012.patch, HDFS-14084.013.patch, HDFS-14084.014.patch, 
> HDFS-14084.015.patch, HDFS-14084.016.patch, HDFS-14084.017.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem it is 
> becoming more like a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to know the workload or stress on the 
> Namenode.
> However, there is a need to collect more statistics for different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> or how frequent each operation is. These statistics can be exposed to the 
> users of the DFS Client, and they can periodically log them or do some sort 
> of flow control if the response is slow. This will also help to isolate HDFS 
> issues in a mixed environment where a node runs, say, Spark, HBase and Impala 
> together. We can check the throughput of different operations across clients 
> and isolate problems caused by a noisy neighbor, network congestion, or a 
> shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14084) Need for more stats in DFSClient

2019-01-30 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14084:

Status: Patch Available  (was: In Progress)

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch, 
> HDFS-14084.003.patch, HDFS-14084.004.patch, HDFS-14084.005.patch, 
> HDFS-14084.006.patch, HDFS-14084.007.patch, HDFS-14084.008.patch, 
> HDFS-14084.009.patch, HDFS-14084.010.patch, HDFS-14084.011.patch, 
> HDFS-14084.012.patch, HDFS-14084.013.patch, HDFS-14084.014.patch, 
> HDFS-14084.015.patch, HDFS-14084.016.patch, HDFS-14084.017.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem it is 
> becoming more like a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to know the workload or stress on the 
> Namenode.
> However, there is a need to collect more statistics for different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> or how frequent each operation is. These statistics can be exposed to the 
> users of the DFS Client, and they can periodically log them or do some sort 
> of flow control if the response is slow. This will also help to isolate HDFS 
> issues in a mixed environment where a node runs, say, Spark, HBase and Impala 
> together. We can check the throughput of different operations across clients 
> and isolate problems caused by a noisy neighbor, network congestion, or a 
> shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1016) Allow marking containers as unhealthy

2019-01-30 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756385#comment-16756385
 ] 

Arpit Agarwal commented on HDDS-1016:
-

Thanks [~linyiqun]. Nice catch on the whitespace issues, fixed them in the v03 
patch.

I built and reran tests locally and they passed. Hopefully we will get a clean 
run this time from Jenkins.

> Allow marking containers as unhealthy
> -
>
> Key: HDDS-1016
> URL: https://issues.apache.org/jira/browse/HDDS-1016
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-1016.01.patch, HDDS-1016.02.patch, 
> HDDS-1016.03.patch
>
>
> Containers support an unhealthy state but currently the Container interface 
> on the DataNodes does not expose a way to mark containers as unhealthy.
> -We can also make a few locking improvements to the KeyValueContainer class.-
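
A hypothetical shape of the missing hook (the actual patch may differ in 
naming and signature):

{code:java}
import java.io.IOException;

/** Sketch only: the state-changing method the description says is missing. */
public interface Container {
  // ... existing lifecycle methods (create, close, delete, ...) ...

  /** Transition this container to the UNHEALTHY state. */
  void markContainerUnhealthy() throws IOException;
}
{code}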



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756404#comment-16756404
 ] 

Bharat Viswanadham commented on HDDS-1029:
--

Updated the patch to add some test cases in TestDeleteContainerHandler to test 
this flag.

> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1029.00.patch, HDDS-1029.01.patch
>
>
> Right now we check the container state, and we delete the container only if 
> it is not open.
> We need a way to delete containers which are open, so adding a force flag 
> will allow deleting a container without any state checks. (This is required 
> for deleting replicas when SCM detects over-replication; the container to 
> delete can be in the open state.)
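
A minimal sketch of how such a force flag could bypass the state check (type 
and method names are assumptions, not the actual patch):

{code:java}
import java.io.IOException;

interface Container {               // minimal stand-in for the DN container
  boolean isOpen();
  void delete() throws IOException;
}

/** Sketch only: skip the open-state check when force is set. */
class ContainerDeleter {
  void deleteContainer(Container container, boolean force) throws IOException {
    if (!force && container.isOpen()) {
      throw new IOException(
          "Deleting an open container is not allowed without the force flag.");
    }
    container.delete();
  }
}
{code}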



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14158) Checkpointer ignores configured time period > 5 minutes

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756200#comment-16756200
 ] 

Hadoop QA commented on HDFS-14158:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
|
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14158 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956889/HDFS-14158-trunk-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 186c28232a3c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d583cc4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26094/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDDS-956) MultipartUpload: List Parts for a Multipart upload key

2019-01-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756357#comment-16756357
 ] 

Bharat Viswanadham commented on HDDS-956:
-

Thank you, [~elek], for checking it out.

Attached a rebased patch.

> MultipartUpload: List Parts for a Multipart upload key
> --
>
> Key: HDDS-956
> URL: https://issues.apache.org/jira/browse/HDDS-956
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-956.00.patch, HDDS-956.01.patch, HDDS-956.02.patch, 
> HDDS-956.03.patch
>
>
> This Jira is to implement the backend to support the S3 API for listing the 
> parts of an object.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html
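
For reference, this is roughly how the resulting API is exercised from the S3 
side (AWS SDK for Java v1; the {{s3}} client is assumed to point at an Ozone 
S3 gateway endpoint, and bucket/key/uploadId are placeholders):

{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ListPartsRequest;
import com.amazonaws.services.s3.model.PartListing;
import com.amazonaws.services.s3.model.PartSummary;

class ListPartsExample {
  static void printParts(AmazonS3 s3, String bucket, String key,
      String uploadId) {
    // Ask the gateway for the parts uploaded so far in this multipart upload.
    PartListing listing = s3.listParts(
        new ListPartsRequest(bucket, key, uploadId));
    for (PartSummary part : listing.getParts()) {
      System.out.println(part.getPartNumber() + " -> " + part.getETag());
    }
  }
}
{code}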



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-956) MultipartUpload: List Parts for a Multipart upload key

2019-01-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-956:

Attachment: HDDS-956.03.patch

> MultipartUpload: List Parts for a Multipart upload key
> --
>
> Key: HDDS-956
> URL: https://issues.apache.org/jira/browse/HDDS-956
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-956.00.patch, HDDS-956.01.patch, HDDS-956.02.patch, 
> HDDS-956.03.patch
>
>
> This Jira is to implement the backend to support the S3 API for listing the 
> parts of an object.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14244) hdfs++ doesn't add necessary libraries to dynamic library link

2019-01-30 Thread Owen O'Malley (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HDFS-14244:
-
Component/s: hdfs-client
 hdfs++

> hdfs++ doesn't add necessary libraries to dynamic library link
> --
>
> Key: HDFS-14244
> URL: https://issues.apache.org/jira/browse/HDFS-14244
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs++, hdfs-client
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Major
>
> When linking with shared libraries, the libhdfs++ cmake file doesn't link 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756356#comment-16756356
 ] 

Arpit Agarwal commented on HDDS-1031:
-

+1 pending Jenkins.

> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1031.00.patch
>
>
> This is related to RATIS-460.
> When the datanode is restarted after Ratis has taken a snapshot, we see the 
> stack trace below and the DN won't boot up. For more info, refer to 
> RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.join(HddsDatanodeService.java:270)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:127)
> {code}
>  



--
This message was sent by Atlassian 

[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756426#comment-16756426
 ] 

Íñigo Goiri commented on HDFS-14202:


Can we also clarify 21936966?
For the assert, can we also check for a particular number instead of <= 8000?

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1029:
-
Attachment: HDDS-1029.01.patch

> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1029.00.patch, HDDS-1029.01.patch
>
>
> Right now we check the container state, and we delete the container only if 
> it is not open.
> We need a way to delete containers which are open, so adding a force flag 
> will allow deleting a container without any state checks. (This is required 
> for deleting replicas when SCM detects over-replication; the container to 
> delete can be in the open state.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1031:
-
Attachment: HDDS-1031.00.patch

> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1031.00.patch
>
>
> This is related to RATIS-460.
> When the datanode is restarted after Ratis has taken a snapshot, we see the 
> stack trace below and the DN won't boot up. For more info, refer to 
> RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.join(HddsDatanodeService.java:270)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:127)
> {code}
>  



--
This message was sent by Atlassian JIRA

[jira] [Updated] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1031:
-
Status: Patch Available  (was: Open)

> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1031.00.patch
>
>
> This is related to RATIS-460.
> When the datanode is restarted after Ratis has taken a snapshot, we see the 
> stack trace below and the DN won't boot up. For more info, refer to 
> RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.join(HddsDatanodeService.java:270)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:127)
> {code}
>  



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HDDS-1030) Move auditparser robot tests under ozone basic

2019-01-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756322#comment-16756322
 ] 

Hudson commented on HDDS-1030:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15854 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15854/])
HDDS-1030. Move auditparser robot tests under ozone basic. Contributed (elek: 
rev 0e95ae402ce108c88cb66b3d8e18df1943b0ff33)
* (edit) hadoop-ozone/dist/src/main/smoketest/test.sh
* (delete) hadoop-ozone/dist/src/main/smoketest/auditparser/parser.robot
* (add) hadoop-ozone/dist/src/main/smoketest/basic/auditparser.robot


> Move auditparser robot tests under ozone basic
> --
>
> Key: HDDS-1030
> URL: https://issues.apache.org/jira/browse/HDDS-1030
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1030.00.patch
>
>
> Based on [review 
> comment|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16753848=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16753848]
>  from [~elek] in HDDS-1007, this Jira aims to move the audit parser robot 
> test to the basic tests folder so that it can use the ozone env.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1015) Cleanup snapshot repository settings

2019-01-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756335#comment-16756335
 ] 

Bharat Viswanadham edited comment on HDDS-1015 at 1/30/19 5:11 PM:
---

Yes [~elek].

This was added as part of HDDS-702. Even without those changes I am able to 
compile, and Jenkins also +1'ed. (Even before this patch we used a Ratis 
snapshot version, but we had not added the repository settings to pom.xml.) 
Now, with your question, I started wondering how this is working.

When I ran it and checked the output, it was able to download the Ratis 
artifacts from the snapshot repo. I think the Apache snapshot repo might be a 
public location from which Maven can download the jars, so there is no need 
to specify the repository. (But I am not able to find any information to 
confirm this.)

Downloading: 
https://repository.apache.org/snapshots/org/apache/ratis/ratis/0.4.0-a8c4ca0-SNAPSHOT/maven-metadata.xml
Downloaded: 
https://repository.apache.org/snapshots/org/apache/ratis/ratis/0.4.0-a8c4ca0-SNAPSHOT/maven-metadata.xml


was (Author: bharatviswa):
Yes [~elek].

This was added as part of HDDS-702. Even without those changes I am able to 
compile, and Jenkins also +1'ed. Now, with your question, I started wondering 
how this is working.

When I ran it and checked the output, it was able to download the Ratis 
artifacts from the snapshot repo. I think the Apache snapshot repo might be a 
public location from which Maven can download the jars, so there is no need 
to specify the repository. (But I am not able to find any information to 
confirm this.)

Downloading: 
https://repository.apache.org/snapshots/org/apache/ratis/ratis/0.4.0-a8c4ca0-SNAPSHOT/maven-metadata.xml
Downloaded: 
https://repository.apache.org/snapshots/org/apache/ratis/ratis/0.4.0-a8c4ca0-SNAPSHOT/maven-metadata.xml

> Cleanup snapshot repository settings
> 
>
> Key: HDDS-1015
> URL: https://issues.apache.org/jira/browse/HDDS-1015
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1015.00.patch
>
>
> Now we can clean up the snapshot repository settings from 
> hadoop-hdds/pom.xml and hadoop-ozone/pom.xml.
> As we have now moved our dependencies from Hadoop 3.2.1-SNAPSHOT to 3.2.0 as 
> part of HDDS-993, we don't require them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-549) Add support for key rename in Ozone Shell

2019-01-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756365#comment-16756365
 ] 

Bharat Viswanadham commented on HDDS-549:
-

Hi [~adoroszlai],

Thank you for the improvement. Overall the patch LGTM.

Can we also add some tests for this new command in TestOzoneShell?

> Add support for key rename in Ozone Shell
> -
>
> Key: HDDS-549
> URL: https://issues.apache.org/jira/browse/HDDS-549
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Namit Maheshwari
>Assignee: Doroszlai, Attila
>Priority: Major
> Attachments: HDDS-549.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1012) Add Default CertificateClient implementation

2019-01-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756409#comment-16756409
 ] 

Ajay Kumar commented on HDDS-1012:
--

[~anu] thanks for the review.

{quote}1. BlockToken.java:92: This code should not throw if this needs to work. 
Not part of this patch.
2. BlockToken.java: Line 99, if we allow this function to throw IOException, 
then we can remove all kinds of try catch completely. In fact, this function 
will have no code modification AFAIK.{quote}
Reverted all changes.
{quote}3. DefaultCertificateClient.java#init – Think that bit based coding is 
very fragile and hard to understand. If you want to create a mapping like this, 
please add an enum and then handle that in the switch case. It is hard to read, 
case 0... especially when the good and bad cases are mixed all over in the bit 
map. Please consider rewriting; one suggestion: break them into functions{quote}
Done. Enums are more readable, but I refrained from using them in the last 
patch as they might not convey the complete picture. With int-based cases one 
can directly see the corresponding case in the truth table.
{quote}4. init: I don't think this code path is possible at all.
{quote}
I think you mean the null check and the corresponding throw; corrected in the 
latest patch.
{quote}5. It is very hard to understand how this code will be used if there is 
no use of this code in the patch. Can we have a patch which focuses on a use 
case, so that we can understand the security issues in the context? I am not 
able to see/understand how the init function will be used.{quote}
Not sure if I understand this suggestion completely. Like [HDDS-955], this 
patch adds the default implementation. The init function will be called by SCM 
certificate clients (datanodes and OzoneManager) to initialize and bootstrap.
{quote}6. KeyCodec.java: My editor complains that it is duplicate code. Can you 
please take a look?{quote}
Two new APIs are added to store the private and public keys separately; the 
existing one stores key pairs.
{quote}7. OzoneConfigKeys.java: I think it is a security hole to create default 
passwords in code. Most users will not know about this, and all you need is one 
bad guy to misuse it. We should just store normal .pem files in directories 
protected by the file system. This hardcoded default password adds no extra 
level of security over the file system permissions. Once we move to that model, 
the changes in SecurityConfig are superfluous too.{quote}
Agree. I had a TODO to store it in KMS or a .pem file as you suggested, but 
since the current patch doesn't implement KeyStore/CertStore-related 
functionality, I have removed it from the current patch.
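
To illustrate the trade-off being discussed, a minimal sketch of the 
int/truth-table style (flag and case names are hypothetical, not the actual 
DefaultCertificateClient code):

{code:java}
/** Sketch only: each init condition is one bit, so a case label maps
 *  directly to one row of the truth table. */
class InitCaseDemo {
  static String initCase(boolean hasPrivateKey, boolean hasPublicKey,
      boolean hasCert) {
    int state = (hasPrivateKey ? 1 : 0)
              | (hasPublicKey ? 1 << 1 : 0)
              | (hasCert ? 1 << 2 : 0);
    switch (state) {
      case 0b000: return "fresh start: generate key pair and CSR";
      case 0b011: return "keys exist, no certificate: request one";
      case 0b111: return "fully initialized";
      default:    return "inconsistent state: fail fast";
    }
  }
}
{code}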

> Add Default CertificateClient implementation
> 
>
> Key: HDDS-1012
> URL: https://issues.apache.org/jira/browse/HDDS-1012
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1012.01.patch, HDDS-1012.02.patch, 
> HDDS-1012.03.patch
>
>
> Add Default CertificateClient implementation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14172) Return a default SectionName to avoid NPE

2019-01-30 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756173#comment-16756173
 ] 

Adam Antal commented on HDFS-14172:
---

Thanks for the patch, [~water].
I did not realize the comparator was so broken. This could indeed be an 
incompatible change, since we would be modifying the order of the sections. I 
know that it is bad, but it works consistently, and it would be dangerous to 
touch that part. So I'd conclude not to modify 
{{FSImageFormatProtobuf.SectionName#fromString()}}.

Otherwise I think it is not an incompatible change, since as trunk stands we 
can't handle unknown sections anyway; as the issue states, an NPE is thrown. 
But if we detected the null (= unknown section) and threw a descriptive 
IOException in {{FSImageFormatProtobuf.Loader#loadInternal()}}, everybody 
would be satisfied. 
The thing is that the code now throws an NPE when an unknown section comes 
across. We should not paper over that: although the NPE itself is not 
intended, the error, and aborting the program, is.

> Return a default SectionName to avoid NPE
> -
>
> Key: HDFS-14172
> URL: https://issues.apache.org/jira/browse/HDFS-14172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HADOOP-14172.000.patch, HADOOP-14172.001.patch
>
>
> In FSImageFormatProtobuf.SectionName#fromString(), as follows:
> {code:java}
> public static SectionName fromString(String name) {
>   for (SectionName n : values) {
> if (n.name.equals(name))
>   return n;
>   }
>   return null;
> }
> {code}
> When the code meets an unknown section in the fsimage, the function will 
> return null. Callers always operate on the return value with a "switch" 
> clause, like FSImageFormatProtobuf.Loader#loadInternal(), as:
> {code:java}
> switch (SectionName.fromString(n))
> {code}
> An NPE will be thrown here.
> For self-protection, shall we add a default section name to the SectionName 
> enum, like "UNKNOWN", to steer clear of the NPE?
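
For illustration, a minimal sketch of the suggested default (assuming an 
UNKNOWN member is added to the enum; callers' switch statements would then 
hit their default branch instead of throwing an NPE):

{code:java}
public static SectionName fromString(String name) {
  for (SectionName n : values) {
    if (n.name.equals(name))
      return n;
  }
  // Assumed new enum member: never return null for unknown sections.
  return UNKNOWN;
}
{code}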



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756250#comment-16756250
 ] 

Hadoop QA commented on HDFS-14202:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14202 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956895/HDFS-14202.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 68932cc9fbb0 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 14441cc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26095/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26095/testReport/ |
| Max. process+thread count | 3143 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDDS-1030) Move auditparser robot tests under ozone basic

2019-01-30 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756313#comment-16756313
 ] 

Dinesh Chitlangia commented on HDDS-1030:
-

Thanks [~elek] for highlighting the issue, review and commit! 

> Move auditparser robot tests under ozone basic
> --
>
> Key: HDDS-1030
> URL: https://issues.apache.org/jira/browse/HDDS-1030
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1030.00.patch
>
>
> Based on [review 
> comment|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16753848=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16753848]
>  from [~elek] in HDDS-1007, this Jira aims to move the audit parser robot 
> test to the basic tests folder so that it can use the ozone env.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-30 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756360#comment-16756360
 ] 

Ranith Sardar commented on HDFS-14202:
--

[~elgoiri], I have updated the patch. Please check once.

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14230) RBF: Throw RetriableException instead of IOException when no namenodes available

2019-01-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756387#comment-16756387
 ] 

Íñigo Goiri commented on HDFS-14230:


Thanks [~ferhui] for switching to retriable.
Some comments:
* Make NoNamenodesAvailableException have a constructor which only takes the 
nsId and the original ioe. That way you can do {{new 
NoNamenodesAvailableException(nsId, ioe);}} and generate the message inside 
(see the sketch after this list).
* I think you could then just create the {{RetriableException}} from it: {{new 
RetriableException(ioe);}}
* Add a counter to track this failure.
* Add a javadoc to the new test explaining that you are simulating a failover.
* I think the mini DFS cluster already offers ways to do the transition to 
active/standby.
* Can you make the code that you are using to switch the NN to active into a 
function? Then you can probably use some lambda fanciness to start the 
function in a less verbose way. BTW, I'm not a big fan of sleeping 10 seconds; 
we may want to do something smarter. Actually, you can try with all standby, 
check the counters, put one as active, and check that it succeeds.
* Check for the new counter added according to my previous comment.
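
A quick sketch of the suggested exception shape (following the naming in the 
comments above; not the committed patch):

{code:java}
import java.io.IOException;

/** Raised when every namenode of a nameservice is unavailable. */
class NoNamenodesAvailableException extends IOException {
  NoNamenodesAvailableException(String nsId, IOException ioe) {
    // Message is generated inside, as suggested above.
    super("No namenode available under nameservice " + nsId, ioe);
  }
}
{code}

At the retry site, the router would then wrap it, e.g. {{throw new 
RetriableException(new NoNamenodesAvailableException(nsId, ioe));}}, so that 
DFS clients retry instead of failing the job.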

> RBF: Throw RetriableException instead of IOException when no namenodes 
> available
> 
>
> Key: HDFS-14230
> URL: https://issues.apache.org/jira/browse/HDFS-14230
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14230-HDFS-13891.001.patch, 
> HDFS-14230-HDFS-13891.002.patch, HDFS-14230-HDFS-13891.003.patch
>
>
> Failover usually happens when upgrading namenodes, and there are no active 
> namenodes for some seconds; accessing HDFS through the router fails at that 
> moment. This can make jobs fail or hang. Some Hive job logs are as follows:
> {code:java}
> 2019-01-03 16:12:08,337 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 133.33 sec
> MapReduce Total cumulative CPU time: 2 minutes 13 seconds 330 msec
> Ended Job = job_1542178952162_24411913
> Launching Job 4 out of 6
> Exception in thread "Thread-86" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode 
> available under nameservice Cluster3
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:328)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:488)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:495)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:385)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:760)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1804)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1338)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3925)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1014)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> 

[jira] [Created] (HDDS-1034) TestOzoneRpcClient failure

2019-01-30 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1034:


 Summary: TestOzoneRpcClient failure
 Key: HDDS-1034
 URL: https://issues.apache.org/jira/browse/HDDS-1034
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Sometimes on Jenkins runs we see the test testPutKey failing with the error 
below.

See the Jenkins run below: not only this test, but other tests like 
testReadKeyWithCorruptedData and 
testMultipartUploadWithPartsMisMatchWithIncorrectPartName all fail with the 
same error when writing a key.

https://builds.apache.org/job/PreCommit-HDDS-Build/2139/testReport/
{code:java}
java.io.IOException: Unexpected Storage Container Exception: 
java.util.concurrent.ExecutionException: 
org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
exception at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:622)
 at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:464)
 at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:480)
 at 
org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:137)
 at 
org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:488)
 at 
org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:321)
 at 
org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:258)
 at 
org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
 at java.io.OutputStream.write(OutputStream.java:75) at 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testPutKey(TestOzoneRpcClientAbstract.java:557)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
 at org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
 at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) 
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
Caused by: java.util.concurrent.ExecutionException: 
org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
exception at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandAsync(XceiverClientGrpc.java:270)
 at 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:324)
 at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:602)
 ... 38 more Caused by: 
org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
exception at 
org.apache.ratis.thirdparty.io.grpc.Status.asRuntimeException(Status.java:526) 
at 

[jira] [Commented] (HDDS-1016) Allow marking containers as unhealthy

2019-01-30 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756470#comment-16756470
 ] 

Arpit Agarwal commented on HDDS-1016:
-

All three failed unit tests passed locally. I will commit the v03 patch 
shortly. Thanks again [~linyiqun] for all the code reviews!

> Allow marking containers as unhealthy
> -
>
> Key: HDDS-1016
> URL: https://issues.apache.org/jira/browse/HDDS-1016
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-1016.01.patch, HDDS-1016.02.patch, 
> HDDS-1016.03.patch
>
>
> Containers support an unhealthy state but currently the Container interface 
> on the DataNodes does not expose a way to mark containers as unhealthy.
> -We can also make a few locking improvements to the KeyValueContainer class.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1034) TestOzoneRpcClient and TestOzoneRpcClientWithRatis failure

2019-01-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1034:
-
Summary: TestOzoneRpcClient and TestOzoneRpcClientWithRatis failure  (was: 
TestOzoneRpcClient failure)

> TestOzoneRpcClient and TestOzoneRpcClientWithRatis failure
> --
>
> Key: HDDS-1034
> URL: https://issues.apache.org/jira/browse/HDDS-1034
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Sometimes on Jenkins runs, we see the test testPutKey failing with the below error.
> In the Jenkins run below, not only this test but also other tests like 
> testReadKeyWithCorruptedData and 
> testMultipartUploadWithPartsMisMatchWithIncorrectPartName fail with the 
> same error when writing a key.
> https://builds.apache.org/job/PreCommit-HDDS-Build/2139/testReport/
> {code:java}
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:622)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:464)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:480)
>  at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:137)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:488)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:321)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:258)
>  at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>  at java.io.OutputStream.write(OutputStream.java:75) at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testPutKey(TestOzoneRpcClientAbstract.java:557)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>  at org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at 
> 
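
For context, testPutKey drives the basic put-key path visible in the trace:
create a volume and bucket, open a key output stream, write, and close. Below
is a minimal sketch of that sequence against the Ozone RPC client; the volume,
bucket, and key names are made up and the replication settings are
illustrative, so treat this as a sketch rather than the test code itself. The
UNAVAILABLE gRPC error in the report surfaces from write()/close() on the
returned stream, inside BlockOutputStream.writeChunkToContainer.

{code:java}
import static java.nio.charset.StandardCharsets.UTF_8;

import java.util.HashMap;

import org.apache.hadoop.hdds.client.ReplicationFactor;
import org.apache.hadoop.hdds.client.ReplicationType;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.ObjectStore;
import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;
import org.apache.hadoop.ozone.client.OzoneVolume;
import org.apache.hadoop.ozone.client.io.OzoneOutputStream;

public class PutKeySketch {
  public static void main(String[] args) throws Exception {
    OzoneClient client = OzoneClientFactory.getRpcClient(new OzoneConfiguration());
    ObjectStore store = client.getObjectStore();

    store.createVolume("vol1");                    // names are hypothetical
    OzoneVolume volume = store.getVolume("vol1");
    volume.createBucket("bucket1");
    OzoneBucket bucket = volume.getBucket("bucket1");

    byte[] value = "sample value".getBytes(UTF_8);
    // createKey(name, size, type, factor, metadata) returns an
    // OzoneOutputStream; close() flushes the final chunk to the container.
    try (OzoneOutputStream out = bucket.createKey("key1", value.length,
        ReplicationType.RATIS, ReplicationFactor.THREE, new HashMap<>())) {
      out.write(value);
    }
    client.close();
  }
}
{code}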

[jira] [Updated] (HDDS-1034) TestOzoneRpcClient failure

2019-01-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1034:
-
Target Version/s: 0.4.0

> TestOzoneRpcClient failure
> --
>
> Key: HDDS-1034
> URL: https://issues.apache.org/jira/browse/HDDS-1034
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Sometimes on Jenkins runs, we see the test testPutKey failing with the below error.
> In the Jenkins run below, not only this test but also other tests like 
> testReadKeyWithCorruptedData and 
> testMultipartUploadWithPartsMisMatchWithIncorrectPartName fail with the 
> same error when writing a key.
> https://builds.apache.org/job/PreCommit-HDDS-Build/2139/testReport/
> {code:java}
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:622)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:464)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:480)
>  at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:137)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:488)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:321)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:258)
>  at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>  at java.io.OutputStream.write(OutputStream.java:75) at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testPutKey(TestOzoneRpcClientAbstract.java:557)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>  at org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at 
> org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandAsync(XceiverClientGrpc.java:270)
>  at 
> 

[jira] [Commented] (HDDS-1012) Add Default CertificateClient implementation

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756443#comment-16756443
 ] 

Hadoop QA commented on HDDS-1012:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m  
6s{color} | {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} root: The patch generated 4 new + 0 unchanged - 
1 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 20s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 55s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1012 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956930/HDDS-1012.03.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 883e0ba05d24 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 0e95ae4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2143/artifact/out/patch-mvninstall-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2143/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2143/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2143/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2143/testReport/ |
| Max. process+thread count | 103 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/integration-test U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2143/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add Default CertificateClient implementation
> 
>
> Key: HDDS-1012
> URL: https://issues.apache.org/jira/browse/HDDS-1012
> 

[jira] [Commented] (HDDS-549) Add support for key rename in Ozone Shell

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756511#comment-16756511
 ] 

Hadoop QA commented on HDDS-549:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  4s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
55s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.s3.endpoint.TestRootList |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956941/HDDS-549.003.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 2bfe9ce2751f 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 7456fc9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2145/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2145/testReport/ |
| Max. process+thread count | 190 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/docs hadoop-ozone/dist hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2145/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support for key rename in Ozone Shell
> -
>
> Key: HDDS-549
> URL: https://issues.apache.org/jira/browse/HDDS-549
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Namit Maheshwari
>Assignee: Doroszlai, Attila
>Priority: Major
> Attachments: HDDS-549.001.patch, HDDS-549.002.patch, 
> HDDS-549.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HDDS-1035) Intermittent TestRootList failure

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756605#comment-16756605
 ] 

Hadoop QA commented on HDDS-1035:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 11s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
34s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
|   | hadoop.ozone.container.TestContainerReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1035 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956958/HDDS-1035.001.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 5e7f655561cc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / c354195 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2147/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2147/testReport/ |
| Max. process+thread count | 1141 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2147/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Intermittent TestRootList failure
> -
>
> Key: HDDS-1035
> URL: https://issues.apache.org/jira/browse/HDDS-1035
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
> Attachments: HDDS-1035.001.patch
>
>
> {{TestRootList}} fails intermittently in pre-commit check:
> 

[jira] [Updated] (HDDS-549) Add support for key rename in Ozone Shell

2019-01-30 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-549:
---
Attachment: HDDS-549.002.patch

> Add support for key rename in Ozone Shell
> -
>
> Key: HDDS-549
> URL: https://issues.apache.org/jira/browse/HDDS-549
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Namit Maheshwari
>Assignee: Doroszlai, Attila
>Priority: Major
> Attachments: HDDS-549.001.patch, HDDS-549.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756458#comment-16756458
 ] 

Bharat Viswanadham commented on HDDS-1031:
--

Hi [~arpitagarwal]

I missed this comment and committed the patch.

The same error is seen in all the other test case failures too. Not sure why it 
appears only sometimes; we need to dig in to find the root cause. I will run the 
test locally to confirm whether it is passing or not.

 

> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1031.00.patch
>
>
> This is related to RATIS-460.
> When a datanode is restarted after Ratis has taken a snapshot, we see the below 
> stack trace and the DN won't boot up. For more info, refer to RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.<init>(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.<init>(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> 
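
To make the failure mode concrete: on restart, the configuration restored from
the snapshot is already registered at a high index (72856 in the trace above),
but log segments are then replayed starting from index 0, which trips a
strictly-increasing-index check. The snippet below is a plain-Java illustration
of that invariant, not Ratis code:

{code:java}
import java.util.TreeMap;

/**
 * Illustration only (not Ratis code): the configuration restored from the
 * snapshot already sits at index 72856, so replaying a log segment from
 * index 0 violates an "indices must be strictly increasing" check,
 * mirroring "lastEntry.index >= logIndex = 0" in the trace above.
 */
public class ConfigReplayDemo {
  private final TreeMap<Long, String> configurations = new TreeMap<>();

  void addConfiguration(long logIndex, String conf) {
    if (!configurations.isEmpty() && configurations.lastKey() >= logIndex) {
      throw new IllegalStateException(
          "lastEntry.index >= logIndex = " + logIndex);
    }
    configurations.put(logIndex, conf);
  }

  public static void main(String[] args) {
    ConfigReplayDemo state = new ConfigReplayDemo();
    state.addConfiguration(72856, "conf restored from snapshot");
    state.addConfiguration(0, "conf replayed from log segment"); // throws
  }
}
{code}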

[jira] [Comment Edited] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756454#comment-16756454
 ] 

Arpit Agarwal edited comment on HDDS-1031 at 1/30/19 7:19 PM:
--

TestOzoneRpcClientWithRatis - is this test failure related to the patch?


was (Author: arpitagarwal):
TestOzoneRpcClientWithRatis - is this teast failure related to the patch?

> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1031.00.patch
>
>
> This is related to RATIS-460.
> When a datanode is restarted after Ratis has taken a snapshot, we see the below 
> stack trace and the DN won't boot up. For more info, refer to RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.<init>(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.<init>(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
>         at 
> 

[jira] [Commented] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756481#comment-16756481
 ] 

Hudson commented on HDDS-1031:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15855 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15855/])
HDDS-1031. Update ratis version to fix a DN restart Bug. Contributed by 
(bharat: rev 7456fc99ee01562b92d36e56b93081a0d7af6514)
* (edit) hadoop-ozone/pom.xml
* (edit) hadoop-hdds/pom.xml


> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1031.00.patch, Screen Shot 2019-01-30 at 11.22.41 
> AM.png
>
>
> This is related to RATIS-460.
> When a datanode is restarted after Ratis has taken a snapshot, we see the below 
> stack trace and the DN won't boot up. For more info, refer to RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.<init>(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.<init>(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         

[jira] [Updated] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1029:
-
Attachment: HDDS-1029.02.patch

> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1029.00.patch, HDDS-1029.01.patch, 
> HDDS-1029.02.patch
>
>
> Right now, we check that the container state is not open, and only then delete 
> the container.
> We need a way to delete containers which are open, so adding a force flag 
> will allow deleting a container without any state checks. (This is required 
> for deleting replicas when SCM detects a container is over-replicated, and the 
> container to delete can be in the open state.)
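
A minimal sketch of the intended check, with hypothetical names (the enum,
method, and helper below are illustrative, not the code in the attached
patches):

{code:java}
import java.io.IOException;

public class ForceDeleteSketch {
  enum ContainerState { OPEN, CLOSING, CLOSED, UNHEALTHY }

  void deleteContainer(long containerId, ContainerState state, boolean force)
      throws IOException {
    if (!force && state == ContainerState.OPEN) {
      // Default behaviour: refuse to delete an open container.
      throw new IOException("Container " + containerId
          + " is open; it can only be deleted with the force flag");
    }
    // With force == true the state check is skipped, so SCM can remove an
    // over-replicated replica even while that replica is still open.
    removeContainerData(containerId);
  }

  private void removeContainerData(long containerId) {
    // Hypothetical helper standing in for the actual on-disk cleanup.
  }
}
{code}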



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1016) Allow marking containers as unhealthy

2019-01-30 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1016:

  Resolution: Fixed
   Fix Version/s: 0.4.0
Target Version/s:   (was: 0.4.0)
  Status: Resolved  (was: Patch Available)

> Allow marking containers as unhealthy
> -
>
> Key: HDDS-1016
> URL: https://issues.apache.org/jira/browse/HDDS-1016
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1016.01.patch, HDDS-1016.02.patch, 
> HDDS-1016.03.patch
>
>
> Containers support an unhealthy state but currently the Container interface 
> on the DataNodes does not expose a way to mark containers as unhealthy.
> -We can also make a few locking improvements to the KeyValueContainer class.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756433#comment-16756433
 ] 

Íñigo Goiri commented on HDFS-14118:


[~fengnanli] I have a few small nits that are easier to address in a patch than in a 
comment review.
Do you mind if I upload a patch with my proposal?

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: DNS testing log, HDFS-14118.001.patch, 
> HDFS-14118.002.patch, HDFS-14118.003.patch, HDFS-14118.004.patch, 
> HDFS-14118.005.patch, HDFS-14118.006.patch, HDFS-14118.007.patch, 
> HDFS-14118.008.patch, HDFS-14118.009.patch, HDFS-14118.010.patch, 
> HDFS-14118.patch
>
>
> Clients will need to know about routers to talk to the HDFS cluster 
> (obviously), and adding or removing routers would otherwise require a change 
> on every client, which is a painful process.
> DNS can be used here to resolve a single domain name that clients know into 
> the list of routers in the current config. However, DNS alone cannot restrict 
> resolution to only the healthy routers based on certain health thresholds.
> There are a few ways this can be solved. One way is to have a separate script 
> regularly check the status of each router and update the DNS records when a 
> router fails the health thresholds; security would need to be considered 
> carefully for this approach. Another way is to have the client do its normal 
> connecting/failover after it gets the list of routers, which requires 
> changing the current failover proxy provider.
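
As a rough illustration of the DNS expansion step (pure JDK; the domain name
below is made up), a client could resolve the single configured name into the
current router set and then run its usual failover logic over that list:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class RouterDnsResolveDemo {
  public static void main(String[] args) throws UnknownHostException {
    // The one name clients need to know; each A record behind it is a router.
    String routerDomain = "routers.example.com";
    for (InetAddress addr : InetAddress.getAllByName(routerDomain)) {
      System.out.println(addr.getHostAddress());
    }
  }
}
{code}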



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756454#comment-16756454
 ] 

Arpit Agarwal commented on HDDS-1031:
-

TestOzoneRpcClientWithRatis - is this test failure related to the patch?

> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1031.00.patch
>
>
> This is related to RATIS-460.
> When a datanode is restarted after Ratis has taken a snapshot, we see the below 
> stack trace and the DN won't boot up. For more info, refer to RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.<init>(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.<init>(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.join(HddsDatanodeService.java:270)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:127)

[jira] [Commented] (HDDS-1016) Allow marking containers as unhealthy

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756444#comment-16756444
 ] 

Hadoop QA commented on HDDS-1016:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 27s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
25s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
|   | hadoop.ozone.container.TestContainerReplication |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1016 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956928/HDDS-1016.03.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 91f0a80f0bca 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 0e95ae4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2141/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2141/testReport/ |
| Max. process+thread count | 1136 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2141/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow marking containers as unhealthy
> -
>
> Key: HDDS-1016
> URL: https://issues.apache.org/jira/browse/HDDS-1016
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-1016.01.patch, 

[jira] [Updated] (HDDS-549) Add support for key rename in Ozone Shell

2019-01-30 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-549:
---
Attachment: HDDS-549.003.patch

> Add support for key rename in Ozone Shell
> -
>
> Key: HDDS-549
> URL: https://issues.apache.org/jira/browse/HDDS-549
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Namit Maheshwari
>Assignee: Doroszlai, Attila
>Priority: Major
> Attachments: HDDS-549.001.patch, HDDS-549.002.patch, 
> HDDS-549.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756467#comment-16756467
 ] 

Bharat Viswanadham commented on HDDS-1031:
--

Ran the tests locally; all of them are passing.

!Screen Shot 2019-01-30 at 11.22.41 AM.png!

> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1031.00.patch, Screen Shot 2019-01-30 at 11.22.41 
> AM.png
>
>
> This is related to RATIS-460.
> When a datanode is restarted after Ratis has taken a snapshot, we see the below 
> stack trace and the DN won't boot up. For more info, refer to RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.<init>(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.<init>(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.join(HddsDatanodeService.java:270)
>         at 
> 

[jira] [Updated] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1031:
-
Fix Version/s: 0.4.0

> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1031.00.patch, Screen Shot 2019-01-30 at 11.22.41 
> AM.png
>
>
> This is related to RATIS-460.
> When a datanode is restarted after Ratis has taken a snapshot, we see the 
> stack trace below and the DN won't boot up. For more info, refer to RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.<init>(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.<init>(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.join(HddsDatanodeService.java:270)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:127)
> 

[jira] [Updated] (HDDS-1035) Intermittent TestRootList failure

2019-01-30 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1035:

Attachment: HDDS-1035.001.patch

> Intermittent TestRootList failure
> -
>
> Key: HDDS-1035
> URL: https://issues.apache.org/jira/browse/HDDS-1035
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
> Attachments: HDDS-1035.001.patch
>
>
> {{TestRootList}} fails intermittently in pre-commit check:
> {code:title=https://builds.apache.org/job/PreCommit-HDDS-Build/2145/artifact/out/patch-unit-hadoop-ozone.txt}
> [INFO] Running org.apache.hadoop.ozone.s3.endpoint.TestRootList
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.179 
> s <<< FAILURE! - in org.apache.hadoop.ozone.s3.endpoint.TestRootList
> [ERROR] testListBucket(org.apache.hadoop.ozone.s3.endpoint.TestRootList)  
> Time elapsed: 0.106 s  <<< ERROR!
> java.io.IOException: BUCKET_ALREADY_EXISTS
>   at 
> org.apache.hadoop.ozone.client.ObjectStoreStub.createS3Bucket(ObjectStoreStub.java:121)
>   at 
> org.apache.hadoop.ozone.s3.endpoint.TestRootList.testListBucket(TestRootList.java:67)
> {code}
> Other examples: 
> [1|https://issues.apache.org/jira/browse/HDDS-573?focusedCommentId=16659316=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16659316]
>  
> [2|https://issues.apache.org/jira/browse/HDDS-955?focusedCommentId=16733532=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16733532]
>  
> [3|https://issues.apache.org/jira/browse/HDDS-1024?focusedCommentId=16754446=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16754446]
> ...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756560#comment-16756560
 ] 

Hadoop QA commented on HDDS-1029:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 33s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
25s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.TestContainerReplication |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1029 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956950/HDDS-1029.02.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 347ab2bee7ac 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / c354195 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2146/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2146/testReport/ |
| Max. process+thread count | 1136 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2146/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: 

[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756564#comment-16756564
 ] 

Hadoop QA commented on HDFS-14084:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 25s{color} | {color:orange} root: The patch generated 2 new + 132 unchanged 
- 2 fixed = 134 total (was 134) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}100m  
6s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}219m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14084 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956922/HDFS-14084.017.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a90fbf38d70b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0e95ae4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Comment Edited] (HDDS-1012) Add Default CertificateClient implementation

2019-01-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756409#comment-16756409
 ] 

Ajay Kumar edited comment on HDDS-1012 at 1/30/19 6:56 PM:
---

[~anu], thanks for the review.

{quote}1. BlockToken.java:92: This code should not throw if this needs to work. 
Not part of this patch.
2. BlockToken.java: Line 99, if we allow this function to throw IOException, 
then we can remove all kinds of try catch completely. In fact, this function 
will have no code modification AFAIK.{quote}
Reverted changes.
{quote}3. DefaultCertificateClient.java#init – Think that bit-based coding is 
very fragile and hard to understand. If you want to create a mapping like this, 
please add an enum and then handle that in the switch case. It is hard to read 
case 0... especially when the good and bad cases are mixed all over in the bit 
map. Please consider rewriting; some suggestions: break them into functions{quote}
Done. Enums are more readable, but I refrained from them in the last patch as 
they might not convey the complete picture. With int-based cases one can 
directly see the corresponding row in the truth table.
{quote}4. init: I don't think this code path is possible at all.
{quote}
I think you mean the null check and the corresponding throw; corrected in the 
latest patch.
{quote}5. It is very hard to understand how this code will be used if there is 
no use of this code in the patch. Can we have a patch, which focuses on a use 
case, so that we can understand the security issues in the context? I am not 
able to see/understand how the init function will be used?{quote}
Not sure if I understand this suggestion completely. Like [HDDS-955], this 
patch adds the default implementation. The init function will be called by SCM 
certificate clients (datanodes and OzoneManager) to initialize and bootstrap. 
{quote}6. KeyCodec.java: My Editor complains that it is duplicate code, Can you 
please take a look?{quote}
Two new APIs are added to store the private and public key separately; the 
existing one stores key pairs.
{quote}7. OzoneConfigKeys.java: I think it is a security hole to create default 
passwords in code. Most users will not know about this, and all it takes is one 
bad guy to misuse this. We should just store normal .pem files in directories 
protected by the file system. This hardcoded default password adds no extra 
level of security over the file system permissions. Once we move to that model, 
the changes in SecurityConfig are superfluous too.{quote}
Agree, I had a TODO to store it in KMS or a .pem file as you suggested. But 
since the current patch doesn't implement KeyStore/CertStore related 
functionality, I have removed it from the current patch.
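
(For illustration only – a minimal, self-contained sketch of the truth-table 
style init described above; every name here is hypothetical, not the actual 
patch:)
{code:java}
// Hypothetical sketch: encode the three presence checks as bits so each
// switch case corresponds to one row of the truth table.
public class CertClientInitSketch {
  private final boolean hasPrivateKey;
  private final boolean hasPublicKey;
  private final boolean hasCertificate;

  public CertClientInitSketch(boolean priv, boolean pub, boolean cert) {
    this.hasPrivateKey = priv;
    this.hasPublicKey = pub;
    this.hasCertificate = cert;
  }

  public void init() {
    int state = (hasPrivateKey ? 1 : 0)
        | (hasPublicKey ? 2 : 0)
        | (hasCertificate ? 4 : 0);
    switch (state) {
      case 0:  // nothing on disk: bootstrap keys and request a certificate
        System.out.println("bootstrap: generate key pair, get certificate");
        break;
      case 7:  // everything present: verify the artifacts agree
        System.out.println("all present: verify and continue");
        break;
      default: // partial state: fail loudly rather than guess
        throw new IllegalStateException("Inconsistent key/cert state: " + state);
    }
  }

  public static void main(String[] args) {
    new CertClientInitSketch(true, true, true).init();
  }
}
{code}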


was (Author: ajayydv):
[~anu], thanks for the review.

{quote}1. BlockToken.java:92: This code should not throw if this needs to work. 
Not part of this patch.
2. BlockToken.java: Line 99, if we allow this function to throw IOException, 
then we can remove all kinds of try catch completely. In fact, this function 
will have no code modification AFAIK.{quote}
Reverted all changes.
{quote}3. DefaultCertificateClient.java#init – Think that bit-based coding is 
very fragile and hard to understand. If you want to create a mapping like this, 
please add an enum and then handle that in the switch case. It is hard to read 
case 0... especially when the good and bad cases are mixed all over in the bit 
map. Please consider rewriting; some suggestions: break them into functions{quote}
Done. Enums are more readable, but I refrained from them in the last patch as 
they might not convey the complete picture. With int-based cases one can 
directly see the corresponding row in the truth table.
{quote}4. init: I don't think this code path is possible at all.
{quote}
I think you mean the null check and the corresponding throw; corrected in the 
latest patch.
{quote}5. It is very hard to understand how this code will be used if there is 
no use of this code in the patch. Can we have a patch, which focuses on a use 
case, so that we can understand the security issues in the context? I am not 
able to see/understand how the init function will be used?{quote}
Not sure if I understand this suggestion completely. Like [HDDS-955], this 
patch adds the default implementation. The init function will be called by SCM 
certificate clients (datanodes and OzoneManager) to initialize and bootstrap. 
{quote}6. KeyCodec.java: My Editor complains that it is duplicate code, Can you 
please take a look?{quote}
Two new APIs are added to store the private and public key separately; the 
existing one stores key pairs.
{quote}7. OzoneConfigKeys.java: I think it is a security hole to create default 
passwords in code. Most users will not know about this, and all it takes is one 
bad guy to misuse this. We should just store normal .pem files in directories 
protected by the file system. This hardcoded default password adds no extra 
level of security over the file system permissions. Once we 

[jira] [Commented] (HDDS-1016) Allow marking containers as unhealthy

2019-01-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756509#comment-16756509
 ] 

Hudson commented on HDDS-1016:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15856 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15856/])
HDDS-1016. Allow marking containers as unhealthy. Contributed by Arpit (arp: 
rev c35419579b5c5b315c5b62d8b89149924416b480)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (add) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerMarkUnhealthy.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
* (add) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueHandlerWithUnhealthyContainer.java


> Allow marking containers as unhealthy
> -
>
> Key: HDDS-1016
> URL: https://issues.apache.org/jira/browse/HDDS-1016
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1016.01.patch, HDDS-1016.02.patch, 
> HDDS-1016.03.patch
>
>
> Containers support an unhealthy state, but currently the Container interface 
> on the DataNodes does not expose a way to mark containers as unhealthy.
> -We can also make a few locking improvements to the KeyValueContainer class.-
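>
> (Illustrative only – a rough sketch of what exposing such a transition could 
> look like; the class and method names below are hypothetical, not the 
> committed API:)
> {code:java}
> // Hypothetical sketch: a container state that can transition to UNHEALTHY
> // under a write lock. All names are illustrative.
> import java.util.concurrent.locks.ReentrantReadWriteLock;
>
> public class ContainerSketch {
>   public enum State { OPEN, CLOSED, UNHEALTHY }
>
>   private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
>   private State state = State.OPEN;
>
>   /** Mark the container unhealthy; safe to call concurrently. */
>   public void markUnhealthy() {
>     lock.writeLock().lock();
>     try {
>       state = State.UNHEALTHY;
>     } finally {
>       lock.writeLock().unlock();
>     }
>   }
>
>   public State getState() {
>     lock.readLock().lock();
>     try {
>       return state;
>     } finally {
>       lock.readLock().unlock();
>     }
>   }
> }
> {code}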



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756496#comment-16756496
 ] 

Hadoop QA commented on HDDS-1029:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 
0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 42s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
7s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1029 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956929/HDDS-1029.01.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 9e86883328fc 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 0e95ae4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2142/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2142/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2142/testReport/ |
| Max. process+thread count | 1114 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2142/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>   
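>
> (Sketch only – one way a force flag might be threaded through a delete 
> command; the class shape below is hypothetical, not the actual patch:)
> {code:java}
> // Hypothetical sketch: a delete-container command carrying a force flag.
> // Without force, the handler refuses to delete a container that is open.
> public class DeleteContainerCommandSketch {
>   private final long containerId;
>   private final boolean force;
>
>   public DeleteContainerCommandSketch(long containerId, boolean force) {
>     this.containerId = containerId;
>     this.force = force;
>   }
>
>   public void execute(boolean containerIsOpen) {
>     if (containerIsOpen && !force) {
>       throw new IllegalStateException(
>           "Container " + containerId + " is open; pass force to delete.");
>     }
>     System.out.println("Deleting container " + containerId);
>   }
>
>   public static void main(String[] args) {
>     new DeleteContainerCommandSketch(42L, true).execute(true);
>   }
> }
> {code}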

[jira] [Created] (HDDS-1035) Intermittent TestRootList failure

2019-01-30 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1035:
---

 Summary: Intermittent TestRootList failure
 Key: HDDS-1035
 URL: https://issues.apache.org/jira/browse/HDDS-1035
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{{TestRootList}} fails intermittently in pre-commit check:

{code:title=https://builds.apache.org/job/PreCommit-HDDS-Build/2145/artifact/out/patch-unit-hadoop-ozone.txt}
[INFO] Running org.apache.hadoop.ozone.s3.endpoint.TestRootList
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.179 s 
<<< FAILURE! - in org.apache.hadoop.ozone.s3.endpoint.TestRootList
[ERROR] testListBucket(org.apache.hadoop.ozone.s3.endpoint.TestRootList)  Time 
elapsed: 0.106 s  <<< ERROR!
java.io.IOException: BUCKET_ALREADY_EXISTS
at 
org.apache.hadoop.ozone.client.ObjectStoreStub.createS3Bucket(ObjectStoreStub.java:121)
at 
org.apache.hadoop.ozone.s3.endpoint.TestRootList.testListBucket(TestRootList.java:67)
{code}

Other examples: 
[1|https://issues.apache.org/jira/browse/HDDS-573?focusedCommentId=16659316=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16659316]
 
[2|https://issues.apache.org/jira/browse/HDDS-955?focusedCommentId=16733532=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16733532]
 
[3|https://issues.apache.org/jira/browse/HDDS-1024?focusedCommentId=16754446=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16754446]
...
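
(One plausible remedy, sketched below with hypothetical names – give each test 
run a unique bucket name so a re-run against the same stub store cannot hit 
BUCKET_ALREADY_EXISTS:)
{code:java}
// Hypothetical sketch: derive a unique, S3-safe bucket name per execution.
import java.util.UUID;

public class UniqueBucketNameSketch {
  static String uniqueBucketName(String prefix) {
    // UUIDs are lowercase hex plus dashes, so the result stays a legal
    // S3 bucket name as long as the prefix is.
    return prefix + "-" + UUID.randomUUID();
  }

  public static void main(String[] args) {
    System.out.println(uniqueBucketName("testlistbucket"));
  }
}
{code}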



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1035) Intermittent TestRootList failure

2019-01-30 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1035:

Description: 
{{TestRootList}} fails intermittently in pre-commit check:

[https://builds.apache.org/job/PreCommit-HDDS-Build/2145/artifact/out/patch-unit-hadoop-ozone.txt]
{code:java}
[INFO] Running org.apache.hadoop.ozone.s3.endpoint.TestRootList
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.179 s 
<<< FAILURE! - in org.apache.hadoop.ozone.s3.endpoint.TestRootList
[ERROR] testListBucket(org.apache.hadoop.ozone.s3.endpoint.TestRootList)  Time 
elapsed: 0.106 s  <<< ERROR!
java.io.IOException: BUCKET_ALREADY_EXISTS
at 
org.apache.hadoop.ozone.client.ObjectStoreStub.createS3Bucket(ObjectStoreStub.java:121)
at 
org.apache.hadoop.ozone.s3.endpoint.TestRootList.testListBucket(TestRootList.java:67)
{code}
Other examples: 1 2 3

  was:
{{TestRootList}} fails intermittently in pre-commit check:

{code:title=https://builds.apache.org/job/PreCommit-HDDS-Build/2145/artifact/out/patch-unit-hadoop-ozone.txt}
[INFO] Running org.apache.hadoop.ozone.s3.endpoint.TestRootList
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.179 s 
<<< FAILURE! - in org.apache.hadoop.ozone.s3.endpoint.TestRootList
[ERROR] testListBucket(org.apache.hadoop.ozone.s3.endpoint.TestRootList)  Time 
elapsed: 0.106 s  <<< ERROR!
java.io.IOException: BUCKET_ALREADY_EXISTS
at 
org.apache.hadoop.ozone.client.ObjectStoreStub.createS3Bucket(ObjectStoreStub.java:121)
at 
org.apache.hadoop.ozone.s3.endpoint.TestRootList.testListBucket(TestRootList.java:67)
{code}

Other examples: 
[1|https://issues.apache.org/jira/browse/HDDS-573?focusedCommentId=16659316=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16659316]
 
[2|https://issues.apache.org/jira/browse/HDDS-955?focusedCommentId=16733532=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16733532]
 
[3|https://issues.apache.org/jira/browse/HDDS-1024?focusedCommentId=16754446=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16754446]
...


> Intermittent TestRootList failure
> -
>
> Key: HDDS-1035
> URL: https://issues.apache.org/jira/browse/HDDS-1035
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
> Attachments: HDDS-1035.001.patch
>
>
> {{TestRootList}} fails intermittently in pre-commit check:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/2145/artifact/out/patch-unit-hadoop-ozone.txt]
> {code:java}
> [INFO] Running org.apache.hadoop.ozone.s3.endpoint.TestRootList
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.179 
> s <<< FAILURE! - in org.apache.hadoop.ozone.s3.endpoint.TestRootList
> [ERROR] testListBucket(org.apache.hadoop.ozone.s3.endpoint.TestRootList)  
> Time elapsed: 0.106 s  <<< ERROR!
> java.io.IOException: BUCKET_ALREADY_EXISTS
>   at 
> org.apache.hadoop.ozone.client.ObjectStoreStub.createS3Bucket(ObjectStoreStub.java:121)
>   at 
> org.apache.hadoop.ozone.s3.endpoint.TestRootList.testListBucket(TestRootList.java:67)
> {code}
> Other examples: 1 2 3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756449#comment-16756449
 ] 

Bharat Viswanadham commented on HDDS-1031:
--

Thank you [~arpitagarwal] for the review.

I will commit this shortly.

> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1031.00.patch
>
>
> This is related to RATIS-460.
> When a datanode is restarted after Ratis has taken a snapshot, we see the 
> stack trace below and the DN won't boot up. For more info, refer to RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.<init>(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.<init>(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.join(HddsDatanodeService.java:270)
>         at 
> 

[jira] [Commented] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-30 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756450#comment-16756450
 ] 

Fengnan Li commented on HDFS-14118:
---

[~elgoiri] Not at all, please go ahead.

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: DNS testing log, HDFS-14118.001.patch, 
> HDFS-14118.002.patch, HDFS-14118.003.patch, HDFS-14118.004.patch, 
> HDFS-14118.005.patch, HDFS-14118.006.patch, HDFS-14118.007.patch, 
> HDFS-14118.008.patch, HDFS-14118.009.patch, HDFS-14118.010.patch, 
> HDFS-14118.patch
>
>
> Clients need to know about the routers to talk to the HDFS cluster 
> (obviously), and any router update (adding/removing) would require a change 
> on every client, which is a painful process.
> DNS can be used here to resolve the single domain name clients know into the 
> list of routers in the current config. However, DNS cannot restrict 
> resolution to only the healthy routers based on certain health thresholds.
> There are a few ways this can be solved. One is to have a separate script 
> regularly check the status of each router and update the DNS records when a 
> router fails the health thresholds; security would have to be carefully 
> considered for this approach. Another is to have the client do the normal 
> connecting/failover after it gets the list of routers, which requires 
> changing the current failover proxy provider.
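>
> (For illustration, a minimal sketch of the client-side idea – resolve one 
> router domain name to all of its A records and let the client fail over 
> across them; only java.net.InetAddress is assumed, and the domain name is 
> made up:)
> {code:java}
> // Minimal sketch: expand a single router domain name into all resolved
> // addresses; a failover proxy provider could then iterate over these.
> import java.net.InetAddress;
> import java.net.UnknownHostException;
>
> public class RouterDnsSketch {
>   public static void main(String[] args) throws UnknownHostException {
>     String routerDomain = args.length > 0 ? args[0] : "routers.example.com";
>     for (InetAddress router : InetAddress.getAllByName(routerDomain)) {
>       // A real provider would try these in health or configured order.
>       System.out.println("Candidate router: " + router.getHostAddress());
>     }
>   }
> }
> {code}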



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756472#comment-16756472
 ] 

Bharat Viswanadham commented on HDDS-1031:
--

Opened HDDS-1034 to fix this randomly failing test.

> Update ratis version to fix a DN restart Bug
> 
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1031.00.patch, Screen Shot 2019-01-30 at 11.22.41 
> AM.png
>
>
> This is related to RATIS-460.
> When a datanode is restarted after Ratis has taken a snapshot, we see the 
> stack trace below and the DN won't boot up. For more info, refer to RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.<init>(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.<init>(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.join(HddsDatanodeService.java:270)
>         at 
> 

[jira] [Commented] (HDDS-549) Add support for key rename in Ozone Shell

2019-01-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756516#comment-16756516
 ] 

Hadoop QA commented on HDDS-549:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m  
5s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  
4s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956940/HDDS-549.002.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux fa8c9e980881 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 0e95ae4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2144/artifact/out/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2144/testReport/ |
| Max. process+thread count | 1100 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/docs hadoop-ozone/dist hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2144/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support for key rename in Ozone Shell
> -
>
> Key: HDDS-549
> URL: https://issues.apache.org/jira/browse/HDDS-549
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Namit Maheshwari
>Assignee: Doroszlai, Attila
>Priority: Major
> Attachments: HDDS-549.001.patch, HDDS-549.002.patch, 
> HDDS-549.003.patch
>
>
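>
> (Illustrative only – the description is empty, so here is a hedged sketch of 
> the operation the title implies: renaming a key within a bucket. The 
> BucketHandle interface is made up for this example, not Ozone's client API:)
> {code:java}
> // Hypothetical sketch of a key rename: same bucket, new key name.
> public class KeyRenameSketch {
>   interface BucketHandle {
>     void renameKey(String fromKeyName, String toKeyName);
>   }
>
>   static void rename(BucketHandle bucket, String from, String to) {
>     // The rename is addressed within one volume/bucket pair.
>     bucket.renameKey(from, to);
>   }
>
>   public static void main(String[] args) {
>     rename((from, to) -> System.out.println("rename " + from + " -> " + to),
>         "oldKey", "newKey");
>   }
> }
> {code}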




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HDDS-1035) Intermittent TestRootList failure

2019-01-30 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1035:

Status: Patch Available  (was: Open)

> Intermittent TestRootList failure
> -
>
> Key: HDDS-1035
> URL: https://issues.apache.org/jira/browse/HDDS-1035
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
> Attachments: HDDS-1035.001.patch
>
>
> {{TestRootList}} fails intermittently in pre-commit check:
> {code:title=https://builds.apache.org/job/PreCommit-HDDS-Build/2145/artifact/out/patch-unit-hadoop-ozone.txt}
> [INFO] Running org.apache.hadoop.ozone.s3.endpoint.TestRootList
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.179 
> s <<< FAILURE! - in org.apache.hadoop.ozone.s3.endpoint.TestRootList
> [ERROR] testListBucket(org.apache.hadoop.ozone.s3.endpoint.TestRootList)  
> Time elapsed: 0.106 s  <<< ERROR!
> java.io.IOException: BUCKET_ALREADY_EXISTS
>   at 
> org.apache.hadoop.ozone.client.ObjectStoreStub.createS3Bucket(ObjectStoreStub.java:121)
>   at 
> org.apache.hadoop.ozone.s3.endpoint.TestRootList.testListBucket(TestRootList.java:67)
> {code}
> Other examples: 
> [1|https://issues.apache.org/jira/browse/HDDS-573?focusedCommentId=16659316=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16659316]
>  
> [2|https://issues.apache.org/jira/browse/HDDS-955?focusedCommentId=16733532=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16733532]
>  
> [3|https://issues.apache.org/jira/browse/HDDS-1024?focusedCommentId=16754446=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16754446]
> ...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1035) Intermittent TestRootList failure

2019-01-30 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1035:

Description: 
{{TestRootList}} fails intermittently in pre-commit check:

[https://builds.apache.org/job/PreCommit-HDDS-Build/2145/artifact/out/patch-unit-hadoop-ozone.txt]
{code:java}
[INFO] Running org.apache.hadoop.ozone.s3.endpoint.TestRootList
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.179 s 
<<< FAILURE! - in org.apache.hadoop.ozone.s3.endpoint.TestRootList
[ERROR] testListBucket(org.apache.hadoop.ozone.s3.endpoint.TestRootList)  Time 
elapsed: 0.106 s  <<< ERROR!
java.io.IOException: BUCKET_ALREADY_EXISTS
at 
org.apache.hadoop.ozone.client.ObjectStoreStub.createS3Bucket(ObjectStoreStub.java:121)
at 
org.apache.hadoop.ozone.s3.endpoint.TestRootList.testListBucket(TestRootList.java:67)
{code}
Other examples: 
[1|https://issues.apache.org/jira/browse/HDDS-573?focusedCommentId=16659316=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16659316]
 
[2|https://issues.apache.org/jira/browse/HDDS-955?focusedCommentId=16733532=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16733532]
 
[3|https://issues.apache.org/jira/browse/HDDS-1024?focusedCommentId=16754446=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16754446]
 

  was:
{{TestRootList}} fails intermittently in pre-commit check:

[https://builds.apache.org/job/PreCommit-HDDS-Build/2145/artifact/out/patch-unit-hadoop-ozone.txt]
{code:java}
[INFO] Running org.apache.hadoop.ozone.s3.endpoint.TestRootList
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.179 s 
<<< FAILURE! - in org.apache.hadoop.ozone.s3.endpoint.TestRootList
[ERROR] testListBucket(org.apache.hadoop.ozone.s3.endpoint.TestRootList)  Time 
elapsed: 0.106 s  <<< ERROR!
java.io.IOException: BUCKET_ALREADY_EXISTS
at 
org.apache.hadoop.ozone.client.ObjectStoreStub.createS3Bucket(ObjectStoreStub.java:121)
at 
org.apache.hadoop.ozone.s3.endpoint.TestRootList.testListBucket(TestRootList.java:67)
{code}
Other examples: 1 2 3


> Intermittent TestRootList failure
> ---------------------------------
>
> Key: HDDS-1035
> URL: https://issues.apache.org/jira/browse/HDDS-1035
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
> Attachments: HDDS-1035.001.patch
>
>
> {{TestRootList}} fails intermittently in pre-commit check:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/2145/artifact/out/patch-unit-hadoop-ozone.txt]
> {code:java}
> [INFO] Running org.apache.hadoop.ozone.s3.endpoint.TestRootList
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.179 
> s <<< FAILURE! - in org.apache.hadoop.ozone.s3.endpoint.TestRootList
> [ERROR] testListBucket(org.apache.hadoop.ozone.s3.endpoint.TestRootList)  
> Time elapsed: 0.106 s  <<< ERROR!
> java.io.IOException: BUCKET_ALREADY_EXISTS
>   at 
> org.apache.hadoop.ozone.client.ObjectStoreStub.createS3Bucket(ObjectStoreStub.java:121)
>   at 
> org.apache.hadoop.ozone.s3.endpoint.TestRootList.testListBucket(TestRootList.java:67)
> {code}
> Other examples: 
> [1|https://issues.apache.org/jira/browse/HDDS-573?focusedCommentId=16659316=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16659316]
>  
> [2|https://issues.apache.org/jira/browse/HDDS-955?focusedCommentId=16733532=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16733532]
>  
> [3|https://issues.apache.org/jira/browse/HDDS-1024?focusedCommentId=16754446=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16754446]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1031:
-
Attachment: Screen Shot 2019-01-30 at 11.22.41 AM.png

> Update ratis version to fix a DN restart Bug
> --------------------------------------------
>
> Key: HDDS-1031
> URL: https://issues.apache.org/jira/browse/HDDS-1031
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1031.00.patch, Screen Shot 2019-01-30 at 11.22.41 
> AM.png
>
>
> This is related to RATIS-460.
> When a datanode is restarted after Ratis has taken a snapshot, the stack 
> trace below appears and the DN won't boot up. For more information, refer 
> to RATIS-460.
>  
> {code:java}
> java.io.IOException: java.lang.IllegalStateException: lastEntry = 
> 72856=72856: [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
>         at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
>         at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
>         at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
> [77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
> 7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
> add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
> lastEntry.index >= logIndex = 0
>         at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
>         at 
> org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
>         at 
> org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
>         at 
> org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
>         at 
> org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
>         at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
>         at 
> org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
>         at 
> org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
>         at 
> org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
>         at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
>         at 
> org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
>         at 
> org.apache.ratis.server.impl.ServerState.<init>(ServerState.java:114)
>         at 
> org.apache.ratis.server.impl.RaftServerImpl.<init>(RaftServerImpl.java:103)
>         at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
>         at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>         at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>         at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>         at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> 2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.join(HddsDatanodeService.java:270)
>         at 
> org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:127)
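
For context, a self-contained Java sketch of the invariant that fails above
(names hypothetical; this is not the Ratis source): configuration entries must
be added in increasing log-index order, so a restart that replays the log from
index 0 while the state already holds the entry restored at index 72856 trips
the precondition and aborts startup.

{code:java}
// Minimal sketch of the failing invariant (not Ratis source code).
import java.util.TreeMap;

public class ConfOrderSketch {

  // Maps log index -> configuration (peer list), ordered by index.
  private final TreeMap<Long, String> confEntries = new TreeMap<>();

  void addConfiguration(long logIndex, String peers) {
    if (!confEntries.isEmpty() && confEntries.lastKey() >= logIndex) {
      // Mirrors the "lastEntry.index >= logIndex" IllegalStateException above.
      throw new IllegalStateException("lastEntry = " + confEntries.lastEntry()
          + ", lastEntry.index >= logIndex = " + logIndex);
    }
    confEntries.put(logIndex, peers);
  }

  public static void main(String[] args) {
    ConfOrderSketch state = new ConfOrderSketch();
    state.addConfiguration(72856, "[dn1, dn2, dn3]"); // restored from snapshot
    state.addConfiguration(0, "[dn1, dn2, dn3]");     // replayed from log start -> throws
  }
}
{code}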
