[jira] [Created] (HDFS-13677) Dynamic refresh Disk configuration results in overwriting VolumeMap

2018-06-13 Thread xuzq (JIRA)
xuzq created HDFS-13677:
---

 Summary: Dynamic refresh Disk configuration results in overwriting 
VolumeMap
 Key: HDFS-13677
 URL: https://issues.apache.org/jira/browse/HDFS-13677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: xuzq
 Attachments: 0001-fix-the-bug-of-the-refresh-disk-configuration.patch, 
image-2018-06-14-13-05-54-354.png, image-2018-06-14-13-10-24-032.png

When I added a new disk by dynamically refreshing the configuration, it caused 
a "FileNotFound while finding block" exception.

 

The steps are as follows:

1. Change the hdfs-site.xml of the DataNode to add a new disk.

2. Refresh the configuration with "./bin/hdfs dfsadmin -reconfig datanode 
:50020 start".

 

The error looks like this:

```

VolumeScannerThread(/media/disk5/hdfs/dn): FileNotFound while finding block 
BP-233501496-*.*.*.*-1514185698256:blk_1620868560_547245090 on volume 
/media/disk5/hdfs/dn

org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not 
found for BP-1997955181-*.*.*.*-1514186468560:blk_1090885868_17145082
 at org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:471)
 at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:240)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:553)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:148)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:254)
 at java.lang.Thread.run(Thread.java:748)

```

I added some logging to confirm this.

The logging code:

!image-2018-06-14-13-05-54-354.png!

And the result:

!image-2018-06-14-13-10-24-032.png!  

The size of the 'VolumeMap' was reduced, and we found that the existing 
'VolumeMap' entries were overwritten with the blocks of the new disk by the 
method 'ReplicaMap.addAll(ReplicaMap other)'. A minimal sketch of this 
overwrite follows.
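For illustration, a minimal self-contained sketch of the suspected failure 
mode. ReplicaMapSketch is a simplified stand-in for the real ReplicaMap (which 
keeps per-block-pool replica sets under a lock); the point is only that a 
putAll-style addAll() replaces the whole per-block-pool set, dropping the 
replicas of the volumes that were already registered, while an entry-level 
merge keeps them:

```
// Simplified stand-in for ReplicaMap; not the actual HDFS class.
import java.util.HashMap;
import java.util.Map;

public class ReplicaMapSketch {
  // block pool ID -> (block ID -> replica placeholder)
  private final Map<String, Map<Long, String>> map = new HashMap<>();

  void add(String bpid, long blockId, String replica) {
    map.computeIfAbsent(bpid, k -> new HashMap<>()).put(blockId, replica);
  }

  int size(String bpid) {
    Map<Long, String> set = map.get(bpid);
    return set == null ? 0 : set.size();
  }

  // Overwriting merge, mirroring the reported behavior: putAll replaces
  // the whole per-block-pool set with the one scanned from the new disk.
  void addAll(ReplicaMapSketch other) {
    map.putAll(other.map);
  }

  // Additive merge: keeps existing replicas and adds the new volume's.
  void mergeAll(ReplicaMapSketch other) {
    for (Map.Entry<String, Map<Long, String>> e : other.map.entrySet()) {
      map.computeIfAbsent(e.getKey(), k -> new HashMap<>())
         .putAll(e.getValue());
    }
  }

  public static void main(String[] args) {
    ReplicaMapSketch volumeMap = new ReplicaMapSketch();
    volumeMap.add("BP-1", 100L, "replica-on-disk1");
    volumeMap.add("BP-1", 101L, "replica-on-disk2");

    ReplicaMapSketch newDisk = new ReplicaMapSketch();
    newDisk.add("BP-1", 200L, "replica-on-disk5");

    volumeMap.addAll(newDisk);
    System.out.println(volumeMap.size("BP-1")); // prints 1: old replicas lost
  }
}
```

After addAll() the map only holds the new disk's replica, which matches the 
shrinking volumeMap and the subsequent ReplicaNotFoundException; mergeAll() 
would keep all three replicas.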

 






[jira] [Commented] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511975#comment-16511975
 ] 

genericqa commented on HDFS-13675:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
54s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
10s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReencryption |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13675 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927750/HDFS-13675.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 85de7853e6c9 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7547740 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24435/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24435/testReport/ |
| Max. process+thread count | 4045 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24435/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511972#comment-16511972
 ] 

genericqa commented on HDFS-13673:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
35s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
22s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13673 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927752/HDFS-13673.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2b1f6585cade 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7547740 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24436/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24436/testReport/ |
| Max. process+thread count | 3287 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console 

[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511973#comment-16511973
 ] 

genericqa commented on HDFS-13676:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
34s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
23s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestEditLogRace |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13676 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927754/HDFS-13676.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 562c14374a6b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7547740 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24437/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24437/testReport/ |
| Max. process+thread count | 3169 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13674) Improve documentation on Metrics

2018-06-13 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511966#comment-16511966
 ] 

Chao Sun commented on HDFS-13674:
-

Thanks [~linyiqun]! I'd appreciate your review. :)

> Improve documentation on Metrics
> 
>
> Key: HDFS-13674
> URL: https://issues.apache.org/jira/browse/HDFS-13674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
>
> There are a few confusing places in the [Hadoop Metrics 
> page|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html].
>  For instance, there are duplicated entries such as {{FsImageLoadTime}}; some 
> quantile metrics do not have corresponding entries; and the descriptions of 
> some quantile metrics are not very specific about what the {{num}} variable 
> in the metric name means.
> This JIRA aims to improve this.






[jira] [Commented] (HDDS-163) Add Datanode heartbeat dispatcher in SCM

2018-06-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511957#comment-16511957
 ] 

Hudson commented on HDDS-163:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14426 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14426/])
HDDS-163. Add Datanode heartbeat dispatcher in SCM. Contributed by (aengineer: 
rev ddd09d59f3d9825f068026622720914e04c2e1d6)
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/report/SCMDatanodeReportHandler.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/report/SCMDatanodeContainerReportHandler.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/report/package-info.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/report/SCMDatanodeHeartbeatDispatcher.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/report/package-info.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/report/TestSCMDatanodeReportHandlerFactory.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/report/TestSCMDatanodeNodeReportHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMMetrics.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/report/SCMDatanodeReportHandlerFactory.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportManager.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/report/SCMDatanodeNodeReportHandler.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/report/TestSCMDatanodeContainerReportHandler.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/report/TestSCMDatanodeHeartbeatDispatcher.java


> Add Datanode heartbeat dispatcher in SCM
> 
>
> Key: HDDS-163
> URL: https://issues.apache.org/jira/browse/HDDS-163
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-163.000.patch
>
>
> The Datanode heartbeat to SCM also carries multiple reports that are to be 
> processed by SCM. We need a dispatcher in SCM that hands the reports over to 
> the appropriate report handlers.
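A hedged sketch of the dispatcher idea described above (names are illustrative 
assumptions, not the committed SCMDatanodeHeartbeatDispatcher API): each report 
carried by a heartbeat is handed over to the handler registered for its type.

```
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HeartbeatDispatcherSketch {
  interface Report { }                    // e.g. node report, container report
  interface ReportHandler<T extends Report> { void onReport(T report); }

  private final Map<Class<? extends Report>, ReportHandler<? extends Report>>
      handlers = new HashMap<>();

  <T extends Report> void register(Class<T> type, ReportHandler<T> handler) {
    handlers.put(type, handler);
  }

  @SuppressWarnings("unchecked")
  void dispatch(List<Report> heartbeatReports) {
    for (Report report : heartbeatReports) {
      ReportHandler<Report> handler =
          (ReportHandler<Report>) handlers.get(report.getClass());
      if (handler != null) {
        handler.onReport(report);         // hand over to the matching handler
      }
    }
  }
}
```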






[jira] [Updated] (HDDS-163) Add Datanode heartbeat dispatcher in SCM

2018-06-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-163:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~nandakumar131] Thanks for the contribution. I have committed this to trunk.

> Add Datanode heartbeat dispatcher in SCM
> 
>
> Key: HDDS-163
> URL: https://issues.apache.org/jira/browse/HDDS-163
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-163.000.patch
>
>
> The Datanode heartbeat to SCM also carries multiple reports that are to be 
> processed by SCM. We need a dispatcher in SCM that hands the reports over to 
> the appropriate report handlers.






[jira] [Commented] (HDDS-163) Add Datanode heartbeat dispatcher in SCM

2018-06-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511943#comment-16511943
 ] 

Anu Engineer commented on HDDS-163:
---

+1, the code looks beautiful. Thank you for getting this done.

 

> Add Datanode heartbeat dispatcher in SCM
> 
>
> Key: HDDS-163
> URL: https://issues.apache.org/jira/browse/HDDS-163
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-163.000.patch
>
>
> The Datanode heartbeat to SCM also carries multiple reports that are to be 
> processed by SCM. We need a dispatcher in SCM that hands the reports over to 
> the appropriate report handlers.






[jira] [Commented] (HDDS-161) Add functionality to queue ContainerClose command from SCM Heartbeat Response to Ratis

2018-06-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511907#comment-16511907
 ] 

Hudson commented on HDDS-161:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14425 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14425/])
HDDS-161. Add functionality to queue ContainerClose command from SCM 
(aengineer: rev 7547740e5c65edaa6c6f8aa1c8debabbdfb0945e)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServer.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
* (edit) 
hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerSpi.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandDispatcher.java


> Add functionality to queue ContainerClose command from SCM Heartbeat Response 
> to Ratis
> --
>
> Key: HDDS-161
> URL: https://issues.apache.org/jira/browse/HDDS-161
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-161.00.patch, HDDS-161.01.patch, HDDS-161.02.patch
>
>
> When a container needs to be closed at the Datanode, SCM will queue a close 
> command, encoded as part of the Heartbeat Response to the Datanode. This 
> command will be picked up from the response at the Datanode and then 
> submitted to the XceiverServer to process the close. This just queues a 
> ContainerCloseCommand to Ratis, where the leader starts the transaction 
> while the followers reject the closeContainer request.
> While handling the close container inside the Datanode, we need to ensure 
> all the ongoing chunkWrites finish before the close can proceed. It should 
> also reject any incoming I/Os in between. This will be handled as part of a 
> separate jira.
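As a hedged illustration of the queueing behaviour described above (class and 
method names are assumptions, not the committed XceiverServerRatis API): the 
close command taken from the heartbeat response is only queued to Ratis on the 
leader, while followers reject it.

```
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CloseContainerQueueSketch {
  private final BlockingQueue<Long> closeQueue = new LinkedBlockingQueue<>();
  private volatile boolean isLeader;  // updated from Ratis role notifications

  /** Called when a CloseContainerCommand arrives in a heartbeat response. */
  public void submitCloseContainer(long containerId) {
    if (!isLeader) {
      // Followers reject; the leader replicates the close transaction.
      throw new IllegalStateException(
          "Not the Ratis leader; rejecting close for container " + containerId);
    }
    closeQueue.add(containerId);      // queued for the Ratis state machine
  }
}
```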






[jira] [Commented] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511914#comment-16511914
 ] 

genericqa commented on HDFS-13563:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
21s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
12s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13563 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927745/HDFS-13563.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6ae37bfab7ca 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2299488 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24434/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24434/testReport/ |
| Max. process+thread count | 2854 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24434/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Commented] (HDFS-13674) Improve documentation on Metrics

2018-06-13 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511904#comment-16511904
 ] 

Yiqun Lin commented on HDFS-13674:
--

Thanks [~csun] for catching this. I'd be glad to take the review once you 
attach the patch. :)

> Improve documentation on Metrics
> 
>
> Key: HDFS-13674
> URL: https://issues.apache.org/jira/browse/HDFS-13674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
>
> There are a few confusing places in the [Hadoop Metrics 
> page|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html].
>  For instance, there are duplicated entries such as {{FsImageLoadTime}}; some 
> quantile metrics do not have corresponding entries; and the descriptions of 
> some quantile metrics are not very specific about what the {{num}} variable 
> in the metric name means.
> This JIRA aims to improve this.






[jira] [Commented] (HDDS-156) Implement HDDSVolume to manage volume state

2018-06-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511903#comment-16511903
 ] 

Anu Engineer commented on HDDS-156:
---

+1. Please go ahead and commit this when you get a chance.

One minor comment: it would be good to file a JIRA for this future feature.
{noformat}
private final String storageId;
private final String clusterId;
private final String datanodeUuid;
{noformat}
Storage ID - identifies the storage.
cluster ID - identifies the cluster.
datanode UUID - identifies the data node.

We might want to add one more identifier:
 # ScmGroupID - identifies the set of SCMs that this datanode talks to, or 
takes commands from; if you want, we can call this the SCM ID.

This value is not the same as the cluster ID, since a cluster can technically 
have more than one SCM group.
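For reference, a hedged sketch of the identifier set under discussion; 
scmGroupId is the proposed extra field, not something present in the HDDS-156 
patch.

```
// Sketch only; the field set follows the comment above, not the actual patch.
public final class VolumeIdentifiersSketch {
  private final String storageId;    // identifies the storage (this volume)
  private final String clusterId;    // identifies the cluster
  private final String datanodeUuid; // identifies the datanode
  private final String scmGroupId;   // proposed: the SCM group this DN obeys

  public VolumeIdentifiersSketch(String storageId, String clusterId,
                                 String datanodeUuid, String scmGroupId) {
    this.storageId = storageId;
    this.clusterId = clusterId;
    this.datanodeUuid = datanodeUuid;
    this.scmGroupId = scmGroupId;
  }
}
```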

> Implement HDDSVolume to manage volume state
> ---
>
> Key: HDDS-156
> URL: https://issues.apache.org/jira/browse/HDDS-156
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-156-HDDS-48.001.patch, HDDS-156-HDDS-48.002.patch, 
> HDDS-156-HDDS-48.003.patch, HDDS-156-HDDS-48.004.patch, 
> HDDS-156-HDDS-48.005.patch
>
>
> This Jira proposes the following:
>  # Implement HDDSVolume to encompass VolumeInfo along with other 
> HDDS-specific fields.
>  ** VolumeInfo contains disk-specific information such as capacity, usage, 
> and storageType. HddsVolume has HDDS-specific fields for the volume, such as 
> VolumeState and VolumeStats (to be added later).
>  # Write a volume-level Version file containing 
>  ** clusterID, storageID, datanodeUUID, creationTime and layoutVersion.
>  # Read the Version file while instantiating HDDSVolumes.
>  ** When the volume Version file already exists (for example, when a DN is 
> restarted), the version file is read for the stored clusterID, 
> datanodeUuid, layoutVersion, etc. Some checks will be performed to verify the 
> sanity of the volume.
>  ** When a fresh Datanode is started, the Version file is not written to the 
> volume until the clusterID is received from the SCM.
>  
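Below is a hedged sketch of the Version-file handling the description outlines, 
using java.util.Properties; the property names follow the description, and the 
real HDDS-156 code may store and validate these fields differently.

```
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Properties;

public class VersionFileSketch {
  // Written only once the clusterID has been received from the SCM.
  static void write(File versionFile, String clusterId, String storageId,
                    String datanodeUuid, int layoutVersion) throws IOException {
    Properties props = new Properties();
    props.setProperty("clusterID", clusterId);
    props.setProperty("storageID", storageId);
    props.setProperty("datanodeUuid", datanodeUuid);
    props.setProperty("cTime", String.valueOf(System.currentTimeMillis()));
    props.setProperty("layoutVersion", String.valueOf(layoutVersion));
    try (FileWriter out = new FileWriter(versionFile)) {
      props.store(out, "HDDS volume version file (sketch)");
    }
  }

  // Read on restart; sanity checks against the stored IDs would go here.
  static Properties read(File versionFile) throws IOException {
    Properties props = new Properties();
    try (FileReader in = new FileReader(versionFile)) {
      props.load(in);
    }
    return props;
  }
}
```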






[jira] [Updated] (HDDS-161) Add functionality to queue ContainerClose command from SCM Heartbeat Response to Ratis

2018-06-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-161:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1, I have committed this to trunk. I have verified that we are able to build 
with the shading option turned on. I know that the shaded client check failed 
in Jenkins, but I was not able to repro this locally.

[~shashikant] Thank you for the contribution.

> Add functionality to queue ContainerClose command from SCM Heartbeat Response 
> to Ratis
> --
>
> Key: HDDS-161
> URL: https://issues.apache.org/jira/browse/HDDS-161
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-161.00.patch, HDDS-161.01.patch, HDDS-161.02.patch
>
>
> When a container needs to be closed at the Datanode, SCM will queue a close 
> command, encoded as part of the Heartbeat Response to the Datanode. This 
> command will be picked up from the response at the Datanode and then 
> submitted to the XceiverServer to process the close. This just queues a 
> ContainerCloseCommand to Ratis, where the leader starts the transaction 
> while the followers reject the closeContainer request.
> While handling the close container inside the Datanode, we need to ensure 
> all the ongoing chunkWrites finish before the close can proceed. It should 
> also reject any incoming I/Os in between. This will be handled as part of a 
> separate jira.






[jira] [Commented] (HDFS-13608) [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC

2018-06-13 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511893#comment-16511893
 ] 

Konstantin Shvachko commented on HDFS-13608:


+1 for 004 patch.

> [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC
> -
>
> Key: HDFS-13608
> URL: https://issues.apache.org/jira/browse/HDFS-13608
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13608-HDFS-12943.000.patch, 
> HDFS-13608-HDFS-12943.001.patch, HDFS-13608-HDFS-12943.002.patch, 
> HDFS-13608-HDFS-12943.003.patch, HDFS-13608-HDFS-12943.004.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is to make the JournalNode-side changes necessary to support 
> serving edits via RPC. This includes interacting with the cache added in 
> HDFS-13607.






[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-13 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Description: 
This Jira is to add new interfaces, ChunkManager and KeyManager, to perform 
key- and chunk-related operations.
 # Changes to the existing KeyManager and ChunkManager are:

 ## Removal of usage of ContainerManager.
 ## Passing container to method calls.
 ## Using layOutversion during reading/deleting chunk files.

Add a new Class KeyValueManager to implement ContainerManager.

 

  was:
This Jira is to add a new interface, ContainerManager, to perform key-related 
operations.

Add a new Class KeyValueManager to implement ContainerManager.

 


> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-160-HDDS-48.00.patch
>
>
> This Jira is to add new interfaces, ChunkManager and KeyManager, to perform 
> key- and chunk-related operations.
>  # Changes to the existing KeyManager and ChunkManager are:
>  ## Removal of usage of ContainerManager.
>  ## Passing container to method calls.
>  ## Using layOutversion during reading/deleting chunk files.
> Add a new Class KeyValueManager to implement ContainerManager.
>  
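A hedged sketch of what the refactor describes (signatures and the placeholder 
types are illustrative assumptions, not the actual HDDS-160 patch): the 
container becomes an explicit argument to every call, removing the dependency 
on ContainerManager lookups.

```
import java.io.IOException;

// Placeholder types standing in for the real HDDS classes.
class Container { int layoutVersion; }
class KeyData { }
class ChunkInfo { }

interface KeyManagerSketch {
  void putKey(Container container, KeyData data) throws IOException;
  KeyData getKey(Container container, String keyName) throws IOException;
}

interface ChunkManagerSketch {
  void writeChunk(Container container, String keyName, ChunkInfo chunk,
                  byte[] data) throws IOException;
  // The layout version is read from the container when locating chunk files.
  void deleteChunk(Container container, String keyName, ChunkInfo chunk)
      throws IOException;
}
```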






[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-13 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Summary: Refactor KeyManager, ChunkManager  (was: Refactor KeyManager and 
KeyManagerImpl)

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-160-HDDS-48.00.patch
>
>
> This Jira is to add a new interface, ContainerManager, to perform 
> key-related operations.
> Add a new Class KeyValueManager to implement ContainerManager.
>  






[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-13 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Description: 
This Jira is to add new interfaces, ChunkManager and KeyManager, to perform 
key- and chunk-related operations.
 # Changes to the existing KeyManager and ChunkManager are:
 ## Removal of usage of ContainerManager.
 ## Passing container to method calls.
 ## Using layOutversion during reading/deleting chunk files.

Add a new Class KeyValueManager to implement ContainerManager.

 

  was:
This Jira is to add new interfaces, ChunkManager and KeyManager, to perform 
key- and chunk-related operations.
 # Changes to the existing KeyManager and ChunkManager are:

 ## Removal of usage of ContainerManager.
 ## Passing container to method calls.
 ## Using layOutversion during reading/deleting chunk files.

Add a new Class KeyValueManager to implement ContainerManager.

 


> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-160-HDDS-48.00.patch
>
>
> This Jira is to add new interfaces, ChunkManager and KeyManager, to perform 
> key- and chunk-related operations.
>  # Changes to the existing KeyManager and ChunkManager are:
>  ## Removal of usage of ContainerManager.
>  ## Passing container to method calls.
>  ## Using layOutversion during reading/deleting chunk files.
> Add a new Class KeyValueManager to implement ContainerManager.
>  






[jira] [Commented] (HDFS-13608) [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511889#comment-16511889
 ] 

genericqa commented on HDFS-13608:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
14s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
26s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
51s{color} | {color:red} hadoop-hdfs in HDFS-12943 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 32m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 32m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 1 
unchanged - 2 fixed = 1 total (was 3) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
16s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13608 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927740/HDFS-13608-HDFS-12943.004.patch
 |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 2e96fa0046c4 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 

[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-13 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Attachment: (was: HDDS-160-HDDS-48.00.patch)

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to add new interfaces, ChunkManager and KeyManager, to perform 
> key- and chunk-related operations.
>  # Changes to the existing KeyManager and ChunkManager are:
>  ## Removal of usage of ContainerManager.
>  ## Passing container to method calls.
>  ## Using layOutversion during reading/deleting chunk files.
> Add a new Class KeyValueManager to implement ContainerManager.
>  






[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-13 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511874#comment-16511874
 ] 

Íñigo Goiri commented on HDFS-13676:


Thanks [~zuzhan] for the patch.
I think the fix is correct.
I don't quite get why those two lines were commented out in HDFS-7964.
[~daryn], do you know why those two lines were commented out?
It looks like they were removed in the patches after October 2015, but from the 
discussion I cannot tell why.


> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13676.000.patch, TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When _FSImage.saveFSImageInAllDirs_ is called, there are actually no 
> directories in existence. This is because the _getConf()_ function doesn't 
> specify any directories to be created.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should also fail on Linux, I guess.
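A hedged sketch of the kind of fix described (the exact lines restored by 
HDFS-13676.000.patch may differ): point the test configuration at concrete 
name/edits directories so FSImage.saveFSImageInAllDirs has somewhere to write.

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class EditLogRaceConfSketch {
  static Configuration getConf() {
    Configuration conf = new Configuration();
    // Without these two settings no storage directory exists, so
    // saveFSImageInAllDirs finds nothing to save into.
    conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
        MiniDFSCluster.getBaseDirectory() + "name");
    conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY,
        MiniDFSCluster.getBaseDirectory() + "edits");
    return conf;
  }
}
```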






[jira] [Commented] (HDDS-156) Implement HDDSVolume to manage volume state

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511869#comment-16511869
 ] 

genericqa commented on HDDS-156:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 1s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdds/common in HDDS-48 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdds: The patch generated 6 new + 6 
unchanged - 6 fixed = 12 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-156 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927741/HDDS-156-HDDS-48.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cd5eb836c7ac 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / 7e228e5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511864#comment-16511864
 ] 

Zuoming Zhang commented on HDFS-13673:
--

[~elgoiri] Uploaded

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, HDFS-13673.001.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt, 
> TestNameNodeMetrics-testVolumeFailures-Report.001.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from _volume_name_ to 
> _volume_name_._origin_ and create a new file named _volume_name_. The 
> volume folder contains two things: 1. a directory named "_current_", 2. a 
> file named "_in_use.lock_". Windows behaves differently from Linux when 
> renaming the parent folder of a locked file: Windows prevents the rename 
> while Linux allows it.
> Fix:
> To inject the data failure onto the volume, rename the unlocked folder 
> inside it instead of the volume folder itself. Since the folder inside the 
> volume is "_current_", we only need to inject the data failure into 
> _volume_name/current_.
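For illustration, a minimal sketch of the idea behind the fix, assuming 
_DataNodeTestUtils.injectDataDirFailure(File...)_ keeps its varargs signature 
(the actual change is in the attached patches):
{code:java}
// Sketch only: point the failure injection at the unlocked "current"
// subfolder instead of the volume root, so no locked file's parent
// directory needs to be renamed on Windows.
import java.io.File;
import org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils;

class VolumeFailureSketch {
  static void failVolume(File volume) throws Exception {
    // Renaming 'volume' itself fails on Windows because the DataNode
    // holds 'volume/in_use.lock' open. Renaming the child directory
    // works on both platforms and still makes the volume unusable.
    DataNodeTestUtils.injectDataDirFailure(new File(volume, "current"));
  }
}
{code}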



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511858#comment-16511858
 ] 

Íñigo Goiri commented on HDFS-13673:


bq. This error doesn't seem to be related to mine, any idea?

No, it's not, this is happening in all JIRAs today.

Can you do the var extract I mentioned before?

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from _volume_name_ to 
> _volume_name_._origin_ and create a new file named _volume_name_. The 
> volume folder contains two things: 1. a directory named "_current_", 2. a 
> file named "_in_use.lock_". Windows behaves differently from Linux when 
> renaming the parent folder of a locked file: Windows prevents the rename 
> while Linux allows it.
> Fix:
> To inject the data failure onto the volume, rename the unlocked folder 
> inside it instead of the volume folder itself. Since the folder inside the 
> volume is "_current_", we only need to inject the data failure into 
> _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13673:
-
Attachment: HDFS-13673.001.patch

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, HDFS-13673.001.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt, 
> TestNameNodeMetrics-testVolumeFailures-Report.001.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from _volume_name_ to 
> _volume_name_._origin_ and create a new file named _volume_name_. The 
> volume folder contains two things: 1. a directory named "_current_", 2. a 
> file named "_in_use.lock_". Windows behaves differently from Linux when 
> renaming the parent folder of a locked file: Windows prevents the rename 
> while Linux allows it.
> Fix:
> To inject the data failure onto the volume, rename the unlocked folder 
> inside it instead of the volume folder itself. Since the folder inside the 
> volume is "_current_", we only need to inject the data failure into 
> _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-13 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13676:
--

Assignee: Zuoming Zhang

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When _FSImage.saveFSImageInAllDirs_ is called, there are actually no 
> directories in existence. This is because the _getConf()_ function doesn't 
> specify any directories to create.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should also fail on Linux, I would guess.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13673:
--

Assignee: Zuoming Zhang

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from _volume_name_ to 
> _volume_name_._origin_ and create a new file named _volume_name_. The 
> volume folder contains two things: 1. a directory named "_current_", 2. a 
> file named "_in_use.lock_". Windows behaves differently from Linux when 
> renaming the parent folder of a locked file: Windows prevents the rename 
> while Linux allows it.
> Fix:
> To inject the data failure onto the volume, rename the unlocked folder 
> inside it instead of the volume folder itself. Since the folder inside the 
> volume is "_current_", we only need to inject the data failure into 
> _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13676:
-
  Attachment: HDFS-13676.000.patch
Target Version/s: 2.9.1, 3.1.0  (was: 3.1.0, 2.9.1)
  Status: Patch Available  (was: Open)

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.1, 3.1.0
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13676.000.patch, TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When _FSImage.saveFSImageInAllDirs_ is called, there are actually no 
> directories in existence. This is because the _getConf()_ function doesn't 
> specify any directories to create.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should also fail on Linux, I would guess.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-13 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13675:
--
Description: 
Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
culprits are two tests:

testListOpenFilesNN1DownNN2Down

testSetBalancerBandwidthNN1DownNN2Down

 

that each take ~3 minutes to finish. This is because they both expect to fail 
to connect to 2 namenodes, but the client retry policy has way too many retries 
and exponential backoffs. 

  was:
Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
culprits are two tests:

testListOpenFilesNN1DownNN2Down

 

that each take ~3 minutes to finish. This is because they both expect to fail 
to connect to 2 namenodes, but the client retry policy has way too many retries 
and exponential backoffs. 


> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13675.000.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
> testSetBalancerBandwidthNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-161) Add functionality to queue ContainerClose command from SCM Heartbeat Response to Ratis

2018-06-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-161:
--
Summary: Add functionality to queue ContainerClose command from SCM 
Heartbeat Response to Ratis  (was: Add functionality to queue ContainerClose 
command from SCM Hearbeat Reposnse to Ratis)

> Add functionality to queue ContainerClose command from SCM Heartbeat Response 
> to Ratis
> --
>
> Key: HDDS-161
> URL: https://issues.apache.org/jira/browse/HDDS-161
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-161.00.patch, HDDS-161.01.patch, HDDS-161.02.patch
>
>
> When a container needs to be closed at the Datanode, SCM will queue a close 
> command which will be encoded as a part of Heartbeat Response to the 
> Datanode. This command will be picked up from the response at the Datanode 
> which will then be submitted to the XceiverServer to process the close 
> command. This will just queue a ContainerCloseCommand to Ratis, where the 
> leader would start the transaction while the followers will reject the 
> closeContainer request.
> While handling the container close inside the Datanode, we need to ensure all 
> the ongoing chunkWrites finish before the close can proceed. It should also 
> reject any incoming I/Os in between. This will be handled as part of a 
> separate JIRA.
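Purely as an illustration of the intended flow (every name below is a 
hypothetical stand-in, not the actual HDDS API):
{code:java}
// Hypothetical sketch of the dispatch path; the real patch wires this
// through the heartbeat response handler and the Ratis-backed server.
import java.util.List;

class CloseCommandDispatchSketch {
  interface RatisServer {          // stand-in for the Ratis-backed server
    void submitCloseContainer(long containerId);
  }

  static void onHeartbeatResponse(List<Long> closeContainerIds,
                                  RatisServer ratis) {
    // Each close command decoded from the heartbeat response is simply
    // queued to Ratis; the leader starts the close transaction and the
    // followers reject the request.
    for (long id : closeContainerIds) {
      ratis.submitCloseContainer(id);
    }
  }
}
{code}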



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13676:
-
Attachment: TestEditLogRace-Report.000.txt

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13676.000.patch, TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When _FSImage.saveFSImageInAllDirs_ is called, there are actually no 
> directories in existence. This is because the _getConf()_ function doesn't 
> specify any directories to create.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should also fail on Linux, I would guess.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13673:
-
Attachment: TestNameNodeMetrics-testVolumeFailures-Report.001.txt

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, HDFS-13673.001.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt, 
> TestNameNodeMetrics-testVolumeFailures-Report.001.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from _volume_name_ to 
> _volume_name_._origin_ and create a new file named _volume_name_. The 
> volume folder contains two things: 1. a directory named "_current_", 2. a 
> file named "_in_use.lock_". Windows behaves differently from Linux when 
> renaming the parent folder of a locked file: Windows prevents the rename 
> while Linux allows it.
> Fix:
> To inject the data failure onto the volume, rename the unlocked folder 
> inside it instead of the volume folder itself. Since the folder inside the 
> volume is "_current_", we only need to inject the data failure into 
> _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-13 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511856#comment-16511856
 ] 

Anbang Hu commented on HDFS-13675:
--

Thanks [~lukmajercak] for the patch. LGTM.

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13675.000.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-13 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13675:
--
Status: Patch Available  (was: In Progress)

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13675.000.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-13 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511855#comment-16511855
 ] 

Lukas Majercak commented on HDFS-13675:
---

Added patch000 to change the client retry policy configuration for these tests. 
The result is that the whole TestDFSAdminWithHA class finishes in under 3 
minutes on my machine.
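One plausible way to express that kind of change, as a hedged sketch (these 
client configuration keys exist in Hadoop, but the exact keys and values used 
by patch000 are not reproduced here):
{code:java}
// Sketch only: shrink client retries/backoff so tests that expect both
// NameNodes to be unreachable fail fast instead of retrying for minutes.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

class FastFailClientConfSketch {
  static void makeClientFailFast(Configuration conf) {
    conf.setInt(HdfsClientConfigKeys.Retry.MAX_ATTEMPTS_KEY, 1);
    conf.setInt(HdfsClientConfigKeys.Failover.MAX_ATTEMPTS_KEY, 1);
    conf.setInt(
        CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY, 1);
  }
}
{code}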

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13675.000.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-13 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13675:
--
Attachment: HDFS-13675.000.patch

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13675.000.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)
Zuoming Zhang created HDFS-13676:


 Summary: TestEditLogRace fails on Windows
 Key: HDFS-13676
 URL: https://issues.apache.org/jira/browse/HDFS-13676
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.9.1, 3.1.0
Reporter: Zuoming Zhang
 Fix For: 3.1.0, 2.9.1


_TestEditLogRace_ fails on Windows

 

Problem:

When _FSImage.saveFSImageInAllDirs_ is called, there are actually no 
directories in existence. This is because the _getConf()_ function doesn't 
specify any directories to create.

 

Fix:

Uncomment the two lines that configure the directories to be created.

 

Concern:

Not sure why they were commented out in change 
[https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
 It should also fail on Linux, I would guess.
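
For reference, a rough sketch of what the uncommented configuration looks like 
(key names assumed from _DFSConfigKeys_, not copied verbatim from the patch):
{code:java}
// Rough sketch, not the verbatim patch: give getConf() real name/edits
// directories so FSImage.saveFSImageInAllDirs has somewhere to write.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

class EditLogRaceConfSketch {
  static Configuration getConf(String nameDir) {
    Configuration conf = new Configuration();
    conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDir);
    conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, nameDir);
    return conf;
  }
}
{code}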



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-13 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13675 started by Lukas Majercak.
-
> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-13 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13675:
--
Description: 
Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
culprits are two tests:

testListOpenFilesNN1DownNN2Down

 

that each take ~3 minutes to finish. This is because they both expect to fail 
to connect to 2 namenodes, but the client retry policy has way too many retries 
and exponential backoffs. 

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-13 Thread Lukas Majercak (JIRA)
Lukas Majercak created HDFS-13675:
-

 Summary: Speed up TestDFSAdminWithHA
 Key: HDFS-13675
 URL: https://issues.apache.org/jira/browse/HDFS-13675
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, namenode
Reporter: Lukas Majercak
Assignee: Lukas Majercak






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511843#comment-16511843
 ] 

Zuoming Zhang commented on HDFS-13673:
--

[~elgoiri] This error doesn't seem to be related to mine, any idea?

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from _volume_name_ to 
> _volume_name_._origin_ and create a new file named _volume_name_. The 
> volume folder contains two things: 1. a directory named "_current_", 2. a 
> file named "_in_use.lock_". Windows behaves differently from Linux when 
> renaming the parent folder of a locked file: Windows prevents the rename 
> while Linux allows it.
> Fix:
> To inject the data failure onto the volume, rename the unlocked folder 
> inside it instead of the volume folder itself. Since the folder inside the 
> volume is "_current_", we only need to inject the data failure into 
> _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-06-13 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13563:
--
Attachment: HDFS-13563.002.patch

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Lukas Majercak
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13563.000.patch, HDFS-13563.001.patch, 
> HDFS-13563.002.patch
>
>
> [Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> TestDFSAdminWithHA has 4 timeout tests with "test timed out after 
> 30000 milliseconds"
> {code:java}
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-06-13 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511828#comment-16511828
 ] 

Lukas Majercak commented on HDFS-13563:
---

I think we can fix this by changing the retry policy on the client to have 
fewer retries and a faster timeout. This also speeds up other tests. Posted 
patch02 with the change.

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Lukas Majercak
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13563.000.patch, HDFS-13563.001.patch, 
> HDFS-13563.002.patch
>
>
> [Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> TestDFSAdminWithHA has 4 timeout tests with "test timed out after 
> 30000 milliseconds"
> {code:java}
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511790#comment-16511790
 ] 

genericqa commented on HDFS-13673:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
22s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
11s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestFileCorruption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13673 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927714/HDFS-13673.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 32fa9ceab2ef 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7566e0e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24432/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24432/testReport/ |
| Max. process+thread count | 3089 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24432/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (HDDS-156) Implement HDDSVolume to manage volume state

2018-06-13 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511781#comment-16511781
 ] 

Hanisha Koneru commented on HDDS-156:
-

Thanks for the review [~bharatviswa].

I fixed the checkstyle issues, but had to refactor 
{{TestRoundRobinVolumeChoosingPolicy}} to do so.

> Implement HDDSVolume to manage volume state
> ---
>
> Key: HDDS-156
> URL: https://issues.apache.org/jira/browse/HDDS-156
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-156-HDDS-48.001.patch, HDDS-156-HDDS-48.002.patch, 
> HDDS-156-HDDS-48.003.patch, HDDS-156-HDDS-48.004.patch, 
> HDDS-156-HDDS-48.005.patch
>
>
> This Jira proposes the following:
>  # Implement HDDSVolume to encompass VolumeInfo along with other 
> HDDS-specific fields.
>  ** VolumeInfo contains disk-specific information such as capacity, usage, 
> storageType. HddsVolume has HDDS-specific fields for the volume such as 
> VolumeState, VolumeStats (will be added later).
>  # Write volume level Version file 
>  ** clusterID, storageID, datanodeUUID, creationTime and layoutVersion.
>  # Read Version file while instantiating HDDSVolumes.
>  ** When the volume Version file already exists (for example, when a DN is 
> restarted), then the version file is read for the stored clusterID, 
> datanodeUuid, layoutVersion etc. Some checks will be performed to verify the 
> sanity of the volume.
>  ** When a fresh Datanode is started, the Version file is not written to the 
> volume until the clusterID is received from the SCM.
>  
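For illustration, a minimal sketch of such a volume-level Version file, 
assuming the java.util.Properties-style layout Hadoop storage directories 
typically use (the real format is whatever the patch defines):
{code:java}
// Minimal sketch: field names follow the description above, not the
// actual patch.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

class VolumeVersionFileSketch {
  static void write(File versionFile, String clusterId, String storageId,
                    String datanodeUuid, long creationTime, int layoutVersion)
      throws Exception {
    Properties props = new Properties();
    props.setProperty("clusterID", clusterId);
    props.setProperty("storageID", storageId);
    props.setProperty("datanodeUuid", datanodeUuid);
    props.setProperty("cTime", String.valueOf(creationTime));
    props.setProperty("layoutVersion", String.valueOf(layoutVersion));
    try (FileOutputStream out = new FileOutputStream(versionFile)) {
      props.store(out, null);
    }
  }

  static Properties read(File versionFile) throws Exception {
    // Callers verify the stored clusterID, datanodeUuid, layoutVersion
    // etc. against expected values to sanity-check the volume.
    Properties props = new Properties();
    try (FileInputStream in = new FileInputStream(versionFile)) {
      props.load(in);
    }
    return props;
  }
}
{code}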



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-156) Implement HDDSVolume to manage volume state

2018-06-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-156:

Attachment: HDDS-156-HDDS-48.005.patch

> Implement HDDSVolume to manage volume state
> ---
>
> Key: HDDS-156
> URL: https://issues.apache.org/jira/browse/HDDS-156
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-156-HDDS-48.001.patch, HDDS-156-HDDS-48.002.patch, 
> HDDS-156-HDDS-48.003.patch, HDDS-156-HDDS-48.004.patch, 
> HDDS-156-HDDS-48.005.patch
>
>
> This Jira proposes the following:
>  # Implement HDDSVolume to encompass VolumeInfo along with other 
> HDDS-specific fields.
>  ** VolumeInfo contains disk-specific information such as capacity, usage, 
> storageType. HddsVolume has HDDS-specific fields for the volume such as 
> VolumeState, VolumeStats (will be added later).
>  # Write volume level Version file 
>  ** clusterID, storageID, datanodeUUID, creationTime and layoutVersion.
>  # Read Version file while instantiating HDDSVolumes.
>  ** When the volume Version file already exists (for example, when a DN is 
> restarted), then the version file is read for the stored clusterID, 
> datanodeUuid, layoutVersion etc. Some checks will be performed to verify the 
> sanity of the volume.
>  ** When a fresh Datanode is started, the Version file is not written to the 
> volume until the clusterID is received from the SCM.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13624) TestHDFSFileSystemContract#testAppend times out on Windows on the first run

2018-06-13 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu reassigned HDFS-13624:


Assignee: Lukas Majercak

> TestHDFSFileSystemContract#testAppend times out on Windows on the first run
> ---
>
> Key: HDFS-13624
> URL: https://issues.apache.org/jira/browse/HDFS-13624
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Lukas Majercak
>Priority: Minor
>
> Sometimes, the first run of TestHDFSFileSystemContract#testAppend spends a 
> long time inside WindowsSelectorImpl$poll0:
> {code:java}
> private native int poll0(long var1, int var3, int[] var4, int[] var5, int[] 
> var6, long var7);
> {code}
> A second run of the test does not time out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-06-13 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu reassigned HDFS-13563:


Assignee: Lukas Majercak

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Lukas Majercak
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13563.000.patch, HDFS-13563.001.patch
>
>
> [Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> TestDFSAdminWithHA has 4 timeout tests with "test timed out after 
> 30000 milliseconds"
> {code:java}
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13608) [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC

2018-06-13 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511769#comment-16511769
 ] 

Erik Krogen commented on HDFS-13608:


Sure, makes sense [~shv]. Just uploaded the v004 patch, which removes the one 
usage of Java 8 streams. I'm not really sure what happened with the last 
Jenkins run, but it doesn't look related to this patch... Will wait to see if 
the next run has the same issues before diving into them.
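
For context, a sketch of the kind of rewrite involved (the actual stream usage 
removed by v004 may look different):
{code:java}
// Illustrative only: replacing a Java 8 stream pipeline with a plain loop.
import java.util.ArrayList;
import java.util.List;

class StreamRemovalSketch {
  // Before (stream version, removed):
  //   return buffers.stream().map(b -> b.length).collect(Collectors.toList());
  static List<Integer> lengths(List<byte[]> buffers) {
    List<Integer> result = new ArrayList<>();
    for (byte[] buffer : buffers) {
      result.add(buffer.length);
    }
    return result;
  }
}
{code}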

> [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC
> -
>
> Key: HDFS-13608
> URL: https://issues.apache.org/jira/browse/HDFS-13608
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13608-HDFS-12943.000.patch, 
> HDFS-13608-HDFS-12943.001.patch, HDFS-13608-HDFS-12943.002.patch, 
> HDFS-13608-HDFS-12943.003.patch, HDFS-13608-HDFS-12943.004.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is to make the JournalNode-side changes necessary to support 
> serving edits via RPC. This includes interacting with the cache added in 
> HDFS-13607.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13608) [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC

2018-06-13 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13608:
---
Attachment: HDFS-13608-HDFS-12943.004.patch

> [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC
> -
>
> Key: HDFS-13608
> URL: https://issues.apache.org/jira/browse/HDFS-13608
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13608-HDFS-12943.000.patch, 
> HDFS-13608-HDFS-12943.001.patch, HDFS-13608-HDFS-12943.002.patch, 
> HDFS-13608-HDFS-12943.003.patch, HDFS-13608-HDFS-12943.004.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is to make the JournalNode-side changes necessary to support 
> serving edits via RPC. This includes interacting with the cache added in 
> HDFS-13607.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-156) Implement HDDSVolume to manage volume state

2018-06-13 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511765#comment-16511765
 ] 

Bharat Viswanadham commented on HDDS-156:
-

+1 LGTM. (Checkstyle issues need to be fixed.)

> Implement HDDSVolume to manage volume state
> ---
>
> Key: HDDS-156
> URL: https://issues.apache.org/jira/browse/HDDS-156
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-156-HDDS-48.001.patch, HDDS-156-HDDS-48.002.patch, 
> HDDS-156-HDDS-48.003.patch, HDDS-156-HDDS-48.004.patch
>
>
> This Jira proposes the following:
>  # Implement HDDSVolume to encompass VolumeInfo along with other 
> HDDS-specific fields.
>  ** VolumeInfo contains disk-specific information such as capacity, usage, 
> storageType. HddsVolume has HDDS-specific fields for the volume such as 
> VolumeState, VolumeStats (will be added later).
>  # Write volume level Version file 
>  ** clusterID, storageID, datanodeUUID, creationTime and layoutVersion.
>  # Read Version file while instantiating HDDSVolumes.
>  ** When the volume Version file already exists (for example, when a DN is 
> restarted), then the version file is read for the stored clusterID, 
> datanodeUuid, layoutVersion etc. Some checks will be performed to verify the 
> sanity of the volume.
>  ** When a fresh Datanode is started, the Version file is not written to the 
> volume until the clusterID is received from the SCM.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13674) Improve documentation on Metrics

2018-06-13 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13674:
---

 Summary: Improve documentation on Metrics
 Key: HDFS-13674
 URL: https://issues.apache.org/jira/browse/HDFS-13674
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation, metrics
Reporter: Chao Sun
Assignee: Chao Sun


There are a few confusing places in the [Hadoop Metrics 
page|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html].
 For instance, there are duplicated entries such as {{FsImageLoadTime}}; some 
quantile metrics do not have corresponding entries, description on some 
quantile metrics are not very specific on what is the {{num}} variable in the 
metrics name, etc.

This JIRA targets at improving this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-161) Add functionality to queue ContainerClose command from SCM Hearbeat Reposnse to Ratis

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511735#comment-16511735
 ] 

genericqa commented on HDDS-161:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 37m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  5m  
4s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 33m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 33m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 33m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
23s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 26s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-161 |
| JIRA Patch URL | 

[jira] [Commented] (HDDS-156) Implement HDDSVolume to manage volume state

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511722#comment-16511722
 ] 

genericqa commented on HDDS-156:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
44s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} hadoop-hdds/common in HDDS-48 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdds: The patch generated 9 new + 7 
unchanged - 5 fixed = 16 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-156 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927706/HDDS-156-HDDS-48.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 499611e1d15b 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / 7e228e5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Updated] (HDDS-161) Add functionality to queue ContainerClose command from SCM Heartbeat Response to Ratis

2018-06-13 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-161:
-
Status: Patch Available  (was: Open)

> Add functionality to queue ContainerClose command from SCM Heartbeat Response 
> to Ratis
> -
>
> Key: HDDS-161
> URL: https://issues.apache.org/jira/browse/HDDS-161
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-161.00.patch, HDDS-161.01.patch, HDDS-161.02.patch
>
>
> When a container needs to be closed at the Datanode, SCM will queue a close 
> command which will be encoded as a part of Heartbeat Response to the 
> Datanode. This command will be picked up from the response at the Datanode 
> which will then be submitted to the XceiverServer to process the close 
> command. This will just queue a ContainerCloseCommand to Ratis, where the 
> leader would start the transaction while the followers will reject the 
> closeContainer request.
> While handling the close container inside the Datanode, we need to ensure all 
> the ongoing chunkWrites finish before the close can proceed. It should 
> also reject any incoming I/Os in between. This will be handled as part of a 
> separate jira.
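
A hypothetical sketch of the dispatch path described above; names beyond the CloseContainerCommand and Ratis XceiverServer mentioned in this issue are illustrative, not the actual patch API:
{code}
// Sketch only: the DN picks the close command out of the heartbeat response
// and hands it to the Ratis server. buildCloseContainerRequest() and the
// submitRequest() signature are hypothetical, for illustration.
CloseContainerCommand cmd = (CloseContainerCommand) commandFromHeartbeat;
ContainerCommandRequestProto request =
    buildCloseContainerRequest(cmd.getContainerID());
// Queuing to Ratis: the leader starts the transaction, the followers reject
// the closeContainer request.
xceiverServerRatis.submitRequest(request);
{code}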



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-161) Add functionality to queue ContainerClose command from SCM Heartbeat Response to Ratis

2018-06-13 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-161:
-
Status: Open  (was: Patch Available)

> Add functionality to queue ContainerClose command from SCM Heartbeat Response 
> to Ratis
> -
>
> Key: HDDS-161
> URL: https://issues.apache.org/jira/browse/HDDS-161
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-161.00.patch, HDDS-161.01.patch, HDDS-161.02.patch
>
>
> When a container needs to be closed at the Datanode, SCM will queue a close 
> command which will be encoded as a part of Heartbeat Response to the 
> Datanode. This command will be picked up from the response at the Datanode 
> which will then be submitted to the XceiverServer to process the close 
> command. This will just queue a ContainerCloseCommand to Ratis, where the 
> leader would start the transaction while the followers will reject the 
> closeContainer request.
> While handling the close container inside the Datanode, we need to ensure all 
> the ongoing chunkWrites finish before the close can proceed. It should 
> also reject any incoming I/Os in between. This will be handled as part of a 
> separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511689#comment-16511689
 ] 

Íñigo Goiri commented on HDFS-13673:


bq. What do you mean by extracting the variable? I think this is only used once 
in this file?

Just a code-style nit to avoid massive one-liners:
{code}
File storageDir = new File(dataDir, Storage.STORAGE_DIR_CURRENT);
DataNodeTestUtils.injectDataDirFailure(storageDir);
{code}

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from 
> _volume_name_ to _volume_name_._origin_ and create a new file named 
> _volume_name_. Inside the folder there are two things: 1. a directory named 
> "_current_", 2. a file named "_in_use.lock_". Windows behaves differently 
> from Linux when renaming the parent folder of a locked file: Windows prevents 
> the rename while Linux allows it.
> Fix:
> In order to inject a data failure into the volume, instead of renaming the 
> volume folder itself, rename the folder inside it that doesn't hold a lock. 
> Since the folder inside the volume is "_current_", we only need to inject 
> the data failure into _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511682#comment-16511682
 ] 

Zuoming Zhang commented on HDFS-13673:
--

Thanks [~elgoiri]

Answers to your questions:
 * Nope. The other places that call _DataNodeTestUtils.injectDataDirFailure_ do 
not call it on the volume folder itself, so they are not affected by 
_in_use.lock_. I've also checked all the other tests that call 
_injectDataDirFailure_, and they are not failing.
 * What do you mean by extracting the variable? I think this is only used once 
in this file?

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from 
> _volume_name_ to _volume_name_._origin_ and create a new file named 
> _volume_name_. Inside the folder there are two things: 1. a directory named 
> "_current_", 2. a file named "_in_use.lock_". Windows behaves differently 
> from Linux when renaming the parent folder of a locked file: Windows prevents 
> the rename while Linux allows it.
> Fix:
> In order to inject a data failure into the volume, instead of renaming the 
> volume folder itself, rename the folder inside it that doesn't hold a lock. 
> Since the folder inside the volume is "_current_", we only need to inject 
> the data failure into _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511673#comment-16511673
 ] 

Íñigo Goiri commented on HDFS-13673:


The fix looks good.
A couple of comments:
* Is there any other test affected by the same issue that could use a similar fix?
* Can we extract the variable?

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from 
> _volume_name_ to _volume_name_._origin_ and create a new file named 
> _volume_name_. Inside the folder there are two things: 1. a directory named 
> "_current_", 2. a file named "_in_use.lock_". Windows behaves differently 
> from Linux when renaming the parent folder of a locked file: Windows prevents 
> the rename while Linux allows it.
> Fix:
> In order to inject a data failure into the volume, instead of renaming the 
> volume folder itself, rename the folder inside it that doesn't hold a lock. 
> Since the folder inside the volume is "_current_", we only need to inject 
> the data failure into _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13673:
-
  Attachment: HDFS-13673.000.patch
Target Version/s: 2.9.1, 3.1.0  (was: 3.1.0, 2.9.1)
  Status: Patch Available  (was: Open)

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.1, 3.1.0
>Reporter: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from 
> _volume_name_ to _volume_name_._origin_ and create a new file named 
> _volume_name_. Inside the folder there are two things: 1. a directory named 
> "_current_", 2. a file named "_in_use.lock_". Windows behaves differently 
> from Linux when renaming the parent folder of a locked file: Windows prevents 
> the rename while Linux allows it.
> Fix:
> In order to inject a data failure into the volume, instead of renaming the 
> volume folder itself, rename the folder inside it that doesn't hold a lock. 
> Since the folder inside the volume is "_current_", we only need to inject 
> the data failure into _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13673:
-
Attachment: TestNameNodeMetrics-testVolumeFailures-Report.000.txt

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from 
> _volume_name_ to _volume_name_._origin_ and create a new file named 
> _volume_name_. Inside the folder there are two things: 1. a directory named 
> "_current_", 2. a file named "_in_use.lock_". Windows behaves differently 
> from Linux when renaming the parent folder of a locked file: Windows prevents 
> the rename while Linux allows it.
> Fix:
> In order to inject a data failure into the volume, instead of renaming the 
> volume folder itself, rename the folder inside it that doesn't hold a lock. 
> Since the folder inside the volume is "_current_", we only need to inject 
> the data failure into _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-13 Thread Zuoming Zhang (JIRA)
Zuoming Zhang created HDFS-13673:


 Summary: TestNameNodeMetrics fails on Windows
 Key: HDFS-13673
 URL: https://issues.apache.org/jira/browse/HDFS-13673
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.9.1, 3.1.0
Reporter: Zuoming Zhang
 Fix For: 3.1.0, 2.9.1


_TestNameNodeMetrics_ fails on Windows

 

Problem:

This is because _testVolumeFailures_ calls 
_DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
_injectDataDirFailure_ does is rename the folder from 
_volume_name_ to _volume_name_._origin_ and create a new file named 
_volume_name_. Inside the folder there are two things: 1. a directory named 
"_current_", 2. a file named "_in_use.lock_". Windows behaves differently from 
Linux when renaming the parent folder of a locked file: Windows prevents the 
rename while Linux allows it.

Fix:

In order to inject a data failure into the volume, instead of renaming the 
volume folder itself, rename the folder inside it that doesn't hold a lock. 
Since the folder inside the volume is "_current_", we only need to inject the 
data failure into _volume_name/current_.
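
A minimal sketch of the fix (assuming the test already holds the volume root in a variable; Storage.STORAGE_DIR_CURRENT is the existing constant for "current" and DataNodeTestUtils.injectDataDirFailure is the existing test helper):
{code}
// Rename volume_name/current instead of volume_name itself, so the in_use.lock
// file held in the volume root never blocks the rename on Windows.
File volumeDir = new File(dataDir); // volume root (holds in_use.lock); dataDir assumed
File storageDir = new File(volumeDir, Storage.STORAGE_DIR_CURRENT); // volume_name/current
// Renames current -> current.origin and creates a file named current.
DataNodeTestUtils.injectDataDirFailure(storageDir);
{code}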



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-156) Implement HDDSVolume to manage volume state

2018-06-13 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511633#comment-16511633
 ] 

Hanisha Koneru edited comment on HDDS-156 at 6/13/18 8:47 PM:
--

In Patch v04, addressed review comments and fixed checkstyle issues, findbugs 
issues, and unit test failures.

Added format function for volumes. This will be called when we get the 
clusterID from SCM and want to format the volume with this information.


was (Author: hanishakoneru):
In Patch v04, addressed review comments and fixed checkstyle issues, findbugs 
issues, and unit test failures.

> Implement HDDSVolume to manage volume state
> ---
>
> Key: HDDS-156
> URL: https://issues.apache.org/jira/browse/HDDS-156
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-156-HDDS-48.001.patch, HDDS-156-HDDS-48.002.patch, 
> HDDS-156-HDDS-48.003.patch, HDDS-156-HDDS-48.004.patch
>
>
> This Jira proposes the following:
>  # Implement HDDSVolume to encompass VolumeInfo along with other HDDS 
> specific fields.
>  ** VolumeInfo contains disk-specific information such as capacity, usage and 
> storageType. HddsVolume has HDDS-specific fields for the volume such as 
> VolumeState and VolumeStats (will be added later).
>  # Write volume level Version file 
>  ** clusterID, storageID, datanodeUUID, creationTime and layoutVersion.
>  # Read Version file while instantiating HDDSVolumes.
>  ** When the volume Version file already exists (for example, when a DN is 
> restarted), then the version file is read for the stored clusterID, 
> datanodeUuid, layoutVersion etc. Some checks will be performed to verify the 
> sanity of the volume.
>  ** When a fresh Datanode is started, the Version file is not written to the 
> volume until the clusterID is received from the SCM.
>  
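
For illustration only, a hypothetical sketch of writing such a volume-level Version file as java.util.Properties; the actual HddsVolume format in the patch may differ, and the field names simply follow the list above:
{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Properties;

public class VersionFileSketch {
  // Write volumeRoot/VERSION; reading it back on a DN restart would be the
  // reverse operation with Properties.load().
  static void writeVersionFile(File volumeRoot, String clusterId, String storageId,
      String datanodeUuid, int layoutVersion) throws IOException {
    Properties props = new Properties();
    props.setProperty("clusterID", clusterId);
    props.setProperty("storageID", storageId);
    props.setProperty("datanodeUuid", datanodeUuid);
    props.setProperty("creationTime", String.valueOf(System.currentTimeMillis()));
    props.setProperty("layoutVersion", String.valueOf(layoutVersion));
    try (OutputStream out = new FileOutputStream(new File(volumeRoot, "VERSION"))) {
      props.store(out, "HDDS volume version file");
    }
  }
}
{code}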



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-156) Implement HDDSVolume to manage volume state

2018-06-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-156:

Description: 
This Jira proposes the following:
 # Implement HDDSVolume to encompass VolumeInfo along with other HDDS specific 
fields.
 ** VolumeInfo contains disk-specific information such as capacity, usage and 
storageType. HddsVolume has HDDS-specific fields for the volume such as 
VolumeState and VolumeStats (will be added later).
 # Write volume level Version file 
 ** clusterID, storageID, datanodeUUID, creationTime and layoutVersion.
 # Read Version file while instantiating HDDSVolumes.
 ** When the volume Version file already exists (for example, when a DN is 
restarted), then the version file is read for the stored clusterID, 
datanodeUuid, layoutVersion etc. Some checks will be performed to verify the 
sanity of the volume.
 ** When a fresh Datanode is started, the Version file is not written to the 
volume until the clusterID is received from the SCM.

 

  was:
This Jira proposes the following:
 # Implement HDDSVolume to encompass VolumeInfo along with other HDDS specific 
fields.
 # Read Version file while instantiating HDDSVolumes.
 # Write volume level Version file.

 

[Edit]: Adding the information below, which might be useful to add to the Jira 
description.

 1. DN is restarted (volumes already exist). In this case, the clusterId is 
inferred from the version file.
 2. New setup. In this case, we do not know the clusterID till we contact the SCM. 
So the volumes will be in NOT_FORMATTED state and the version file will not be 
written.

 

In case 2, once we get the clusterID from SCM, we need to format the volumes. 
This is not done in the current patch.


> Implement HDDSVolume to manage volume state
> ---
>
> Key: HDDS-156
> URL: https://issues.apache.org/jira/browse/HDDS-156
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-156-HDDS-48.001.patch, HDDS-156-HDDS-48.002.patch, 
> HDDS-156-HDDS-48.003.patch, HDDS-156-HDDS-48.004.patch
>
>
> This Jira proposes the following:
>  # Implement HDDSVolume to encompass VolumeInfo along with other HDDS 
> specific fields.
>  ** VolumeInfo contains disk-specific information such as capacity, usage and 
> storageType. HddsVolume has HDDS-specific fields for the volume such as 
> VolumeState and VolumeStats (will be added later).
>  # Write volume level Version file 
>  ** clusterID, storageID, datanodeUUID, creationTime and layoutVersion.
>  # Read Version file while instantiating HDDSVolumes.
>  ** When the volume Version file already exists (for example, when a DN is 
> restarted), then the version file is read for the stored clusterID, 
> datanodeUuid, layoutVersion etc. Some checks will be performed to verify the 
> sanity of the volume.
>  ** When a fresh Datanode is started, the Version file is not written to the 
> volume until the clusterID is received from the SCM.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-156) Implement HDDSVolume to manage volume state

2018-06-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-156:

Attachment: HDDS-156-HDDS-48.004.patch

> Implement HDDSVolume to manage volume state
> ---
>
> Key: HDDS-156
> URL: https://issues.apache.org/jira/browse/HDDS-156
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-156-HDDS-48.001.patch, HDDS-156-HDDS-48.002.patch, 
> HDDS-156-HDDS-48.003.patch, HDDS-156-HDDS-48.004.patch
>
>
> This Jira proposes the following:
>  # Implement HDDSVolume to encompass VolumeInfo along with other HDDS 
> specific fields.
>  # Read Version file while instantiating HDDSVolumes.
>  # Write volume level Version file.
>  
> [Edit]: Adding the information below, which might be useful to add to the Jira 
> description.
>  1. DN is restarted (volumes already exist). In this case, the clusterId is 
> inferred from the version file.
>  2. New setup. In this case, we do not know the clusterID till we contact the 
> SCM. So the volumes will be in NOT_FORMATTED state and the version file will 
> not be written.
>  
> In case 2, once we get the clusterID from SCM, we need to format the volumes. 
> This is not done in the current patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-156) Implement HDDSVolume to manage volume state

2018-06-13 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511633#comment-16511633
 ] 

Hanisha Koneru commented on HDDS-156:
-

In Patch v04, addressed review comments and fixed checkstyle issues, findbugs 
issues, and unit test failures.

> Implement HDDSVolume to manage volume state
> ---
>
> Key: HDDS-156
> URL: https://issues.apache.org/jira/browse/HDDS-156
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-156-HDDS-48.001.patch, HDDS-156-HDDS-48.002.patch, 
> HDDS-156-HDDS-48.003.patch, HDDS-156-HDDS-48.004.patch
>
>
> This Jira proposes the following:
>  # Implement HDDSVolume to encompass VolumeInfo along with other HDDS 
> specific fields.
>  # Read Version file while instantiating HDDSVolumes.
>  # Write volume level Version file.
>  
> [Edit]: Adding the information below, which might be useful to add to the Jira 
> description.
>  1. DN is restarted (volumes already exist). In this case, the clusterId is 
> inferred from the version file.
>  2. New setup. In this case, we do not know the clusterID till we contact the 
> SCM. So the volumes will be in NOT_FORMATTED state and the version file will 
> not be written.
>  
> In case 2, once we get the clusterID from SCM, we need to format the volumes. 
> This is not done in the current patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-13 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511612#comment-16511612
 ] 

Chao Sun commented on HDFS-12976:
-

Hmm.. there are some test failures that are related. Will fix and submit a new 
patch.

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement the {{FailoverProxyProvider}} 
> interface and be able to submit read requests to the ANN and SBN(s).
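
For context, a hedged skeleton of the {{FailoverProxyProvider}} contract referenced here; the actual ObserverReadProxyProvider logic for routing reads to the ANN/SBN(s) lives in the attached patches and is not reproduced:
{code}
import java.io.IOException;
import org.apache.hadoop.io.retry.FailoverProxyProvider;

// Skeleton of the interface contract only; bodies intentionally left abstract.
public abstract class ReadProxyProviderSketch<T> implements FailoverProxyProvider<T> {
  @Override
  public abstract Class<T> getInterface();         // the RPC protocol, e.g. ClientProtocol

  @Override
  public abstract ProxyInfo<T> getProxy();         // current target: ANN or an SBN

  @Override
  public abstract void performFailover(T current); // switch targets after a failure

  @Override
  public abstract void close() throws IOException; // FailoverProxyProvider extends Closeable
}
{code}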



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511585#comment-16511585
 ] 

genericqa commented on HDFS-12976:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 3s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
40s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
29s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 25s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}251m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider |
|   | hadoop.hdfs.TestDFSClientFailover |
|   | hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.TestDFSUtil |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-12976 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927586/HDFS-12976-HDFS-12943.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc 

[jira] [Commented] (HDDS-161) Add functionality to queue ContainerClose command from SCM Heartbeat Response to Ratis

2018-06-13 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511580#comment-16511580
 ] 

Shashikant Banerjee commented on HDDS-161:
--

Thanks [~anu] for the review. Patch v2 fixes the unused imports.

> Add functionality to queue ContainerClose command from SCM Heartbeat Response 
> to Ratis
> -
>
> Key: HDDS-161
> URL: https://issues.apache.org/jira/browse/HDDS-161
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-161.00.patch, HDDS-161.01.patch, HDDS-161.02.patch
>
>
> When a container needs to be closed at the Datanode, SCM will queue a close 
> command which will be encoded as a part of Heartbeat Response to the 
> Datanode. This command will be picked up from the response at the Datanode 
> which will then be submitted to the XceiverServer to process the close 
> command. This will just queue a ContainerCloseCommand to Ratis, where the 
> leader would start the transaction while the followers will reject the 
> closeContainer request.
> While handling the close container inside the Datanode, we need to ensure all 
> the ongoing chunkWrites finish before the close can proceed. It should 
> also reject any incoming I/Os in between. This will be handled as part of a 
> separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-161) Add functionality to queue ContainerClose command from SCM Heartbeat Response to Ratis

2018-06-13 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-161:
-
Attachment: HDDS-161.02.patch

> Add functionality to queue ContainerClose command from SCM Heartbeat Response 
> to Ratis
> -
>
> Key: HDDS-161
> URL: https://issues.apache.org/jira/browse/HDDS-161
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-161.00.patch, HDDS-161.01.patch, HDDS-161.02.patch
>
>
> When a container needs to be closed at the Datanode, SCM will queue a close 
> command which will be encoded as a part of Heartbeat Response to the 
> Datanode. This command will be picked up from the response at the Datanode 
> which will then be submitted to the XceiverServer to process the close 
> command. This will just queue a ContainerCloseCommand to Ratis, where the 
> leader would start the transaction while the followers will reject the 
> closeContainer request.
> While handling the close container inside the Datanode, we need to ensure all 
> the ongoing chunkWrites finish before the close can proceed. It should 
> also reject any incoming I/Os in between. This will be handled as part of a 
> separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-161) Add functionality to queue ContainerClose command from SCM Heartbeat Response to Ratis

2018-06-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511573#comment-16511573
 ] 

Anu Engineer commented on HDDS-161:
---

+1 on this change. I will commit this as soon as we fix the unused imports in 
the following files.
{noformat}

XceiverServer.java
XceiverServerGrpc.java
XceiverServerRatis.java
XceiverServerSPI.java{noformat}

> Add functionality to queue ContainerClose command from SCM Heartbeat Response 
> to Ratis
> -
>
> Key: HDDS-161
> URL: https://issues.apache.org/jira/browse/HDDS-161
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-161.00.patch, HDDS-161.01.patch
>
>
> When a container needs to be closed at the Datanode, SCM will queue a close 
> command which will be encoded as a part of Heartbeat Response to the 
> Datanode. This command will be picked up from the response at the Datanode 
> which will then be submitted to the XceiverServer to process the close 
> command. This will just queue a ContainerCloseCommand to Ratis, where the 
> leader would start the transaction while the followers will reject the 
> closeContainer request.
> While handling the close container inside the Datanode, we need to ensure all 
> the ongoing chunkWrites finish before the close can proceed. It should 
> also reject any incoming I/Os in between. This will be handled as part of a 
> separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-159) RestClient: Implement list operations for volume, bucket and keys

2018-06-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511564#comment-16511564
 ] 

Hudson commented on HDDS-159:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14422 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14422/])
HDDS-159. RestClient: Implement list operations for volume, bucket and (xyao: 
rev 7566e0ec5f1aff4cf3c53f4ccc5f3b57fff1e216)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestBucketsRatis.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolume.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
* (edit) 
hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone-shell.robot
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestBuckets.java


> RestClient: Implement list operations for volume, bucket and keys
> -
>
> Key: HDDS-159
> URL: https://issues.apache.org/jira/browse/HDDS-159
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Client
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-159.001.patch, HDDS-159.002.patch
>
>
> Currently RestClient does not provide an implementation for list volume, list 
> buckets and list keys. This Jira aims to add the implementation and the 
> necessary tests for the same.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511555#comment-16511555
 ] 

genericqa commented on HDFS-12284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m  
0s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
30s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
43s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-12284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927689/HDFS-12284.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 595950534d40 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 43baa03 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24431/testReport/ |
| Max. process+thread count | 954 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24431/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Support for Kerberos 

[jira] [Updated] (HDDS-159) RestClient: Implement list operations for volume, bucket and keys

2018-06-13 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-159:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ljain] for the contribution. I've committed the patch to the trunk. 

> RestClient: Implement list operations for volume, bucket and keys
> -
>
> Key: HDDS-159
> URL: https://issues.apache.org/jira/browse/HDDS-159
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Client
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-159.001.patch, HDDS-159.002.patch
>
>
> Currently RestClient does not provide an implementation for list volume, list 
> buckets and list keys. This Jira aims to add the implementation and the 
> necessary tests for the same.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13174) hdfs mover -p /path times out after 20 min

2018-06-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511487#comment-16511487
 ] 

Wei-Chiu Chuang commented on HDFS-13174:


[~pifta] I'm really really sorry about missing this: would you please also take 
the time to fix the javac warnings?
{quote}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java:[1598,30]
 [deprecation] DFS_CLIENT_SOCKET_TIMEOUT_KEY in DFSConfigKeys has been 
deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestMover.java:[705,30]
 [deprecation] DFS_CLIENT_SOCKET_TIMEOUT_KEY in DFSConfigKeys has been 
deprecated
{quote}
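
A hedged sketch of the kind of one-line change that clears such warnings, assuming the replacement constant lives in HdfsClientConfigKeys (as the deprecation in DFSConfigKeys points to) and that {{conf}} is the test's Configuration:
{code}
// Instead of the deprecated constant:
// conf.setInt(DFSConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY, 2000);
// use the client-side key:
conf.setInt(HdfsClientConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY, 2000);
{code}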

> hdfs mover -p /path times out after 20 min
> --
>
> Key: HDFS-13174
> URL: https://issues.apache.org/jira/browse/HDFS-13174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
> Attachments: HDFS-13174.001.patch, HDFS-13174.002.patch, 
> HDFS-13174.003.patch, HDFS-13174.004.patch
>
>
> In HDFS-11015 an iteration timeout was introduced in the Dispatcher.Source 
> class, which is checked while dispatching the moves that the Balancer and the 
> Mover do. This timeout is hardwired to 20 minutes.
> In the Balancer we have iterations, and even if an iteration times out, the 
> Balancer runs further and does another iteration before it fails if no moves 
> happened in a few iterations.
> The Mover, on the other hand, does not have iterations, so if moving a path 
> runs for more than 20 minutes and there are moves decided and enqueued 
> between two DataNodes, after 20 minutes the Mover will stop with the following 
> exception reported to the console (lines might differ as this exception came 
> from a CDH5.12.1 installation).
>  java.io.IOException: Block move timed out
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:382)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:328)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2500(Dispatcher.java:186)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:956)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  
> Note that this issue does not come up if all blocks can be moved inside the 
> DataNodes without having to move a block to another DataNode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-06-13 Thread Sherwood Zheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sherwood Zheng updated HDFS-12284:
--
Attachment: HDFS-12284.003.patch

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-109) Add reconnect logic for XceiverClientGrpc

2018-06-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511446#comment-16511446
 ] 

Hudson commented on HDDS-109:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14421 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14421/])
HDDS-109. Add reconnect logic for XceiverClientGrpc. Contributed by (aengineer: 
rev 43baa036aeb025bcbed1aca19837b072f2c14a6a)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java


> Add reconnect logic for XceiverClientGrpc
> -
>
> Key: HDDS-109
> URL: https://issues.apache.org/jira/browse/HDDS-109
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-109.001.patch, HDDS-109.003.patch, 
> HDDS-109.004.patch
>
>
> We need to add reconnect logic in XceiverClientGrpc which allows it to 
> reconnect in case of a DN restart.
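
A hypothetical illustration of what such a reconnect check can look like with grpc-java (the actual patch logic may differ; ManagedChannel.isShutdown()/isTerminated() are standard grpc-java APIs):
{code}
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// Sketch only: rebuild the channel before an RPC if the DN has restarted.
class ReconnectSketch {
  private final String host;
  private final int port;
  private ManagedChannel channel;

  ReconnectSketch(String host, int port) {
    this.host = host;
    this.port = port;
  }

  ManagedChannel ensureConnected() {
    if (channel == null || channel.isShutdown() || channel.isTerminated()) {
      channel = ManagedChannelBuilder.forAddress(host, port).usePlaintext().build();
    }
    return channel;
  }
}
{code}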



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-109) Add reconnect logic for XceiverClientGrpc

2018-06-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-109:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~msingh] Thanks for the reviews. [~ljain] Thanks for the contribution. I have 
committed this to the trunk.

> Add reconnect logic for XceiverClientGrpc
> -
>
> Key: HDDS-109
> URL: https://issues.apache.org/jira/browse/HDDS-109
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-109.001.patch, HDDS-109.003.patch, 
> HDDS-109.004.patch
>
>
> We need to add reconnect logic in XceiverClientGrpc which allows it to 
> reconnect in case of a DN restart.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-109) Add reconnect logic for XceiverClientGrpc

2018-06-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511406#comment-16511406
 ] 

Anu Engineer commented on HDDS-109:
---

+1, Thank you for taking care of that. I will commit shortly.

> Add reconnect logic for XceiverClientGrpc
> -
>
> Key: HDDS-109
> URL: https://issues.apache.org/jira/browse/HDDS-109
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-109.001.patch, HDDS-109.003.patch, 
> HDDS-109.004.patch
>
>
> We need to add reconnect logic in XceiverClientGrpc which allows it to 
> reconnect in case of a DN restart.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2018-06-13 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511402#comment-16511402
 ] 

Daryn Sharp commented on HDFS-13671:


I'm not surprised. I fully expected we'd have to revert when we start scale 
testing 3.x.

The folded tree jira has micro-benchmarks demonstrating a 4x degradation in 
performance. Micro-benchmarks tend to exaggerate performance differences, so 
let's see how that measures up in the real world.

Baseline is a large delete from last month on a 2.8 cluster:
 * 29.9M blocks; 87 seconds = 344k blocks/sec

Let's analyze permutations of the reported numbers:
 * 14M blocks; 3 min = 78k blocks/sec = 4X slower
 * 12M blocks; 3 min = 67k blocks/sec = 5X slower
 * 14M blocks; 6 min = 39k blocks/sec = 9X slower

It appears it's _at least as bad_ as predicted by the micro-benchmarks.  The 
tree traversal, the internal node copies, and the rebalancing aren't cheap.
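
The arithmetic above, spelled out as a throwaway check (not part of any patch):
{code}
// Blocks/sec and slowdown vs. the 2.8 baseline of 29.9M blocks in 87 seconds.
public class DeleteThroughput {
  public static void main(String[] args) {
    double baseline = 29.9e6 / 87; // ~344k blocks/sec
    double[][] reported = { {14e6, 180}, {12e6, 180}, {14e6, 360} }; // {blocks, seconds}
    for (double[] r : reported) {
      double rate = r[0] / r[1];
      System.out.printf("%.0fk blocks/sec = %.0fX slower%n", rate / 1e3, baseline / rate);
    }
  }
}
{code}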

> Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
> --
>
> Key: HDFS-13671
> URL: https://issues.apache.org/jira/browse/HDFS-13671
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Yiqun Lin
>Priority: Major
>
> NameNode hung when deleting large files/blocks. The stack info:
> {code}
> "IPC Server handler 4 on 8020" #87 daemon prio=5 os_prio=0 
> tid=0x7fb505b27800 nid=0x94c3 runnable [0x7fa861361000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.compare(FoldedTreeSet.java:474)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.removeAndGet(FoldedTreeSet.java:849)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.remove(FoldedTreeSet.java:911)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.removeBlock(DatanodeStorageInfo.java:252)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:194)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:108)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlockFromMap(BlockManager.java:3813)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlock(BlockManager.java:3617)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:4270)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4244)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4180)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:871)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:311)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:625)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> {code}
> In the current deletion logic in the NameNode, there are mainly two steps:
> * Collect INodes and all blocks to be deleted, then delete the INodes.
> * Remove blocks chunk by chunk in a loop.
> Intuitively the first step should be the more expensive operation and take
> more time. In practice, however, we always see the NN hang during the
> remove-block operation.
> Looking into this: we introduced a new structure, {{FoldedTreeSet}}, to get
> better performance when processing FBRs/IBRs. But compared with the earlier
> implementation of the remove-block logic, {{FoldedTreeSet}} is slower, since
> it takes additional time to rebalance tree nodes. When there are many blocks
> to be removed/deleted, this hurts.
> For the get-type operations in {{DatanodeStorageInfo}}, we only provide
> {{getBlockIterator}} to return a block iterator; there is no get operation
> for a specified block. Do we still need {{FoldedTreeSet}} in
> {{DatanodeStorageInfo}}? As we know, {{FoldedTreeSet}} benefits gets, not
> updates. Maybe we can revert this to the earlier implementation.
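
To make the cost argument concrete, here is an illustrative micro-benchmark, not the HDFS code: it contrasts removal from a balanced tree (which, like {{FoldedTreeSet}}, pays for traversal and rebalancing on every call) with removal from an amortized-O(1) hash-based structure standing in for the earlier implementation:

{code:java}
import java.util.HashSet;
import java.util.TreeSet;

// Illustration only: per-element removal cost of a self-balancing tree
// versus a hash-based set. Neither class is the HDFS implementation.
public class RemoveCost {
  public static void main(String[] args) {
    final int n = 2_000_000;
    TreeSet<Long> tree = new TreeSet<>();
    HashSet<Long> hash = new HashSet<>();
    for (long i = 0; i < n; i++) { tree.add(i); hash.add(i); }

    long t0 = System.nanoTime();
    for (long i = 0; i < n; i++) tree.remove(i);   // O(log n) + rebalancing
    long t1 = System.nanoTime();
    for (long i = 0; i < n; i++) hash.remove(i);   // amortized O(1)
    long t2 = System.nanoTime();

    System.out.printf("tree remove: %d ms, hash remove: %d ms%n",
        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
  }
}
{code}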



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-13 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511293#comment-16511293
 ] 

Chao Sun commented on HDFS-12976:
-

Oops, my bad. Thanks [~xkrogen]!

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement the {{FailoverProxyProvider}} 
> interface and be able to submit read requests to the ANN and SBN(s).
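
For context, the general shape of such a provider is sketched below. The method set follows the {{FailoverProxyProvider}} interface in trunk at the time; the class body is illustrative only and is not the HDFS-12943 branch implementation (proxy construction and observer discovery are elided):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.io.retry.FailoverProxyProvider;

// Skeleton of a proxy provider that would prefer an observer/standby for
// reads and fail over to the active NameNode. Sketch only.
public class SketchObserverReadProxyProvider
    implements FailoverProxyProvider<ClientProtocol> {

  @Override
  public Class<ClientProtocol> getInterface() {
    return ClientProtocol.class;
  }

  @Override
  public ProxyInfo<ClientProtocol> getProxy() {
    // Would return a proxy routing reads to an observer, writes to the ANN.
    throw new UnsupportedOperationException("sketch only");
  }

  @Override
  public void performFailover(ClientProtocol currentProxy) {
    // Would rotate to the next candidate NameNode on failure.
  }

  @Override
  public void close() throws IOException {
    // Would release any cached proxies.
  }
}
{code}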



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-13 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-12976:

Status: Patch Available  (was: In Progress)

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement the {{FailoverProxyProvider}} 
> interface and be able to submit read requests to the ANN and SBN(s).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-8) Add OzoneManager Delegation Token support

2018-06-13 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511291#comment-16511291
 ] 

Ajay Kumar commented on HDDS-8:
---

Patch v8 fixes the failed test in TestOzoneManagerDelegationToken.

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-8-HDDS-4.00.patch, HDDS-8-HDDS-4.01.patch, 
> HDDS-8-HDDS-4.02.patch, HDDS-8-HDDS-4.03.patch, HDDS-8-HDDS-4.04.patch, 
> HDDS-8-HDDS-4.05.patch, HDDS-8-HDDS-4.06.patch, HDDS-8-HDDS-4.07.patch, 
> HDDS-8-HDDS-4.08.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13621) Upgrade common-langs version to 3.7 in hadoop-hdfs-project

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511284#comment-16511284
 ] 

genericqa commented on HDFS-13621:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  5m  
2s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 36s{color} 
| {color:red} hadoop-hdfs-project generated 25 new + 581 unchanged - 0 fixed = 
606 total (was 581) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
10s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
36s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13621 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-13 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511253#comment-16511253
 ] 

Erik Krogen commented on HDFS-12976:


[~csun], you need to put the JIRA into the Patch Available state for Jenkins to run.

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement the {{FailoverProxyProvider}} 
> interface and be able to submit read requests to the ANN and SBN(s).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-163) Add Datanode heartbeat dispatcher in SCM

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1654#comment-1654
 ] 

genericqa commented on HDDS-163:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  5m  
6s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
32s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m  2s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
43s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927632/HDDS-163.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (HDDS-161) Add functionality to queue ContainerClose command from SCM Hearbeat Reposnse to Ratis

2018-06-13 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511089#comment-16511089
 ] 

Shashikant Banerjee commented on HDDS-161:
--

As per the offline discussion with [~anu], storing the replication type info 
in the containerData on the datanodes is not required. In patch v1, when the 
CloseCommand is sent in the heartbeat response to the datanode, the 
replication type info is sent along with it, so that the closeContainer 
command can be queued to the appropriate XceiverServer on the Datanode to 
handle it.

> Add functionality to queue ContainerClose command from SCM Hearbeat Reposnse 
> to Ratis
> -
>
> Key: HDDS-161
> URL: https://issues.apache.org/jira/browse/HDDS-161
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-161.00.patch, HDDS-161.01.patch
>
>
> When a container needs to be closed at the Datanode, SCM will queue a close 
> command which will be encoded as a part of Heartbeat Response to the 
> Datanode. This command will be picked up from the response at the Datanode 
> which will then be submitted to the XceiverServer to process the close 
> command. This will just queue a ContainerCloseCommand to Ratis, where the 
> leader would start the transaction while the followers will reject the 
> closeContainer request.
> While handling the close container inside the Datanode, we need to ensure all 
> the ongoing chunkWrites finish before the close can proceed. It should 
> also reject any incoming I/Os in between. This will be handled as part 
> of a separate jira.
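
A rough sketch of the dispatch described in the comment above follows. All class and method names here are hypothetical stand-ins, not the patch itself; the point is only that the replication type carried in the heartbeat response selects which server the close command is queued to:

{code:java}
// Hypothetical names throughout; illustration of the dispatch idea only.
enum ReplicationType { RATIS, STAND_ALONE }

class CloseContainerCommand {
  final long containerId;
  final ReplicationType replicationType;  // carried in the heartbeat response
  CloseContainerCommand(long id, ReplicationType type) {
    this.containerId = id;
    this.replicationType = type;
  }
}

class CloseCommandDispatcher {
  void dispatch(CloseContainerCommand cmd) {
    switch (cmd.replicationType) {
      case RATIS:
        // Queue to the Ratis server: the leader starts the transaction,
        // followers reject direct closeContainer requests.
        queueToRatisServer(cmd.containerId);
        break;
      case STAND_ALONE:
        queueToStandaloneServer(cmd.containerId);
        break;
    }
  }
  private void queueToRatisServer(long id) { /* elided */ }
  private void queueToStandaloneServer(long id) { /* elided */ }
}
{code}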



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-161) Add functionality to queue ContainerClose command from SCM Hearbeat Reposnse to Ratis

2018-06-13 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-161:
-
Attachment: HDDS-161.01.patch

> Add functionality to queue ContainerClose command from SCM Hearbeat Reposnse 
> to Ratis
> -
>
> Key: HDDS-161
> URL: https://issues.apache.org/jira/browse/HDDS-161
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-161.00.patch, HDDS-161.01.patch
>
>
> When a container needs to be closed at the Datanode, SCM will queue a close 
> command which will be encoded as a part of Heartbeat Response to the 
> Datanode. This command will be picked up from the response at the Datanode 
> which will then be submitted to the XceiverServer to process the close 
> command. This will just queue a ContainerCloseCommand to Ratis, where the 
> leader would start the transaction while the followers will reject the 
> closeContainer request.
> While handling the close container inside the Datanode, we need to ensure all 
> the ongoing chunkWrites finish before the close can proceed. It should 
> also reject any incoming I/Os in between. This will be handled as part 
> of a separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-162) DataNode Container reads/Writes should be disallowed for open containers if the replication type mismatches

2018-06-13 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-162.
--
Resolution: Not A Problem

> DataNode Container reads/Writes should be disallowed for open containers if 
> the replication type mismatches
> ---
>
> Key: HDDS-162
> URL: https://issues.apache.org/jira/browse/HDDS-162
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
>
> In Ozone, a container can be created via the Ratis or Standalone protocol. However, 
> reads/writes on the containers on datanodes can be done through either 
> of these if the container location is known. A case may arise where data is 
> being written into a container via Ratis, i.e., the container is in open state on 
> the Datanodes, and read via Standalone. This should not be allowed, as a 
> read from the follower Datanodes in Ratis via the Standalone protocol might 
> return stale data. Once the container is closed on the datanode, 
> data can be read via either of the protocols.
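
The rule in the description reduces to a small guard. The sketch below uses hypothetical names (it is not the HDDS datanode code) just to state the invariant: while a container is open, only the protocol it was created with may serve it; once closed, either may read:

{code:java}
// Hypothetical sketch of the access rule; not the HDDS implementation.
class ContainerAccessGuard {
  enum ReplicationType { RATIS, STAND_ALONE }

  boolean allow(boolean containerOpen, ReplicationType containerType,
      ReplicationType requestType) {
    if (!containerOpen) {
      return true;  // closed containers are immutable; either protocol may read
    }
    return containerType == requestType;  // open: protocols must match
  }
}
{code}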



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13641) Add metrics for edit log tailing

2018-06-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511047#comment-16511047
 ] 

Hudson commented on HDFS-13641:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14420 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14420/])
HDFS-13641. Add metrics for edit log tailing. Contributed by Chao Sun. (yqlin: 
rev 8e7548d33be9c4874daab18b2e774bdc2ed216d3)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/MetricsAsserts.java
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java


> Add metrics for edit log tailing 
> -
>
> Key: HDFS-13641
> URL: https://issues.apache.org/jira/browse/HDFS-13641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13641-HDFS-12943.000.patch, HDFS-13641.000.patch, 
> HDFS-13641.001.patch, HDFS-13641.002.patch, HDFS-13641.003.patch
>
>
> We should add metrics for each iteration of edit log tailing, including: 1) # 
> of edits loaded, 2) time spent selecting the input edit stream, 3) time spent 
> loading the edits, 4) interval between iterations.
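
A rough sketch of what the four metrics look like with the hadoop-metrics2 API is below. Field names are illustrative, not necessarily those in the committed patch; registering the source with the metrics system is what instantiates the annotated fields:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Sketch only; see the committed NameNodeMetrics change for the actual names.
@Metrics(about = "Edit log tailing metrics", context = "dfs")
class EditLogTailerMetricsSketch {
  @Metric("Number of edits loaded per tailing iteration")
  MutableRate editLogTailEditsLoaded;
  @Metric("Time spent selecting the input edit stream")
  MutableRate editLogFetchTime;
  @Metric("Time spent loading the edits")
  MutableRate editLogLoadTime;
  @Metric("Interval between tailing iterations")
  MutableRate editLogTailInterval;

  // Called once per tailing iteration with the measured values.
  void recordIteration(long numEdits, long fetchMs, long loadMs, long intervalMs) {
    editLogTailEditsLoaded.add(numEdits);
    editLogFetchTime.add(fetchMs);
    editLogLoadTime.add(loadMs);
    editLogTailInterval.add(intervalMs);
  }
}
{code}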



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13641) Add metrics for edit log tailing

2018-06-13 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511031#comment-16511031
 ] 

Yiqun Lin edited comment on HDFS-13641 at 6/13/18 12:28 PM:


LGTM, +1.
Committed this to trunk, branch-3.1 and branch-3.0. As version 3.0.3 is 
released, I have updated target version to 3.0.4. Thanks [~csun] for the 
contribution and thanks [~xkrogen] for the review!


was (Author: linyiqun):
LGTM, +1.
Committed this to trunk, branch-3.1 and branch-3. As version 3.0.3 is released, 
I have updated target version to 3.0.4. Thanks [~csun] for the contribution and 
thanks [~xkrogen] for the review!

> Add metrics for edit log tailing 
> -
>
> Key: HDFS-13641
> URL: https://issues.apache.org/jira/browse/HDFS-13641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13641-HDFS-12943.000.patch, HDFS-13641.000.patch, 
> HDFS-13641.001.patch, HDFS-13641.002.patch, HDFS-13641.003.patch
>
>
> We should add metrics for each iteration of edit log tailing, including: 1) # 
> of edits loaded, 2) time spent selecting the input edit stream, 3) time spent 
> loading the edits, 4) interval between iterations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13641) Add metrics for edit log tailing

2018-06-13 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13641:
-
Affects Version/s: 3.0.3

> Add metrics for edit log tailing 
> -
>
> Key: HDFS-13641
> URL: https://issues.apache.org/jira/browse/HDFS-13641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13641-HDFS-12943.000.patch, HDFS-13641.000.patch, 
> HDFS-13641.001.patch, HDFS-13641.002.patch, HDFS-13641.003.patch
>
>
> We should add metrics for each iteration of edit log tailing, including: 1) # 
> of edits loaded, 2) time spent selecting the input edit stream, 3) time spent 
> loading the edits, 4) interval between iterations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13641) Add metrics for edit log tailing

2018-06-13 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13641:
-
Target Version/s: 3.0.4  (was: 3.0.3)

> Add metrics for edit log tailing 
> -
>
> Key: HDFS-13641
> URL: https://issues.apache.org/jira/browse/HDFS-13641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13641-HDFS-12943.000.patch, HDFS-13641.000.patch, 
> HDFS-13641.001.patch, HDFS-13641.002.patch, HDFS-13641.003.patch
>
>
> We should add metrics for each iteration of edit log tailing, including: 1) # 
> of edits loaded, 2) time spent selecting the input edit stream, 3) time spent 
> loading the edits, 4) interval between iterations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13641) Add metrics for edit log tailing

2018-06-13 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13641:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

LGTM, +1.
Committed this to trunk, branch-3.1 and branch-3. As version 3.0.3 is released, 
I have updated target version to 3.0.4. Thanks [~csun] for the contribution and 
thanks [~xkrogen] for the review!

> Add metrics for edit log tailing 
> -
>
> Key: HDFS-13641
> URL: https://issues.apache.org/jira/browse/HDFS-13641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13641-HDFS-12943.000.patch, HDFS-13641.000.patch, 
> HDFS-13641.001.patch, HDFS-13641.002.patch, HDFS-13641.003.patch
>
>
> We should add metrics for each iteration of edit log tailing, including: 1) # 
> of edits loaded, 2) time spent selecting the input edit stream, 3) time spent 
> loading the edits, 4) interval between iterations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13621) Upgrade common-langs version to 3.7 in hadoop-hdfs-project

2018-06-13 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510981#comment-16510981
 ] 

Takanobu Asanuma commented on HDFS-13621:
-

Uploaded the 1st patch. I've confirmed that {{mvn clean package -Pdist,native 
-Dtar -DskipTests}} succeeds with it.

> Upgrade common-langs version to 3.7 in hadoop-hdfs-project
> --
>
> Key: HDFS-13621
> URL: https://issues.apache.org/jira/browse/HDFS-13621
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13621.1.patch
>
>
> commons-lang 2.6 is widely used. Let's upgrade to 3.7.
> This jira is separated from HADOOP-10783.
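
For reference, the visible change in this kind of upgrade is the package move: commons-lang 3.x lives under {{org.apache.commons.lang3}}, so imports change while most call sites stay the same. A minimal before/after:

{code:java}
// commons-lang 2.6:  import org.apache.commons.lang.StringUtils;
// commons-lang 3.x:  import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.StringUtils;

class LangUpgradeExample {
  static boolean blank(String s) {
    return StringUtils.isBlank(s);  // same call, new package
  }
}
{code}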



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13621) Upgrade common-langs version to 3.7 in hadoop-hdfs-project

2018-06-13 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13621:

Status: Patch Available  (was: Open)

> Upgrade common-langs version to 3.7 in hadoop-hdfs-project
> --
>
> Key: HDFS-13621
> URL: https://issues.apache.org/jira/browse/HDFS-13621
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13621.1.patch
>
>
> commons-lang 2.6 is widely used. Let's upgrade to 3.7.
> This jira is separated from HADOOP-10783.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13621) Upgrade common-langs version to 3.7 in hadoop-hdfs-project

2018-06-13 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13621:

Attachment: HDFS-13621.1.patch

> Upgrade common-langs version to 3.7 in hadoop-hdfs-project
> --
>
> Key: HDFS-13621
> URL: https://issues.apache.org/jira/browse/HDFS-13621
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13621.1.patch
>
>
> commons-lang 2.6 is widely used. Let's upgrade to 3.7.
> This jira is separated from HADOOP-10783.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-06-13 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HDFS-13596:
-
Priority: Blocker  (was: Critical)

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Priority: Blocker
>
> After rollingUpgrade NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at 

[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-06-13 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510975#comment-16510975
 ] 

Rohith Sharma K S commented on HDFS-13596:
--

Thanks [~hanishakoneru] for reporting this. I see this error while 
rolling-upgrade testing a Hadoop cluster, specifically when creating a file 
while the rolling upgrade is in progress and then finalizing the upgrade on 
the upgraded version. This blocks rolling upgrade functionality.

I am bumping the JIRA to blocker since it blocks rolling upgrade.
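
For readers following along, the failure mode in the description boils down to version-gated parsing. The sketch below uses hypothetical names and a placeholder version check (it is not FSEditLogOp itself); it only illustrates how reading post-upgrade transactions with a pre-upgrade layout version leaves bytes unconsumed and shifts every later field:

{code:java}
import java.io.DataInput;
import java.io.IOException;

// Hypothetical sketch; the real check uses NameNodeLayoutVersion.supports(...).
class VersionGatedOpReader {
  void readFields(DataInput in, int logVersion) throws IOException {
    long inodeId = in.readLong();
    if (supportsErasureCoding(logVersion)) {  // false for the pre-upgrade version...
      byte ecPolicyId = in.readByte();        // ...so these bytes are never consumed
    }
    // Every later read now starts at the wrong offset and pulls garbage,
    // which is how a zero-length clientId trips the RetryCache precondition.
    int clientIdLen = in.readInt();
  }

  private boolean supportsErasureCoding(int logVersion) {
    return logVersion <= -64;  // placeholder threshold, for illustration only
  }
}
{code}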


> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Priority: Critical
>
> After rollingUpgrade NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> 

[jira] [Commented] (HDDS-109) Add reconnect logic for XceiverClientGrpc

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510960#comment-16510960
 ] 

genericqa commented on HDDS-109:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
42s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
16s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m  6s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
41s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-109 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927612/HDDS-109.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0b5c9400a197 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |

[jira] [Commented] (HDDS-159) RestClient: Implement list operations for volume, bucket and keys

2018-06-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510958#comment-16510958
 ] 

genericqa commented on HDDS-159:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
23s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
10s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m  3s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} acceptance-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-159 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927624/HDDS-159.002.patch |
| Optional Tests |  
