[jira] [Updated] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-175:

Status: Patch Available  (was: Open)

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-175.00.patch
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 
> following 4 fields to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist
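A minimal illustrative sketch of a ContainerInfo slimmed down to the four fields listed above; this is not the actual HDDS code, and all class, field, and type names here are assumptions.

{code:java}
// Illustrative sketch only; names and types are assumptions, not the real
// org.apache.hadoop.hdds classes. The idea: keep just enough information in
// ContainerInfo to rebuild a Pipeline on demand instead of embedding one.
import java.util.List;

final class ContainerInfoSketch {
  private final String pipelineId;             // id of the pipeline the container used
  private final String replicationType;        // e.g. RATIS or STAND_ALONE
  private final int expectedReplicationCount;  // expected number of replicas
  private final List<String> replicaDataNodes; // datanodes where replicas exist

  ContainerInfoSketch(String pipelineId, String replicationType,
      int expectedReplicationCount, List<String> replicaDataNodes) {
    this.pipelineId = pipelineId;
    this.replicationType = replicationType;
    this.expectedReplicationCount = expectedReplicationCount;
    this.replicaDataNodes = replicaDataNodes;
  }
}
{code}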






[jira] [Updated] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-175:

Attachment: HDDS-175.00.patch

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-175.00.patch
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 
> following 4 fields to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist






[jira] [Commented] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2018-06-15 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514638#comment-16514638
 ] 

Yiqun Lin commented on HDFS-13671:
--

Thanks for the comments, everyone!
[~mi...@cloudera.com], we did the GC check and are sure there was no GC problem 
when the NN hung. We also looked into the NN log, which explicitly indicated that 
the NN was doing the remove-block operation. It lasted around 6 minutes (from 
15:01 to 15:07).
{noformat}
2018-06-06 15:00:59,873 INFO [IPC Server handler 163 on 8020] BlockStateChange: 
BLOCK* addToInvalidates: blk_1593304672_519567210 xx.xx.xx.xx:50010 
xx.xx.xx.xx:50010 xx.xx.xx.xx:50010
2018-06-06 15:00:59,875 INFO [IPC Server handler 163 on 8020] BlockStateChange: 
BLOCK* addToInvalidates: blk_1593304675_519567213 xx.xx.xx.xx:50010 
xx.xx.xx.xx:50010 xx.xx.xx.xx:50010
2018-06-06 15:00:59,879 INFO [IPC Server handler 163 on 8020] BlockStateChange: 
BLOCK* addToInvalidates: blk_1593304678_519567216 xx.xx.xx.xx:50010 
xx.xx.xx.xx:50010 xx.xx.xx.xx:50010
2018-06-06 15:00:59,882 INFO [IPC Server handler 163 on 8020] BlockStateChange: 
BLOCK* addToInvalidates: blk_1593304679_519567217 xx.xx.xx.xx:50010 
xx.xx.xx.xx:50010 xx.xx.xx.xx:50010
.
2018-06-06 15:07:00,004 INFO [IPC Server handler 163 on 8020] BlockStateChange: 
BLOCK* addToInvalidates: blk_1595774272_522036817 xx.xx.xx.xx:50010 
xx.xx.xx.xx:50010 xx.xx.xx.xx:50010
2018-06-06 15:07:00,005 INFO [IPC Server handler 163 on 8020] BlockStateChange: 
BLOCK* addToInvalidates: blk_1595774270_522036815 xx.xx.xx.xx:50010 
xx.xx.xx.xx:50010 xx.xx.xx.xx:50010
2018-06-06 15:07:00,007 INFO [IPC Server handler 163 on 8020] BlockStateChange: 
BLOCK* addToInvalidates: blk_1595774256_522036801 xx.xx.xx.xx:50010 
xx.xx.xx.xx:50010 xx.xx.xx.xx:500
{noformat}

> Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
> --
>
> Key: HDFS-13671
> URL: https://issues.apache.org/jira/browse/HDFS-13671
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Yiqun Lin
>Priority: Major
>
> NameNode hung when deleting large files/blocks. The stack info:
> {code}
> "IPC Server handler 4 on 8020" #87 daemon prio=5 os_prio=0 
> tid=0x7fb505b27800 nid=0x94c3 runnable [0x7fa861361000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.compare(FoldedTreeSet.java:474)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.removeAndGet(FoldedTreeSet.java:849)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.remove(FoldedTreeSet.java:911)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.removeBlock(DatanodeStorageInfo.java:252)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:194)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:108)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlockFromMap(BlockManager.java:3813)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlock(BlockManager.java:3617)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:4270)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4244)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4180)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:871)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:311)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:625)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> {code}
> In the current deletion logic in the NameNode, there are mainly two steps:
> * Collect the INodes and all blocks to be deleted, then delete the INodes.
> * Remove the blocks chunk by chunk in a loop.
> Actually the first step should be the more expensive operation and take more 
> time. However, we always see the NN hang during the remove-block operation.
> Looking into this: we introduced the new {{FoldedTreeSet}} structure for 
> better performance in handling FBRs/IBRs, but compared with the earlier 
> implementation of the remove-block logic, {{FoldedTreeSet}} seems slower 
> 
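To make the two-step deletion flow quoted above concrete, here is a self-contained sketch of a chunked remove-block loop. It is illustrative only: plain JDK types stand in for the real FSNamesystem/BlockManager code and for the FoldedTreeSet in the stack trace, and the chunk size is an assumption.

{code:java}
// Illustrative sketch only (not the actual FSNamesystem code): the remove-block
// step walks the collected blocks chunk by chunk, re-acquiring the write lock for
// each chunk; every removal is a lookup-and-remove in the block map (the
// FoldedTreeSet in the stack trace above), which is the slow path reported here.
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ChunkedBlockRemovalSketch {
  private static final int CHUNK_SIZE = 1000;               // assumed chunk size
  private final Set<Long> blockMap = new TreeSet<>();       // stand-in for FoldedTreeSet
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void removeBlocks(List<Long> collectedBlockIds) {
    for (int start = 0; start < collectedBlockIds.size(); start += CHUNK_SIZE) {
      int end = Math.min(start + CHUNK_SIZE, collectedBlockIds.size());
      lock.writeLock().lock();                              // namesystem write lock held per chunk
      try {
        for (long blockId : collectedBlockIds.subList(start, end)) {
          blockMap.remove(blockId);                         // per-block remove under the lock
        }
      } finally {
        lock.writeLock().unlock();
      }
    }
  }
}
{code}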

[jira] [Commented] (HDFS-13687) ConfiguredFailoverProxyProvider could direct requests to SBN

2018-06-15 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514607#comment-16514607
 ] 

genericqa commented on HDFS-13687:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m 
43s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
39s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}197m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13687 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928049/HDFS-13687.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux f7a4df583c34 

[jira] [Commented] (HDFS-13265) MiniDFSCluster should set reasonable defaults to reduce resource consumption

2018-06-15 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514606#comment-16514606
 ] 

genericqa commented on HDFS-13265:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
42s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  1m 
49s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
|   | hadoop.hdfs.server.namenode.TestFsck |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13265 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928055/HDFS-13265.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 6f7892ed0090 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDDS-172) The numbers of operation should be integer in KSM UI

2018-06-15 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514591#comment-16514591
 ] 

Takanobu Asanuma commented on HDDS-172:
---

Thanks for reviewing and committing it, [~anu]!

> The numbers of operation should be integer in KSM UI
> 
>
> Key: HDDS-172
> URL: https://issues.apache.org/jira/browse/HDDS-172
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-172.1.patch, after.png, before.png
>
>







[jira] [Commented] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514560#comment-16514560
 ] 

Hudson commented on HDFS-13681:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14441 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14441/])
HDFS-13681. Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test (inigoiri: 
rev 8762e9cf10fa100dd5f7fd695f5e52b75a94c5d4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStartup.java


> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13681.000.patch, HDFS-13681.001.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to the path not being processed properly on Windows.
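A small illustration of the mismatch shown above (native backslash path vs. URI-style path). This is not the HDFS-13681 patch; it only demonstrates the two forms a Windows directory path can take and why comparing the raw strings fails. The directory name used is hypothetical.

{code:java}
// Illustrative only (not the actual patch). On Windows the same directory has a
// native backslash form and a forward-slash URI form; a raw string comparison of
// the two fails, so a robust test converts both sides to the same form first.
import java.io.File;

class WindowsPathFormsSketch {
  public static void main(String[] args) {
    File dir = new File("F:\\data\\dfs\\name");      // hypothetical test directory
    System.out.println(dir.getAbsolutePath());       // F:\data\dfs\name  (native form, on Windows)
    System.out.println(dir.toURI().getPath());       // /F:/data/dfs/name (URI form, on Windows)
    // Comparing dir.toURI() on both sides avoids the platform-specific mismatch.
  }
}
{code}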






[jira] [Commented] (HDFS-13186) [PROVIDED Phase 2] Multipart Uploader API

2018-06-15 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514542#comment-16514542
 ] 

genericqa commented on HDFS-13186:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
46s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  6m 
37s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 25m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
22s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 2s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13186 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13681:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks [~surmountian] for the patch.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13681.000.patch, HDFS-13681.001.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to the path not being processed properly on Windows.






[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-15 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514540#comment-16514540
 ] 

Konstantin Shvachko commented on HDFS-13609:


Looks like the shadedclient failures are also present in trunk, so I wouldn't 
worry about them here.
 # Looking at the patch, I see that a lot of the changes relate to adding the new 
parameter {{boolean optimizeLatency}} to {{LogsPurgeable.selectInputStreams()}}, 
which in turn affects the {{JournalManager}} interface.
 The parameter is actively used only in {{QuorumJournalManager}}; in all other 
implementations it is ignored. In the {{QuorumJournalManager.selectInputStreams()}} 
implementation you require {{optimizeLatency}} to be the same as 
{{inProgressOk}}, except when {{optimizeLatency == false && inProgressOk == 
true}}, and in that case {{optimizeLatency}} is ignored. So my main question is: 
can we simply use {{inProgressOk}} as the indicator to optimize for latency and 
drop the {{optimizeLatency}} parameter? That would simplify the changes a lot.
 # In {{hdfs-default.xml}}, rephrase "This will also enable tailing of edit logs 
via" -> "This enables tailing of edit logs via". That makes it clearer.
 # Should {{dfs.ha.tail-edits.qjm.rpc.max-txns}} be a public or an undocumented 
config parameter? I see there is a bunch of "Change with caution" properties in 
{{hdfs-default.xml}}; that is exactly why we keep them undocumented.
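A rough sketch of the simplification suggested in point 1 above: drop the extra {{optimizeLatency}} flag and let {{inProgressOk}} itself select the low-latency path. The types and method shape below are illustrative stand-ins, not the real {{JournalManager}}/{{LogsPurgeable}} API.

{code:java}
// Illustrative stand-in types only, not the actual HDFS interfaces. The point:
// inProgressOk doubles as "optimize for latency", so no second flag is threaded
// through every JournalManager implementation.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

interface EditLogSource {
  void selectInputStreams(Collection<String> streams, long fromTxnId, boolean inProgressOk);
}

class QuorumSourceSketch implements EditLogSource {
  @Override
  public void selectInputStreams(Collection<String> streams, long fromTxnId, boolean inProgressOk) {
    List<String> selected = new ArrayList<>();
    if (inProgressOk) {
      // requesting in-progress segments implies the low-latency (RPC tailing) path
      selected.add("in-progress edits tailed via RPC from txid " + fromTxnId);
    } else {
      selected.add("finalized segments from txid " + fromTxnId);
    }
    streams.addAll(selected);
  }
}
{code}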

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targetted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.






[jira] [Commented] (HDDS-169) Add Volume IO Stats

2018-06-15 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514535#comment-16514535
 ] 

Bharat Viswanadham commented on HDDS-169:
-

[~hanishakoneru] Addressed review comments from Jira HDDS-160 in this.

> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-169-HDDS-48.00.patch
>
>
> This Jira adds volume IO stats in the datanode: IO metrics for chunk operations 
> (readBytes, readOpCount, writeBytes, writeOpCount, readTime, writeTime).
>  
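An illustrative sketch of a per-volume IO stats holder covering the six counters listed above; class and method names are assumptions, not the actual HDDS-169 patch.

{code:java}
// Illustrative sketch only; names are assumptions, not the real HDDS class.
import java.util.concurrent.atomic.AtomicLong;

class VolumeIOStatsSketch {
  private final AtomicLong readBytes = new AtomicLong();
  private final AtomicLong readOpCount = new AtomicLong();
  private final AtomicLong writeBytes = new AtomicLong();
  private final AtomicLong writeOpCount = new AtomicLong();
  private final AtomicLong readTimeNanos = new AtomicLong();
  private final AtomicLong writeTimeNanos = new AtomicLong();

  void incReadStats(long bytes, long timeNanos) {   // called from the chunk read path
    readBytes.addAndGet(bytes);
    readOpCount.incrementAndGet();
    readTimeNanos.addAndGet(timeNanos);
  }

  void incWriteStats(long bytes, long timeNanos) {  // called from the chunk write path
    writeBytes.addAndGet(bytes);
    writeOpCount.incrementAndGet();
    writeTimeNanos.addAndGet(timeNanos);
  }
}
{code}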






[jira] [Commented] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514533#comment-16514533
 ] 

Íñigo Goiri commented on HDFS-13681:


The unit test passed 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/24459/testReport/org.apache.hadoop.hdfs.server.namenode/TestStartup/].
The failed unit tests are unrelated, and shadedclient has been failing for a 
couple of days.
+1 on [^HDFS-13681.001.patch].
Committing.

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch, HDFS-13681.001.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to the path not being processed properly on Windows.






[jira] [Commented] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514523#comment-16514523
 ] 

genericqa commented on HDFS-13681:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
24s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
12s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13681 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928044/HDFS-13681.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e5455918776a 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 308a159 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24459/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24459/testReport/ |
| Max. process+thread count | 3272 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514490#comment-16514490
 ] 

Hudson commented on HDFS-13686:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14440 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14440/])
HDFS-13686. Add overall metrics for FSNamesystemLock. Contributed by (inigoiri: 
rev d31a3ce767d3bb68bdbb4f36d45600eab9f4f8b7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemLock.java


> Add overall metrics for FSNamesystemLock
> 
>
> Key: HDFS-13686
> URL: https://issues.apache.org/jira/browse/HDFS-13686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13686.000.patch
>
>
> Currently, we have R/W FSNamesystemLock metrics per operation. It'd be useful 
> to have an overall metric too.
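An illustrative sketch of what an overall (not per-operation) lock metric could aggregate; this is not the HDFS-13686 patch, and the class and counter names are assumptions.

{code:java}
// Illustrative sketch only: an overall read/write lock hold-time aggregate kept
// alongside the existing per-operation metrics. Not the actual patch.
import java.util.concurrent.atomic.LongAdder;

class OverallLockMetricsSketch {
  private final LongAdder overallReadHoldTimeNanos = new LongAdder();
  private final LongAdder overallReadCount = new LongAdder();
  private final LongAdder overallWriteHoldTimeNanos = new LongAdder();
  private final LongAdder overallWriteCount = new LongAdder();

  void recordRead(long heldNanos) {   // on read-lock release, regardless of operation name
    overallReadHoldTimeNanos.add(heldNanos);
    overallReadCount.increment();
  }

  void recordWrite(long heldNanos) {  // on write-lock release, regardless of operation name
    overallWriteHoldTimeNanos.add(heldNanos);
    overallWriteCount.increment();
  }
}
{code}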






[jira] [Updated] (HDDS-176) Add keyCount and container maximum size to ContainerData

2018-06-15 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-176:
---
Fix Version/s: 0.2.1

> Add keyCount and container maximum size to ContainerData
> 
>
> Key: HDDS-176
> URL: https://issues.apache.org/jira/browse/HDDS-176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
>
> # ContainerData should hold the container maximum size, and this should be 
> serialized into the .container file. This is needed because the container 
> size can change over time, so old containers may have a different max size 
> than newly created containers.
>  # Also add a KeyCount field that gives the number of keys in the container.






[jira] [Updated] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13686:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Thanks [~lukmajercak] for the patch and [~xkrogen] for the review.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> Add overall metrics for FSNamesystemLock
> 
>
> Key: HDFS-13686
> URL: https://issues.apache.org/jira/browse/HDFS-13686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13686.000.patch
>
>
> Currently, we have R/W FSNamesystemLock metrics per operation. It'd be useful 
> to have an overall metric too.






[jira] [Commented] (HDDS-169) Add Volume IO Stats

2018-06-15 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514475#comment-16514475
 ] 

genericqa commented on HDDS-169:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
31s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m  
0s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
21s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928052/HDDS-169-HDDS-48.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d242107b9148 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / ca192cb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/320/testReport/ |
| Max. process+thread count | 217 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/320/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed 

[jira] [Commented] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514471#comment-16514471
 ] 

Íñigo Goiri commented on HDFS-13686:


Thanks for the review, [~xkrogen].

The unit tests passed 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/24457/testReport/org.apache.hadoop.hdfs.server.namenode/TestFSNamesystemLock/],
 the failed unit tests are unrelated (the usual suspects), and shadedclient 
has been broken for a couple of days.
+1 on [^HDFS-13686.000.patch].
Committing.

> Add overall metrics for FSNamesystemLock
> 
>
> Key: HDFS-13686
> URL: https://issues.apache.org/jira/browse/HDFS-13686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13686.000.patch
>
>
> Currently, we have R/W FSNamesystemLock metrics per operation. It'd be useful 
> to have an overall metric too.






[jira] [Created] (HDDS-176) Add keyCount and container maximum size to ContainerData

2018-06-15 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-176:
---

 Summary: Add keyCount and container maximum size to ContainerData
 Key: HDDS-176
 URL: https://issues.apache.org/jira/browse/HDDS-176
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


# ContainerData should hold the container maximum size, and this should be 
serialized into the .container file. This is needed because the container size 
can change over time, so old containers may have a different max size than 
newly created containers.
 # Also add a KeyCount field that gives the number of keys in the container.
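An illustrative sketch of the two additions described above; the class and field names are assumptions, not the actual ContainerData code, and the serialization itself is only noted in comments.

{code:java}
// Illustrative sketch only: maxSize is fixed per container (and meant to be
// serialized into the .container file) so older containers keep the size they
// were created with; keyCount tracks the number of keys in the container.
class ContainerDataSketch {
  private final long maxSizeBytes;  // set at creation time, persisted with the container
  private long keyCount;            // number of keys currently in the container

  ContainerDataSketch(long maxSizeBytes) {
    this.maxSizeBytes = maxSizeBytes;
  }

  void incrKeyCount() { keyCount++; }
  void decrKeyCount() { keyCount--; }
  long getMaxSizeBytes() { return maxSizeBytes; }
  long getKeyCount() { return keyCount; }
}
{code}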






[jira] [Assigned] (HDDS-176) Add keyCount and container maximum size to ContainerData

2018-06-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-176:
---

Assignee: Bharat Viswanadham

> Add keyCount and container maximum size to ContainerData
> 
>
> Key: HDDS-176
> URL: https://issues.apache.org/jira/browse/HDDS-176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> # ContainerData should hold the container maximum size, and this should be 
> serialized into the .container file. This is needed because the container 
> size can change over time, so old containers may have a different max size 
> than newly created containers.
>  # Also add a KeyCount field that gives the number of keys in the container.






[jira] [Commented] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514468#comment-16514468
 ] 

Lukas Majercak commented on HDFS-13686:
---

Thanks [~xkrogen]. HDFS-11021 looks helpful, I thought about it too. I can 
definitely take a look at it.

> Add overall metrics for FSNamesystemLock
> 
>
> Key: HDFS-13686
> URL: https://issues.apache.org/jira/browse/HDFS-13686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13686.000.patch
>
>
> Currently, we have R/W FSNamesystemLock metrics per operation. It'd be useful 
> to have an overall metric too.






[jira] [Commented] (HDFS-13265) MiniDFSCluster should set reasonable defaults to reduce resource consumption

2018-06-15 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514467#comment-16514467
 ] 

Erik Krogen commented on HDFS-13265:


Just attached the v002 patch incorporating [~elgoiri]'s suggestions. I will let 
Jenkins run to see whether the set of failing tests is significantly larger with 
this broader scope of change.

> MiniDFSCluster should set reasonable defaults to reduce resource consumption
> 
>
> Key: HDFS-13265
> URL: https://issues.apache.org/jira/browse/HDFS-13265
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13265-branch-2.000.patch, 
> HDFS-13265-branch-2.000.patch, HDFS-13265.000.patch, HDFS-13265.001.patch, 
> HDFS-13265.002.patch, TestMiniDFSClusterThreads.java
>
>
> MiniDFSCluster takes its defaults from {{DFSConfigKeys}}, but many 
> of these are not suitable for a unit test environment. For example, the 
> default handler thread count of 10 is definitely more than necessary for 
> (almost?) any unit test. We should set reasonable, lower defaults unless a 
> test specifically requires more.
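As an example of the kind of per-test tuning the issue wants to make the default, a test can lower handler thread counts before starting a MiniDFSCluster. The values chosen below are illustrative, not the defaults the patch itself sets.

{code:java}
// Example only: lower handler counts for a lightweight test cluster. The chosen
// values are illustrative; they are not the defaults picked by this patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.MiniDFSCluster;

class LowResourceMiniClusterExample {
  static MiniDFSCluster start() throws Exception {
    Configuration conf = new Configuration();
    conf.setInt(DFSConfigKeys.DFS_NAMENODE_HANDLER_COUNT_KEY, 2);  // default is 10
    conf.setInt(DFSConfigKeys.DFS_DATANODE_HANDLER_COUNT_KEY, 2);
    return new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
  }
}
{code}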






[jira] [Updated] (HDFS-13265) MiniDFSCluster should set reasonable defaults to reduce resource consumption

2018-06-15 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13265:
---
Attachment: HDFS-13265.002.patch

> MiniDFSCluster should set reasonable defaults to reduce resource consumption
> 
>
> Key: HDFS-13265
> URL: https://issues.apache.org/jira/browse/HDFS-13265
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13265-branch-2.000.patch, 
> HDFS-13265-branch-2.000.patch, HDFS-13265.000.patch, HDFS-13265.001.patch, 
> HDFS-13265.002.patch, TestMiniDFSClusterThreads.java
>
>
> MiniDFSCluster takes its defaults from {{DFSConfigKeys}}, but many 
> of these are not suitable for a unit test environment. For example, the 
> default handler thread count of 10 is definitely more than necessary for 
> (almost?) any unit test. We should set reasonable, lower defaults unless a 
> test specifically requires more.






[jira] [Commented] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514455#comment-16514455
 ] 

Erik Krogen commented on HDFS-13686:


Nice one, thanks for the work [~lukmajercak]! I have thought about this as 
well. Patch LGTM. 

If you're interested in improving these metrics further, I think HDFS-11021 
will be a big win. The OTHER category can become a very significant 
contributing factor to NN lock time, and having a more detailed breakdown of 
which operations are causing that would be very helpful.

> Add overall metrics for FSNamesystemLock
> 
>
> Key: HDFS-13686
> URL: https://issues.apache.org/jira/browse/HDFS-13686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13686.000.patch
>
>
> Currently, we have R/W FSNamesystemLock metrics per operation. It'd be useful 
> to have an overall metric too.






[jira] [Updated] (HDFS-13582) Improve backward compatibility for HDFS-13176 (WebHdfs file path gets truncated when having semicolon (;) inside)

2018-06-15 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-13582:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed your branch-2 patch. Thanks, [~zvenczel]!

> Improve backward compatibility for HDFS-13176 (WebHdfs file path gets 
> truncated when having semicolon (;) inside)
> -
>
> Key: HDFS-13582
> URL: https://issues.apache.org/jira/browse/HDFS-13582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13582-branch-2.01.patch, HDFS-13582.01.patch, 
> HDFS-13582.02.patch
>
>
> Encode special characters only if necessary in order to improve backward 
> compatibility in the following scenario:
> a new WebHdfs client (having HDFS-13176) -> an old WebHdfs server (not having 
> HDFS-13176)
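An illustrative sketch of the conditional-encoding idea described above: encode the semicolon only when it is actually present, so ordinary paths stay byte-identical and remain readable by servers without HDFS-13176. This is not the actual HDFS-13582 change, just a sketch of the approach.

{code:java}
// Illustrative sketch only (not the actual patch): paths without the special
// character are sent unchanged, preserving compatibility with old servers.
class SemicolonEncodeSketch {
  static String encodePathIfNeeded(String path) {
    if (path.indexOf(';') < 0) {
      return path;                       // unchanged: old servers keep working
    }
    return path.replace(";", "%3B");     // only special-character paths are encoded
  }
}
{code}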






[jira] [Updated] (HDDS-169) Add Volume IO Stats

2018-06-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-169:

Attachment: HDDS-169-HDDS-48.00.patch

> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-169-HDDS-48.00.patch
>
>
> This Jira adds volume IO stats in the datanode: IO metrics for chunk operations 
> (readBytes, readOpCount, writeBytes, writeOpCount, readTime, writeTime).
>  






[jira] [Updated] (HDDS-169) Add Volume IO Stats

2018-06-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-169:

Status: Patch Available  (was: In Progress)

> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-169-HDDS-48.00.patch
>
>
> This Jira is used to add VolumeIO stats in the datanode.
> Add IO calculations for Chunk operations.
> readBytes, readOpCount, writeBytes, writeOpCount, readTime, writeTime.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-175:

Description: 
Refactor ContainerInfo to remove the Pipeline object from it. We can add the 4 
fields below to ContainerInfo to recreate the pipeline if required (see the 
sketch after the list):
# pipelineId
# replication type
# expected replication count
# DataNodes where its replicas exist
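
A minimal sketch of what the slimmed-down ContainerInfo could carry (type and 
field names are placeholders; the real code would use the HDDS protobuf types):

{code:java}
import java.util.List;

/** Sketch only: ContainerInfo keeps just enough to rebuild a pipeline lazily. */
public class ContainerInfoSketch {
  private String pipelineId;             // 1. pipeline id
  private String replicationType;        // 2. e.g. RATIS or STAND_ALONE
  private int expectedReplicationCount;  // 3. expected replica count
  private List<String> dataNodes;        // 4. DataNodes holding replicas

  /** Rebuild a Pipeline-like view only when a caller really needs it. */
  public String describePipeline() {
    return pipelineId + "/" + replicationType + "/"
        + expectedReplicationCount + " on " + dataNodes;
  }
}
{code}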

  was:
The Pipeline class currently differs from the pipelineChannel by the data field; 
this field was introduced with HDFS-8 to maintain per-container local data. 
However, this data field can be moved to the ContainerInfo class, and then the 
pipelineChannel can be used interchangeably with pipeline everywhere. This will 
help make the code cleaner.

 


> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 4 
> fields below to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDDS-175:
---

Assignee: Ajay Kumar  (was: Shashikant Banerjee)

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 4 
> fields below to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-15 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-175:
---

 Summary: Refactor ContainerInfo to remove Pipeline object from it 
 Key: HDDS-175
 URL: https://issues.apache.org/jira/browse/HDDS-175
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.2.1
Reporter: Ajay Kumar
Assignee: Shashikant Banerjee
 Fix For: 0.2.1


The Pipeline class currently differs from the pipelineChannel by the data field; 
this field was introduced with HDFS-8 to maintain per-container local data. 
However, this data field can be moved to the ContainerInfo class, and then the 
pipelineChannel can be used interchangeably with pipeline everywhere. This will 
help make the code cleaner.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13687) ConfiguredFailoverProxyProvider could direct requests to SBN

2018-06-15 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13687:

Status: Patch Available  (was: Open)

> ConfiguredFailoverProxyProvider could direct requests to SBN
> 
>
> Key: HDFS-13687
> URL: https://issues.apache.org/jira/browse/HDFS-13687
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-13687.000.patch
>
>
> In case there are multiple SBNs, and {{dfs.ha.allow.stale.reads}} is set to 
> true, failover could go to an SBN, which may then serve read requests from 
> clients. This may not be the expected behavior. This issue arises while we are 
> working on HDFS-12943 and HDFS-12976.
> A better approach could be to check {{HAServiceState}} and find the 
> active NN when performing failover. This can also reduce the number of 
> failovers the client has to do in case of multiple SBNs.
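
A sketch of the proposed idea only, with proxy creation elided; it assumes the 
standard HAServiceProtocol API and is not the attached patch:

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

/** Sketch only: on failover, probe each candidate NN for its HA state and
 *  jump straight to the ACTIVE one instead of trying the next proxy blindly. */
public class ActiveStateAwareFailoverSketch {
  /** Returns the index of the first ACTIVE NN, or -1 if none reports ACTIVE. */
  static int findActive(List<HAServiceProtocol> candidates) {
    for (int i = 0; i < candidates.size(); i++) {
      try {
        HAServiceState state = candidates.get(i).getServiceStatus().getState();
        if (state == HAServiceState.ACTIVE) {
          return i;
        }
      } catch (IOException e) {
        // Unreachable NN: skip it and keep probing the remaining candidates.
      }
    }
    return -1;
  }
}
{code}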



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13687) ConfiguredFailoverProxyProvider could direct requests to SBN

2018-06-15 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13687:

Attachment: HDFS-13687.000.patch

> ConfiguredFailoverProxyProvider could direct requests to SBN
> 
>
> Key: HDFS-13687
> URL: https://issues.apache.org/jira/browse/HDFS-13687
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-13687.000.patch
>
>
> In case there are multiple SBNs, and {{dfs.ha.allow.stale.reads}} is set to 
> true, failover could go to an SBN, which may then serve read requests from 
> clients. This may not be the expected behavior. This issue arises while we are 
> working on HDFS-12943 and HDFS-12976.
> A better approach could be to check {{HAServiceState}} and find the 
> active NN when performing failover. This can also reduce the number of 
> failovers the client has to do in case of multiple SBNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the review [~hanishakoneru]

I have committed this to HDDS-48.

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira is to add new interfaces ChunkManager and KeyManager to perform key- 
> and chunk-related operations.
>  # Changes to the existing KeyManager and ChunkManager are:
>  ## Removal of the usage of ContainerManager.
>  ## Passing the container to method calls.
>  ## Using layOutversion during reading/deleting chunk files.
>  
>  
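
A rough sketch of the reworked interfaces, with placeholder types standing in 
for Container, KeyData and ChunkInfo; the attached patch defines the real 
signatures:

{code:java}
import java.io.IOException;

/** Sketch only: every call now takes the Container it operates on instead of
 *  going through ContainerManager. */
interface KeyManagerSketch {
  void putKey(Object container, Object keyData) throws IOException;
  Object getKey(Object container, String keyName) throws IOException;
  void deleteKey(Object container, String keyName) throws IOException;
}

interface ChunkManagerSketch {
  void writeChunk(Object container, String keyName, Object chunkInfo,
      byte[] data) throws IOException;
  byte[] readChunk(Object container, String keyName, Object chunkInfo)
      throws IOException;
  void deleteChunk(Object container, String keyName, Object chunkInfo)
      throws IOException;
}
{code}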



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514381#comment-16514381
 ] 

Íñigo Goiri commented on HDFS-13681:


For the record, this fails on the [Windows daily 
build|https://builds.apache.org/job/hadoop-trunk-win/498/testReport/org.apache.hadoop.hdfs.server.namenode/TestStartup/testNNFailToStartOnReadOnlyNNDir/].

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch, HDFS-13681.001.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with the error message below:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> because the path is not processed properly on Windows.
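
One illustrative way to make such an assertion platform-independent (a sketch 
under the assumption that only the drive-letter prefix and separators differ; 
not the committed fix):

{code:java}
/** Sketch only: normalize both sides so "F:\dir\name" and "/F:/dir/name"
 *  compare equal on Windows. */
public class WindowsPathSketch {
  static String normalize(String path) {
    String s = path.replace('\\', '/');
    // Drop the leading slash that URI-style paths put before a drive letter.
    if (s.length() > 2 && s.charAt(0) == '/' && s.charAt(2) == ':') {
      s = s.substring(1);
    }
    return s;
  }

  public static void main(String[] args) {
    String a = normalize("F:\\dfs\\name");
    String b = normalize("/F:/dfs/name");
    System.out.println(a.equals(b) + " -> " + a);  // true -> F:/dfs/name
  }
}
{code}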



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-172) The numbers of operation should be integer in KSM UI

2018-06-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514378#comment-16514378
 ] 

Hudson commented on HDDS-172:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14438 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14438/])
HDDS-172. The numbers of operation should be integer in KSM UI. (aengineer: rev 
308a1591f9f41597f4e7cc17bca06c66d6efc0a2)
* (edit) hadoop-ozone/ozone-manager/src/main/webapps/ksm/ksm.js


> The numbers of operation should be integer in KSM UI
> 
>
> Key: HDDS-172
> URL: https://issues.apache.org/jira/browse/HDDS-172
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-172.1.patch, after.png, before.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-15 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514376#comment-16514376
 ] 

Hanisha Koneru commented on HDDS-160:
-

Ok sure. Go ahead. +1.

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira is to add new Interface ChunkManager and KeyManager to perform Key 
> and Chunk related operations.
>  # Changes to current existing Keymanager and ChunkManager are:
>  ## Removal of usage of ContainerManager.
>  ## Passing container to method calls.
>  ## Using layOutversion during reading/deleting chunk files.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-15 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514375#comment-16514375
 ] 

Bharat Viswanadham edited comment on HDDS-160 at 6/15/18 9:28 PM:
--

Hi [~hanishakoneru]

Thanks for the review.

The test code is being changed again in HDDS-169. I will address them when 
posting a patch for it.

If you are okay with it, I will commit this change. 


was (Author: bharatviswa):
Hi [~hanishakoneru]

Thanks for the review.

The test code is being changed again in HDDS-169. I will address them when 
posting a patch for it.

 

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira is to add new Interface ChunkManager and KeyManager to perform Key 
> and Chunk related operations.
>  # Changes to current existing Keymanager and ChunkManager are:
>  ## Removal of usage of ContainerManager.
>  ## Passing container to method calls.
>  ## Using layOutversion during reading/deleting chunk files.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-15 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514375#comment-16514375
 ] 

Bharat Viswanadham commented on HDDS-160:
-

Hi [~hanishakoneru]

Thanks for the review.

The test code is being changed again in HDDS-169. I will address them when 
posting a patch for it.

 

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira is to add new Interface ChunkManager and KeyManager to perform Key 
> and Chunk related operations.
>  # Changes to current existing Keymanager and ChunkManager are:
>  ## Removal of usage of ContainerManager.
>  ## Passing container to method calls.
>  ## Using layOutversion during reading/deleting chunk files.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514367#comment-16514367
 ] 

Xiao Liang commented on HDFS-13681:
---

Sure, thank you [~elgoiri] for helping review. I uploaded 
[^HDFS-13681.001.patch] with the update.

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch, HDFS-13681.001.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to path not processed properly on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13681:
--
Attachment: HDFS-13681.001.patch

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch, HDFS-13681.001.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to path not processed properly on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-172) The numbers of operation should be integer in KSM UI

2018-06-15 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-172:
--
Fix Version/s: 0.2.1

> The numbers of operation should be integer in KSM UI
> 
>
> Key: HDDS-172
> URL: https://issues.apache.org/jira/browse/HDDS-172
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-172.1.patch, after.png, before.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-172) The numbers of operation should be integer in KSM UI

2018-06-15 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-172:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~tasanuma0829] Thank you for making ozone look better. I have committed this patch 
to trunk. I have also verified that all acceptance tests pass.

> The numbers of operation should be integer in KSM UI
> 
>
> Key: HDDS-172
> URL: https://issues.apache.org/jira/browse/HDDS-172
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-172.1.patch, after.png, before.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-172) The numbers of operation should be integer in KSM UI

2018-06-15 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514365#comment-16514365
 ] 

Anu Engineer edited comment on HDDS-172 at 6/15/18 9:11 PM:


[~tasanuma0829] Thank you for making ozone look better. I have committed this 
patch to trunk. I have also verified that all acceptance tests pass.


was (Author: anu):
[~tasanuma0829] Thank for making ozone look better. I have committed this patch 
to trunk. I have also verified that all acceptance tests pass.

> The numbers of operation should be integer in KSM UI
> 
>
> Key: HDDS-172
> URL: https://issues.apache.org/jira/browse/HDDS-172
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-172.1.patch, after.png, before.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13174) hdfs mover -p /path times out after 20 min

2018-06-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514356#comment-16514356
 ] 

Hudson commented on HDFS-13174:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14437 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14437/])
HDFS-13174. hdfs mover -p /path times out after 20 min. Contributed by 
(weichiu: rev c966a3837af1c1a1c4a441f491b0d76d5c9e5d78)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestMover.java


> hdfs mover -p /path times out after 20 min
> --
>
> Key: HDFS-13174
> URL: https://issues.apache.org/jira/browse/HDFS-13174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13174.001.patch, HDFS-13174.002.patch, 
> HDFS-13174.003.patch, HDFS-13174.004.patch, HDFS-13174.005.patch
>
>
> In HDFS-11015 there is an iteration timeout introduced in the Dispatcher.Source 
> class, which is checked while dispatching the moves that the Balancer and the 
> Mover do. This timeout is hardwired to 20 minutes.
> In the Balancer we have iterations, and even if an iteration times out, 
> the Balancer keeps running and does another iteration before it fails if 
> no moves happened in a few iterations.
> The Mover, on the other hand, does not have iterations, so if moving a path 
> runs for more than 20 minutes and there are moves decided and enqueued 
> between two DataNodes, after 20 minutes the Mover will stop with the following 
> exception reported to the console (lines might differ as this exception came 
> from a CDH5.12.1 installation).
>  java.io.IOException: Block move timed out
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:382)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:328)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2500(Dispatcher.java:186)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:956)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  
> Note that this issue does not come up if all blocks can be moved inside the 
> DataNodes without having to move a block to another DataNode.
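
Since the commit touches DFSConfigKeys and hdfs-default.xml, the timeout appears 
to become configurable; a hedged sketch of raising it (the key name is an 
assumption taken from the changed files and should be checked against the patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Sketch only: allow a longer per-iteration move timeout for large paths. */
public class MoverTimeoutConfigSketch {
  public static Configuration withLongerIteration() {
    Configuration conf = new Configuration();
    // The default has been a hard-wired 20 minutes; allow an hour here.
    conf.setLong("dfs.balancer.max-iteration-time", 60 * 60 * 1000L);
    return conf;
  }
}
{code}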



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514357#comment-16514357
 ] 

Íñigo Goiri commented on HDFS-13686:


[~xkrogen], you added the metrics for the locks.
Do you mind reviewing?

> Add overall metrics for FSNamesystemLock
> 
>
> Key: HDFS-13686
> URL: https://issues.apache.org/jira/browse/HDFS-13686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13686.000.patch
>
>
> Currently, we have R/W FSNamesystemLock metrics per operation. It'd be useful 
> to have an overall metric too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-15 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514354#comment-16514354
 ] 

genericqa commented on HDDS-167:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 42 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m  
3s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
31s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-dist hadoop-ozone/acceptance-test . 
hadoop-ozone/docs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 28m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 28m 41s{color} 
| {color:red} root generated 12 new + 1536 unchanged - 12 fixed = 1548 total 
(was 1548) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
25s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 8 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
17s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-dist hadoop-ozone/acceptance-test . 
hadoop-ozone/docs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-ozone/ozone-manager generated 2 new + 0 
unchanged - 1 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 47s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[jira] [Commented] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514349#comment-16514349
 ] 

genericqa commented on HDFS-13686:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m  
3s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
27s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13686 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928019/HDFS-13686.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 617485b2faf0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 43d994e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24457/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24457/testReport/ |
| Max. process+thread count | 2945 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24457/console 

[jira] [Commented] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-15 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514347#comment-16514347
 ] 

Hanisha Koneru commented on HDDS-160:
-

Thanks for the patch [~bharatviswa].

Few minor comments:
# testWriteChunkStageWriteAndCommit -> Can we check that WRITE_DATA creates 
a temporary file and COMMIT_DATA renames it to the final file?
# testWriteChunkStageCombinedData -> Lines 159-160, COMBINED_DATA doesn’t create 
a temporary file.
# testWriteChunkChecksumMismatch -> chunkList is not used.
# testDeleteKey -> after deleteKey, can we verify that the key is deleted by 
calling getKey?

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira is to add new Interface ChunkManager and KeyManager to perform Key 
> and Chunk related operations.
>  # Changes to current existing Keymanager and ChunkManager are:
>  ## Removal of usage of ContainerManager.
>  ## Passing container to method calls.
>  ## Using layOutversion during reading/deleting chunk files.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13174) hdfs mover -p /path times out after 20 min

2018-06-15 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13174:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks Istvan for the contribution!

> hdfs mover -p /path times out after 20 min
> --
>
> Key: HDFS-13174
> URL: https://issues.apache.org/jira/browse/HDFS-13174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13174.001.patch, HDFS-13174.002.patch, 
> HDFS-13174.003.patch, HDFS-13174.004.patch, HDFS-13174.005.patch
>
>
> In HDFS-11015 there is an iteration timeout introduced in Dispatcher.Source 
> class, that is checked during dispatching the moves that the Balancer and the 
> Mover does. This timeout is hardwired to 20 minutes.
> In the Balancer we have iterations, and even if an iteration is timing out 
> the Balancer runs further and does an other iteration before it fails if 
> there were no moves happened in a few iterations.
> The Mover on the other hand does not have iterations, so if moving a path 
> runs for more than 20 minutes, and there are moves decided and enqueued 
> between two DataNode, after 20 minutes Mover will stop with the following 
> exception reported to the console (lines might differ as this exception came 
> from a CDH5.12.1 installation).
>  java.io.IOException: Block move timed out
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:382)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:328)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2500(Dispatcher.java:186)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:956)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  
> Note that this issue is not coming up if all blocks can be moved inside the 
> DataNodes without having to move the block to an other DataNode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13174) hdfs mover -p /path times out after 20 min

2018-06-15 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13174:
---
Fix Version/s: 3.0.4
   3.1.1
   3.2.0

> hdfs mover -p /path times out after 20 min
> --
>
> Key: HDFS-13174
> URL: https://issues.apache.org/jira/browse/HDFS-13174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13174.001.patch, HDFS-13174.002.patch, 
> HDFS-13174.003.patch, HDFS-13174.004.patch, HDFS-13174.005.patch
>
>
> In HDFS-11015 there is an iteration timeout introduced in Dispatcher.Source 
> class, that is checked during dispatching the moves that the Balancer and the 
> Mover does. This timeout is hardwired to 20 minutes.
> In the Balancer we have iterations, and even if an iteration is timing out 
> the Balancer runs further and does an other iteration before it fails if 
> there were no moves happened in a few iterations.
> The Mover on the other hand does not have iterations, so if moving a path 
> runs for more than 20 minutes, and there are moves decided and enqueued 
> between two DataNode, after 20 minutes Mover will stop with the following 
> exception reported to the console (lines might differ as this exception came 
> from a CDH5.12.1 installation).
>  java.io.IOException: Block move timed out
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:382)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:328)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2500(Dispatcher.java:186)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:956)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  
> Note that this issue is not coming up if all blocks can be moved inside the 
> DataNodes without having to move the block to an other DataNode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-169) Add Volume IO Stats

2018-06-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-169:

Attachment: (was: HDDS-169-HDDS-48.00.patch)

> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is used to add VolumeIO stats in the datanode.
> Add IO calculations for Chunk operations.
> readBytes, readOpCount, writeBytes, writeOpCount, readTime, writeTime.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514330#comment-16514330
 ] 

Hudson commented on HDFS-13676:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14436 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14436/])
HDFS-13676. TestEditLogRace fails on Windows. Contributed by Zuoming (inigoiri: 
rev eebeb6033fd09791fcbff626f128a98e393f0a88)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLogRace.java


> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> HDFS-13676.001.patch, TestEditLogRace-Report-branch-2.001.txt, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories existing. This is because the _getConf()_ function doesn't 
> specify creating any directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why it was commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should also fail on Linux, I guess.
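
A sketch of what "configure the directories to be created" amounts to, assuming 
the test conf simply needs name/edits dirs pointed at real locations (paths and 
helper name are illustrative):

{code:java}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

/** Sketch only: give saveFSImageInAllDirs somewhere real to write. */
public class EditLogRaceConfSketch {
  static Configuration getConf(File baseDir) {
    Configuration conf = new Configuration();
    conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
        new File(baseDir, "name").toURI().toString());
    conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY,
        new File(baseDir, "edits").toURI().toString());
    return conf;
  }
}
{code}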



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13174) hdfs mover -p /path times out after 20 min

2018-06-15 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514331#comment-16514331
 ] 

Wei-Chiu Chuang commented on HDFS-13174:


Thanks [~pifta] for the insight. Here's the relevant code:
{code:title=TestBalancer#testMaxIterationTime}
// set client socket timeout to have an IN_PROGRESS notification back from
// the DataNode about the copy in every second.
conf.setLong(DFSConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY, 2000L);
{code}
and 
{code:title=BlockReceiver#(constructor)}
  // For replaceBlock() calls response should be sent to avoid socketTimeout
  // at clients. So sending with the interval of 0.5 * socketTimeout
  final long readTimeout = datanode.getDnConf().socketTimeout;
  this.responseInterval = (long) (readTimeout * 0.5);
{code}

Patch v4 makes sense to me, +1. Patch v5 actually failed the shaded client build, 
most likely because of the dependency.


> hdfs mover -p /path times out after 20 min
> --
>
> Key: HDFS-13174
> URL: https://issues.apache.org/jira/browse/HDFS-13174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
> Attachments: HDFS-13174.001.patch, HDFS-13174.002.patch, 
> HDFS-13174.003.patch, HDFS-13174.004.patch, HDFS-13174.005.patch
>
>
> In HDFS-11015 there is an iteration timeout introduced in Dispatcher.Source 
> class, that is checked during dispatching the moves that the Balancer and the 
> Mover does. This timeout is hardwired to 20 minutes.
> In the Balancer we have iterations, and even if an iteration is timing out 
> the Balancer runs further and does an other iteration before it fails if 
> there were no moves happened in a few iterations.
> The Mover on the other hand does not have iterations, so if moving a path 
> runs for more than 20 minutes, and there are moves decided and enqueued 
> between two DataNode, after 20 minutes Mover will stop with the following 
> exception reported to the console (lines might differ as this exception came 
> from a CDH5.12.1 installation).
>  java.io.IOException: Block move timed out
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:382)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:328)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2500(Dispatcher.java:186)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:956)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  
> Note that this issue is not coming up if all blocks can be moved inside the 
> DataNodes without having to move the block to an other DataNode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-169) Add Volume IO Stats

2018-06-15 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514322#comment-16514322
 ] 

Bharat Viswanadham commented on HDDS-169:
-

This patch is dependant on HDDS-160.

> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-169-HDDS-48.00.patch
>
>
> This Jira is used to add VolumeIO stats in the datanode.
> Add IO calculations for Chunk operations.
> readBytes, readOpCount, writeBytes, writeOpCount, readTime, writeTime.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-169) Add Volume IO Stats

2018-06-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-169:

Attachment: HDDS-169-HDDS-48.00.patch

> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-169-HDDS-48.00.patch
>
>
> This Jira is used to add VolumeIO stats in the datanode.
> Add IO calculations for Chunk operations.
> readBytes, readOpCount, writeBytes, writeOpCount, readTime, writeTime.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-15 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13676:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 3.0.4
  2.9.2
  3.1.1
  3.2.0
  2.10.0
Target Version/s: 2.9.1, 3.1.0  (was: 3.1.0, 2.9.1)
  Status: Resolved  (was: Patch Available)

Thanks [~zuzhan] for the patch.
Committed [^HDFS-13676.001.patch] to trunk, branch-3.1, and branch-3.0, and 
[^HDFS-13676-branch-2.000.patch] to branch-2 and branch-2.9.

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> HDFS-13676.001.patch, TestEditLogRace-Report-branch-2.001.txt, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When try to call _FSImage.saveFSImageInAllDirs_, there's actually no 
> directories existing. This is because the _getConf()_ function doesn't 
> specify creating any directories.
>  
> Fix:
> Remove the comment for the two lines that config directories to be created.
>  
> Concern:
> Not for sure why it was commented in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  And it should also fail for Linux I guess.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13186) [PROVIDED Phase 2] Multipart Uploader API

2018-06-15 Thread Chris Douglas (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514298#comment-16514298
 ] 

Chris Douglas commented on HDFS-13186:
--

v010 just fixes javadoc errors and adds more param javadoc for 
{{MultipartUploader}}. I'm +1 on this and will commit if Jenkins comes back 
clean.

> [PROVIDED Phase 2] Multipart Uploader API
> -
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, 
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch, 
> HDFS-13186.006.patch, HDFS-13186.007.patch, HDFS-13186.008.patch, 
> HDFS-13186.009.patch, HDFS-13186.010.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks locally 
> as it streams data to the external service. This requires a copy for each 
> block inside the HDFS system and then a copy for the block to be sent to the 
> external system.
>  # Better approach: a single point (e.g. Namenode or an SPS-style external client) 
> and DataNodes coordinate in a multipart/multinode upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
> int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
> List<Pair<Integer, PartHandle>> handles,
> UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle, so they can be serialized and deserialized in the hadoop-hdfs 
> project without knowledge of how to deserialize, e.g., S3A's version of an 
> UploadHandle and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as a 
> file part. The complete call will perform a concat on the blocks.
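
A hypothetical caller of the proposed API, assuming a commons-lang3 Pair and 
keeping the handles opaque (the interface stubs below are stand-ins, not the 
patch):

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.lang3.tuple.Pair;
import org.apache.hadoop.fs.Path;

/** Sketch only: upload a file in two parts through the proposed API. */
public class MultipartUploadSketch {
  interface MultipartUploader {            // stands in for the proposed API
    UploadHandle multipartInit(Path filePath) throws IOException;
    PartHandle multipartPutPart(InputStream in, int partNumber,
        UploadHandle uploadId) throws IOException;
    void multipartComplete(Path filePath,
        List<Pair<Integer, PartHandle>> handles, UploadHandle uploadId)
        throws IOException;
  }
  interface UploadHandle {}                // opaque, serializable in practice
  interface PartHandle {}

  static void uploadInTwoParts(MultipartUploader uploader, Path file,
      byte[] part1, byte[] part2) throws IOException {
    UploadHandle upload = uploader.multipartInit(file);
    List<Pair<Integer, PartHandle>> parts = new ArrayList<>();
    parts.add(Pair.of(1,
        uploader.multipartPutPart(new ByteArrayInputStream(part1), 1, upload)));
    parts.add(Pair.of(2,
        uploader.multipartPutPart(new ByteArrayInputStream(part2), 2, upload)));
    uploader.multipartComplete(file, parts, upload);  // e.g. concat on HDFS
  }
}
{code}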



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-15 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514294#comment-16514294
 ] 

Xiao Chen commented on HDFS-13682:
--

Test failures don't look related. [~daryn] / [~jojochuang], do you have 
cycles to review?

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.01.patch, 
> HDFS-13682.dirty.repro.branch-2.patch, HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS NN) 
> cannot authenticate with the server after the authentication token (which is 
> cached by KMSCP) expires, even if the HDFS client RPC has valid Kerberos 
> credentials.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-174) Shell error messages are often cryptic

2018-06-15 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reassigned HDDS-174:


Assignee: Nanda kumar

> Shell error messages are often cryptic
> --
>
> Key: HDDS-174
> URL: https://issues.apache.org/jira/browse/HDDS-174
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Nanda kumar
>Priority: Critical
> Fix For: 0.2.1
>
>
> Error messages in the Ozone shell are often too cryptic. e.g.
> {code}
> $ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
> Command Failed : Create key failed, error:INTERNAL_ERROR
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-15 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514292#comment-16514292
 ] 

genericqa commented on HDFS-13682:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  5m  
6s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
13s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 1s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}232m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13682 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928006/HDFS-13682.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 477af303e894 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3e37a9a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514288#comment-16514288
 ] 

Íñigo Goiri commented on HDFS-13676:


Thanks [~daryn], committing then.

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> HDFS-13676.001.patch, TestEditLogRace-Report-branch-2.001.txt, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories. This is because the _getConf()_ function doesn't specify that 
> any directories should be created.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why these lines were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should presumably also fail on Linux.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2018-06-15 Thread Misha Dmitriev (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514289#comment-16514289
 ] 

Misha Dmitriev commented on HDFS-13671:
---

[~linyiqun] did you check how much time the NN was spending in GC when it was hung? 
That is, it would be good to verify that the problem is indeed with the NN code 
running some suboptimal operations for a long time, and not with the JVM itself 
busily collecting the heap. Of course, if you have a relatively small heap 
(up to 3-5 GB), GC is unlikely to take much time anyway. But with bigger heaps, 
it may become a factor to consider.

> Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
> --
>
> Key: HDFS-13671
> URL: https://issues.apache.org/jira/browse/HDFS-13671
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Yiqun Lin
>Priority: Major
>
> NameNode hung when deleting large files/blocks. The stack info:
> {code}
> "IPC Server handler 4 on 8020" #87 daemon prio=5 os_prio=0 
> tid=0x7fb505b27800 nid=0x94c3 runnable [0x7fa861361000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.compare(FoldedTreeSet.java:474)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.removeAndGet(FoldedTreeSet.java:849)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.remove(FoldedTreeSet.java:911)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.removeBlock(DatanodeStorageInfo.java:252)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:194)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:108)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlockFromMap(BlockManager.java:3813)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlock(BlockManager.java:3617)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:4270)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4244)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4180)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:871)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:311)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:625)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> {code}
> In the current deletion logic in the NameNode, there are mainly two steps:
> * Collect INodes and all blocks to be deleted, then delete the INodes.
> * Remove blocks chunk by chunk in a loop.
> Actually the first step should be the more expensive operation and should take 
> more time. However, we now always see the NN hang during the remove-block 
> operation. 
> Looking into this, we introduced a new structure, {{FoldedTreeSet}}, to get 
> better performance when handling FBRs/IBRs. But compared with the earlier 
> implementation of the remove-block logic, {{FoldedTreeSet}} seems slower, 
> since it takes additional time to rebalance tree nodes. When there are many 
> blocks to be removed/deleted, this looks bad.
> For the get-type operations in {{DatanodeStorageInfo}}, we only provide 
> {{getBlockIterator}} to return a block iterator and no other get operation 
> for a specified block. Do we still need to use {{FoldedTreeSet}} in 
> {{DatanodeStorageInfo}}? As we know, {{FoldedTreeSet}} benefits gets, not 
> updates. Maybe we can revert this to the earlier implementation.
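For readers less familiar with this code path, here is a simplified sketch of the chunked remove-block phase described above. The types and the chunk size are stand-ins assumed for illustration, not the actual FSNamesystem/BlocksMap code; the point is that a slow per-block remove in the underlying block set directly stretches how long the write lock is held per chunk.
{code:java}
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified sketch of the chunked block-removal loop.
class ChunkedBlockRemovalSketch {
  private static final int BLOCK_DELETION_INCREMENT = 1000;  // assumed chunk size
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();

  interface Block {}
  interface BlocksMap { void removeBlock(Block b); }  // backed by per-storage block sets

  void removeBlocks(List<Block> collectedBlocks, BlocksMap blocksMap) {
    int start = 0;
    while (start < collectedBlocks.size()) {
      int end = Math.min(start + BLOCK_DELETION_INCREMENT, collectedBlocks.size());
      fsLock.writeLock().lock();
      try {
        // Each removeBlock() hits the per-storage block set; if that set
        // rebalances on every removal (as a tree-based structure may), the
        // chunk takes longer and the write lock is held longer.
        for (Block b : collectedBlocks.subList(start, end)) {
          blocksMap.removeBlock(b);
        }
      } finally {
        fsLock.writeLock().unlock();
      }
      start = end;
    }
  }
}
{code}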



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13186) [PROVIDED Phase 2] Multipart Uploader API

2018-06-15 Thread Chris Douglas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-13186:
-
Attachment: HDFS-13186.010.patch

> [PROVIDED Phase 2] Multipart Uploader API
> -
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, 
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch, 
> HDFS-13186.006.patch, HDFS-13186.007.patch, HDFS-13186.008.patch, 
> HDFS-13186.009.patch, HDFS-13186.010.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks locally 
> as it streams data to the external service. This requires a copy for each 
> block inside the HDFS system and then a copy for the block to be sent to the 
> external system.
>  # Better approach: Single point (e.g. Namenode or SPS style external client) 
> and Datanodes coordinate in a multipart - multinode upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
> int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
> List> handles, 
> UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle, so they can be serialized and deserialized in the hadoop-hdfs 
> project without knowledge of how to deserialize e.g. S3A's version of an 
> UploadHandle and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as a 
> file part. The complete call will perform a concat on the blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13687) ConfiguredFailoverProxyProvider could direct requests to SBN

2018-06-15 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13687:
---

 Summary: ConfiguredFailoverProxyProvider could direct requests to 
SBN
 Key: HDFS-13687
 URL: https://issues.apache.org/jira/browse/HDFS-13687
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chao Sun
Assignee: Chao Sun


In case there are multiple SBNs and {{dfs.ha.allow.stale.reads}} is set to 
true, failover could go to an SBN, which may then serve read requests from the 
client. This may not be the expected behavior. This issue came up while we were 
working on HDFS-12943 and HDFS-12976.

A better approach could be to check {{HAServiceState}} and find the active NN 
when performing failover. This can also reduce the number of failovers the 
client has to do when there are multiple SBNs.
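A rough sketch of that idea follows. The {{getServiceStatus()}} probe is the standard {{HAServiceProtocol}} call; how the per-NameNode proxies are obtained, and the class and method names, are assumptions for illustration only.
{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

// Sketch only: instead of blindly rotating to the next configured NameNode on
// failover, probe each candidate's HA state and prefer the one reporting ACTIVE.
class ActiveNameNodeSelectorSketch {
  /** Returns the index of the first candidate reporting ACTIVE, or -1 if none. */
  static int findActive(List<HAServiceProtocol> candidates) {
    for (int i = 0; i < candidates.size(); i++) {
      try {
        HAServiceState state = candidates.get(i).getServiceStatus().getState();
        if (state == HAServiceState.ACTIVE) {
          return i;
        }
      } catch (IOException e) {
        // Unreachable NameNode: skip it and keep probing the others.
      }
    }
    return -1;
  }
}
{code}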



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-15 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514232#comment-16514232
 ] 

Daryn Sharp commented on HDFS-13676:


Hmm. I don’t intentionally submit patches with commented-out code, so it was 
likely an accident. 

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> HDFS-13676.001.patch, TestEditLogRace-Report-branch-2.001.txt, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories. This is because the _getConf()_ function doesn't specify that 
> any directories should be created.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why these lines were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should presumably also fail on Linux.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-174) Shell error messages are too cryptic

2018-06-15 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-174:
--

 Summary: Shell error messages are too cryptic
 Key: HDDS-174
 URL: https://issues.apache.org/jira/browse/HDDS-174
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal
 Fix For: 0.2.1


Error messages in the Ozone shell are often too cryptic. e.g.
{code}
$ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
Command Failed : Create key failed, error:INTERNAL_ERROR
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-174) Shell error messages are often cryptic

2018-06-15 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-174:
---
Summary: Shell error messages are often cryptic  (was: Shell error messages 
are too cryptic)

> Shell error messages are often cryptic
> --
>
> Key: HDDS-174
> URL: https://issues.apache.org/jira/browse/HDDS-174
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Priority: Critical
> Fix For: 0.2.1
>
>
> Error messages in the Ozone shell are often too cryptic. e.g.
> {code}
> $ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
> Command Failed : Create key failed, error:INTERNAL_ERROR
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13686:
--
Description: Currently, we have R/W FSNamesystemLock metrics per operation. 
It'd be useful to have an overall metric too.

> Add overall metrics for FSNamesystemLock
> 
>
> Key: HDFS-13686
> URL: https://issues.apache.org/jira/browse/HDFS-13686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13686.000.patch
>
>
> Currently, we have R/W FSNamesystemLock metrics per operation. It'd be useful 
> to have an overall metric too.
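As a sketch of what "overall" could mean here (illustrative only, not the actual FSNamesystemLock metrics plumbing): each read/write lock hold keeps feeding its per-operation counter and is additionally folded into one aggregate.
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch: per-operation lock-hold metrics plus a single overall
// aggregate, mirroring the idea of an "overall" FSNamesystemLock metric.
class LockHoldMetricsSketch {
  private final Map<String, LongAdder> perOpHoldTimeMs = new ConcurrentHashMap<>();
  private final LongAdder overallHoldTimeMs = new LongAdder();   // proposed aggregate
  private final LongAdder overallHoldCount = new LongAdder();

  void recordHold(String operationName, long heldForMs) {
    perOpHoldTimeMs.computeIfAbsent(operationName, k -> new LongAdder())
        .add(heldForMs);                 // existing per-operation view
    overallHoldTimeMs.add(heldForMs);    // overall view
    overallHoldCount.increment();
  }

  double overallAverageHoldMs() {
    long count = overallHoldCount.sum();
    return count == 0 ? 0.0 : (double) overallHoldTimeMs.sum() / count;
  }
}
{code}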



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13186) [PROVIDED Phase 2] Multipart Uploader API

2018-06-15 Thread Chris Douglas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-13186:
-
Summary: [PROVIDED Phase 2] Multipart Uploader API  (was: [PROVIDED Phase 
2] Multipart Multinode uploader API + Implementations)

> [PROVIDED Phase 2] Multipart Uploader API
> -
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, 
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch, 
> HDFS-13186.006.patch, HDFS-13186.007.patch, HDFS-13186.008.patch, 
> HDFS-13186.009.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks locally 
> as it streams data to the external service. This requires a copy for each 
> block inside the HDFS system and then a copy for the block to be sent to the 
> external system.
>  # Better approach: Single point (e.g. Namenode or SPS style external client) 
> and Datanodes coordinate in a multipart - multinode upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
> int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
> List> handles, 
> UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle, so they can be serialized and deserialized in the hadoop-hdfs 
> project without knowledge of how to deserialize e.g. S3A's version of an 
> UploadHandle and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as a 
> file part. The complete call will perform a concat on the blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13686 started by Lukas Majercak.
-
> Add overall metrics for FSNamesystemLock
> 
>
> Key: HDFS-13686
> URL: https://issues.apache.org/jira/browse/HDFS-13686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13686.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13686:
--
Attachment: HDFS-13686.000.patch

> Add overall metrics for FSNamesystemLock
> 
>
> Key: HDFS-13686
> URL: https://issues.apache.org/jira/browse/HDFS-13686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13686.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13686:
--
Status: Patch Available  (was: In Progress)

> Add overall metrics for FSNamesystemLock
> 
>
> Key: HDFS-13686
> URL: https://issues.apache.org/jira/browse/HDFS-13686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13686.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13686) Add overall metrics for FSNamesystemLock

2018-06-15 Thread Lukas Majercak (JIRA)
Lukas Majercak created HDFS-13686:
-

 Summary: Add overall metrics for FSNamesystemLock
 Key: HDFS-13686
 URL: https://issues.apache.org/jira/browse/HDFS-13686
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, namenode
Reporter: Lukas Majercak
Assignee: Lukas Majercak






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-141) Remove PipeLine Class from SCM and move the data field in the Pipeline to ContainerInfo

2018-06-15 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514188#comment-16514188
 ] 

Mukul Kumar Singh commented on HDDS-141:


Thanks for the updated patch, [~shashikant]. +1, the patch looks good to me.
I will commit this shortly.

> Remove PipeLine Class from SCM and move the data field in the Pipeline to 
> ContainerInfo
> ---
>
> Key: HDDS-141
> URL: https://issues.apache.org/jira/browse/HDDS-141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-141.00.patch, HDDS-141.01.patch, HDDS-141.02.patch, 
> HDDS-141.03.patch, HDDS-141.04.patch
>
>
> The Pipeline class currently differs from PipelineChannel only in the data 
> field; this field was introduced with HDFS-8 to maintain per-container 
> local data. However, this data field can be moved to the ContainerInfo class, 
> and then PipelineChannel can be used interchangeably with Pipeline 
> everywhere. This will help make the code cleaner.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514175#comment-16514175
 ] 

Hudson commented on HDFS-13673:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14435 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14435/])
HDFS-13673. TestNameNodeMetrics fails on Windows. Contributed by Zuoming 
(inigoiri: rev 43d994e4a6dfd1c24eafb909d6f8a0663b20769a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java


> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13673.000.patch, HDFS-13673.001.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt, 
> TestNameNodeMetrics-testVolumeFailures-Report.001.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ tries to call 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is actually rename the folder from 
> _volume_name_ to _volume_name_._origin_ and create a new file named 
> _volume_name_. Inside the folder there are two things: 1. a directory named 
> "_current_", 2. a file named "_in_use.lock_". Windows behaves differently 
> from Linux when renaming the parent folder of a locked file: Windows prevents 
> the rename while Linux allows it.
> Fix:
> In order to inject a data failure into the volume, instead of renaming the 
> volume folder itself, rename the folder inside it which doesn't hold a lock. 
> Since the folder inside the volume is "_current_", we only need to 
> inject the data failure into _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-06-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514161#comment-16514161
 ] 

Íñigo Goiri commented on HDFS-13563:


The tests now pass 
[here|https://builds.apache.org/job/hadoop-trunk-win/498/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdminWithHA/].

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Lukas Majercak
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13563.000.patch, HDFS-13563.001.patch, 
> HDFS-13563.002.patch
>
>
> [Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> TestDFSAdminWithHA has 4 timeout tests with "test timed out after 
> 3 milliseconds"
> {code:java}
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-15 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13673:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: (was: 2.9.1)
  (was: 3.1.0)
  3.0.4
  2.9.2
  3.1.1
  3.2.0
  2.10.0
Target Version/s: 2.9.1, 3.1.0  (was: 3.1.0, 2.9.1)
  Status: Resolved  (was: Patch Available)

Thanks [~zuzhan] for the patch.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13673.000.patch, HDFS-13673.001.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt, 
> TestNameNodeMetrics-testVolumeFailures-Report.001.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ tries to call 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is actually rename the folder from 
> _volume_name_ to _volume_name_._origin_ and create a new file named 
> _volume_name_. Inside the folder there are two things: 1. a directory named 
> "_current_", 2. a file named "_in_use.lock_". Windows behaves differently 
> from Linux when renaming the parent folder of a locked file: Windows prevents 
> the rename while Linux allows it.
> Fix:
> In order to inject a data failure into the volume, instead of renaming the 
> volume folder itself, rename the folder inside it which doesn't hold a lock. 
> Since the folder inside the volume is "_current_", we only need to 
> inject the data failure into _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13673) TestNameNodeMetrics fails on Windows

2018-06-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514131#comment-16514131
 ] 

Íñigo Goiri commented on HDFS-13673:


Let's go with this one for now and we can figure out the flaky one later.
+1 on  [^HDFS-13673.001.patch].
Committing.

> TestNameNodeMetrics fails on Windows
> 
>
> Key: HDFS-13673
> URL: https://issues.apache.org/jira/browse/HDFS-13673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13673.000.patch, HDFS-13673.001.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt, 
> TestNameNodeMetrics-testVolumeFailures-Report.001.txt
>
>
> _TestNameNodeMetrics_ fails on Windows
>  
> Problem:
> This is because _testVolumeFailures_ tries to call 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is actually rename the folder from 
> _volume_name_ to _volume_name_._origin_ and create a new file named 
> _volume_name_. Inside the folder there are two things: 1. a directory named 
> "_current_", 2. a file named "_in_use.lock_". Windows behaves differently 
> from Linux when renaming the parent folder of a locked file: Windows prevents 
> the rename while Linux allows it.
> Fix:
> In order to inject a data failure into the volume, instead of renaming the 
> volume folder itself, rename the folder inside it which doesn't hold a lock. 
> Since the folder inside the volume is "_current_", we only need to 
> inject the data failure into _volume_name/current_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514126#comment-16514126
 ] 

Íñigo Goiri commented on HDFS-13676:


HDFS-6440 is the support for more than 2 Namenodes.
So branch-3.0 would be the start for  [^HDFS-13676.001.patch].

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> HDFS-13676.001.patch, TestEditLogRace-Report-branch-2.001.txt, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories. This is because the _getConf()_ function doesn't specify that 
> any directories should be created.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why these lines were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should presumably also fail on Linux.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-15 Thread Zuoming Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514120#comment-16514120
 ] 

Zuoming Zhang commented on HDFS-13676:
--

[~elgoiri]

Here is the issue link:

https://issues.apache.org/jira/browse/HDFS-6440

Here is the github link:

[https://github.com/apache/hadoop/commit/49dfad942970459297f72632ed8dfd353e0c86de]

From the issue description, it was included starting with 3.0.0.

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> HDFS-13676.001.patch, TestEditLogRace-Report-branch-2.001.txt, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories. This is because the _getConf()_ function doesn't specify that 
> any directories should be created.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why these lines were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should presumably also fail on Linux.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2018-06-15 Thread Andrew Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514115#comment-16514115
 ] 

Andrew Wang commented on HDFS-13671:


I'm fine with reverting if we're seeing production issues. I wasn't that 
involved with HDFS-9260 except to try and answer Daryn's questions about 
real-world performance.

Given that there seems to be a lot more interest in maintaining the older 
version, I'm also inclined to revert for maintenance purposes.

> Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
> --
>
> Key: HDFS-13671
> URL: https://issues.apache.org/jira/browse/HDFS-13671
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Yiqun Lin
>Priority: Major
>
> NameNode hung when deleting large files/blocks. The stack info:
> {code}
> "IPC Server handler 4 on 8020" #87 daemon prio=5 os_prio=0 
> tid=0x7fb505b27800 nid=0x94c3 runnable [0x7fa861361000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.compare(FoldedTreeSet.java:474)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.removeAndGet(FoldedTreeSet.java:849)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.remove(FoldedTreeSet.java:911)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.removeBlock(DatanodeStorageInfo.java:252)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:194)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:108)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlockFromMap(BlockManager.java:3813)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlock(BlockManager.java:3617)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:4270)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4244)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4180)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:871)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:311)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:625)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> {code}
> In the current deletion logic in the NameNode, there are mainly two steps:
> * Collect INodes and all blocks to be deleted, then delete the INodes.
> * Remove blocks chunk by chunk in a loop.
> Actually the first step should be the more expensive operation and should take 
> more time. However, we now always see the NN hang during the remove-block 
> operation. 
> Looking into this, we introduced a new structure, {{FoldedTreeSet}}, to get 
> better performance when handling FBRs/IBRs. But compared with the earlier 
> implementation of the remove-block logic, {{FoldedTreeSet}} seems slower, 
> since it takes additional time to rebalance tree nodes. When there are many 
> blocks to be removed/deleted, this looks bad.
> For the get-type operations in {{DatanodeStorageInfo}}, we only provide 
> {{getBlockIterator}} to return a block iterator and no other get operation 
> for a specified block. Do we still need to use {{FoldedTreeSet}} in 
> {{DatanodeStorageInfo}}? As we know, {{FoldedTreeSet}} benefits gets, not 
> updates. Maybe we can revert this to the earlier implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design

2018-06-15 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HDDS-173:
---

Assignee: Hanisha Koneru

> Refactor Dispatcher and implement Handler for new ContainerIO design
> 
>
> Key: HDDS-173
> URL: https://issues.apache.org/jira/browse/HDDS-173
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>
> Dispatcher will pass the ContainerCommandRequests to the corresponding 
> Handler based on the ContainerType. Each ContainerType will have its own 
> Handler. The Handler class will process the message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design

2018-06-15 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-173:
---

 Summary: Refactor Dispatcher and implement Handler for new 
ContainerIO design
 Key: HDDS-173
 URL: https://issues.apache.org/jira/browse/HDDS-173
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru


Dispatcher will pass the ContainerCommandRequests to the corresponding Handler 
based on the ContainerType. Each ContainerType will have its own Handler. The 
Handler class will process the message.
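A minimal sketch of that dispatch pattern follows; all names below are placeholders chosen for illustration, not the eventual HDDS classes.
{code:java}
import java.util.EnumMap;
import java.util.Map;

// Placeholder sketch of "one Handler per ContainerType" dispatch.
class DispatcherSketch {
  enum ContainerType { KEY_VALUE }

  interface ContainerCommandRequest { ContainerType getContainerType(); }
  interface ContainerCommandResponse {}
  interface Handler { ContainerCommandResponse handle(ContainerCommandRequest request); }

  private final Map<ContainerType, Handler> handlers = new EnumMap<>(ContainerType.class);

  void registerHandler(ContainerType type, Handler handler) {
    handlers.put(type, handler);
  }

  ContainerCommandResponse dispatch(ContainerCommandRequest request) {
    Handler handler = handlers.get(request.getContainerType());
    if (handler == null) {
      throw new IllegalStateException(
          "No handler registered for " + request.getContainerType());
    }
    return handler.handle(request);  // the Handler owns processing of the message
  }
}
{code}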



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-15 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514096#comment-16514096
 ] 

Arpit Agarwal commented on HDDS-167:


Preliminary patch for Jenkins run.

Still need to fix acceptance tests.

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-15 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-167:
---
Attachment: HDDS-167.01.patch

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-15 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-167:
---
Status: Patch Available  (was: In Progress)

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-15 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514067#comment-16514067
 ] 

Chao Sun commented on HDFS-12976:
-

[~xkrogen]: let me know if I misunderstood anything - if someone is using 
{{ConfiguredFailoverProxyProvider}} and a read request comes in, then the proxy 
may forward this request to an observer node, right? The observer node will then 
happily process this request without throwing any exception. This doesn't look 
like the right behavior, as in the future we may add a client-side flag to 
control whether read requests should go to observers or not. 

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976-HDFS-12943.003.patch, HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement {{FailoverProxyProvider}} 
> interface and be able to submit read requests to ANN and SBN(s).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-15 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514055#comment-16514055
 ] 

Erik Krogen edited comment on HDFS-12976 at 6/15/18 4:31 PM:
-

Hey [~csun], I don't see why the changes to 
{{ConfiguredFailoverProxyProvider#performFailover()}} are necessary. The intent 
as I understand it is that you increment the proxy index, then try it, and if 
the proxy is in the wrong state it will throw an exception and 
{{performFailover()}} will be called again. This process should work fine 
without the {{isObserverState}} check.


was (Author: xkrogen):
Hey [~csun], I don't see why the changes to 
{{ConfiguredFailoverProxyProvider#performFailover()}} are necessary. The intent 
is that you increment the proxy index, then try it, and if the proxy is in the 
wrong state it will throw an exception and {{performFailover()}} will be called 
again. This process should work fine without the {{isObserverState}} check.

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976-HDFS-12943.003.patch, HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement {{FailoverProxyProvider}} 
> interface and be able to submit read requests to ANN and SBN(s).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-15 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514055#comment-16514055
 ] 

Erik Krogen commented on HDFS-12976:


Hey [~csun], I don't see why the changes to 
{{ConfiguredFailoverProxyProvider#performFailover()}} are necessary. The intent 
is that you increment the proxy index, then try it, and if the proxy is in the 
wrong state it will throw an exception and {{performFailover()}} will be called 
again. This process should work fine without the {{isObserverState}} check.
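For illustration, a simplified sketch of the rotate-and-retry flow described above (this is not the actual RetryInvocationHandler or proxy-provider code; the names and structure are made up to show the idea):
{code:java}
import java.util.List;
import java.util.concurrent.Callable;

// Conceptual sketch: advance to the next proxy on failure and try again;
// a proxy in the wrong HA state rejects the call and we rotate once more.
class FailoverRetrySketch<T> {
  private final List<Callable<T>> proxies;  // one callable per configured NameNode
  private int currentIndex = 0;

  FailoverRetrySketch(List<Callable<T>> proxies) {
    this.proxies = proxies;
  }

  T invokeWithFailover(int maxFailovers) throws Exception {
    Exception last = null;
    for (int attempt = 0; attempt <= maxFailovers; attempt++) {
      try {
        return proxies.get(currentIndex).call();     // try the current proxy
      } catch (Exception standbyOrConnectError) {
        last = standbyOrConnectError;
        // performFailover(): simply move to the next proxy in the list.
        currentIndex = (currentIndex + 1) % proxies.size();
      }
    }
    throw last != null ? last : new IllegalStateException("no attempt was made");
  }
}
{code}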

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976-HDFS-12943.003.patch, HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement {{FailoverProxyProvider}} 
> interface and be able to submit read requests to ANN and SBN(s).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514048#comment-16514048
 ] 

Íñigo Goiri commented on HDFS-13681:


[~surmountian], can you remove the native-client comment?
Other than that, it looks good; the typical Windows fixes.

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with the below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> because the path is not processed properly on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514043#comment-16514043
 ] 

Íñigo Goiri commented on HDFS-13676:


[^HDFS-13676.001.patch] looks good now.
As [~zuzhan] said, the failures don't seem related.
Which patch applies to which branches?
branch-2 and branch-2.9 use  [^HDFS-13676-branch-2.000.patch] for sure but I'm 
not sure when the change for  [^HDFS-13676.001.patch] kicked in.

+1

I'll hold this to give [~daryn] a chance to double-check.
I think the code was commented out by mistake, but I'd like to confirm.

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> HDFS-13676.001.patch, TestEditLogRace-Report-branch-2.001.txt, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories. This is because the _getConf()_ function doesn't specify that 
> any directories should be created.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why these lines were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should presumably also fail on Linux.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-06-15 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514038#comment-16514038
 ] 

Daniel Templeton commented on HDFS-13448:
-

I agree with [~belugabehr] that building a test that uses mocks the way 
[~daryn] suggested sounds brittle and doesn't actually test the desired 
outcome.  On the other hand, testing for random distribution is just 
intentionally building a flaky test.  I don't really see any alternatives, 
unfortunately.  This is a case of 'can't prove a negative.'  The only way to 
properly test that the policy isn't playing favorites for the first replica is 
to test the internal logic where that happens.  I don't see a better solution 
than what [~daryn] suggested.

Two additional comments on the patch.  If we leave {{clientNode}} as {{null}}, 
we get {{null}} added to the list of excluded nodes.  Strictly speaking that's 
not a problem, but it feels like we're planting a land mine to blow someone up 
later.  Also, it would be nice to add a comment where we leave {{clientNode}} 
null to say that it's on purpose so that a developer who comes later doesn't 
think it's a mistake.
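
As a concrete illustration of that last point, a hedged sketch of leaving 
{{clientNode}} null with an explicit comment; the surrounding NameNode-side 
names are assumptions, not the actual patch:

{code}
// Hedged sketch of the NameNode-side idea; ignoreClientLocality would be
// derived from the proposed client flag, and the other names are assumptions.
DatanodeDescriptor clientNode = null;
if (!ignoreClientLocality) {
  clientNode =
      blockManager.getDatanodeManager().getDatanodeByHost(clientMachine);
}
// else: clientNode stays null on purpose so the placement policy does not
// favor the writer's node (or its rack) for the first replica; per the
// review comment above, say so explicitly rather than leaving it implied.
{code}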

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch, HDFS-13448.9.patch
>
>
> According to the HDFS Block Place Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when the {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  Where this comes into play is where you have, for example, a flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica and this 
> leads to un-even block placements, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or this {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only so far as now the first block replica will always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
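
For reference, a hedged sketch of how a client requests the existing hint 
today and where the proposed flag would slot in; {{IGNORE_CLIENT_LOCALITY}} 
below is only the name proposed in this issue, not an existing API:

{code}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Hedged sketch: today a client can only ask to skip the local DataNode;
// the proposal adds a flag that also skips the local-rack preference for
// the first replica.
FileSystem fs = FileSystem.get(new Configuration());
EnumSet<CreateFlag> flags = EnumSet.of(
    CreateFlag.CREATE,
    CreateFlag.NO_LOCAL_WRITE
    /* , CreateFlag.IGNORE_CLIENT_LOCALITY  <- proposed, does not exist yet */);
FSDataOutputStream out = fs.create(
    new Path("/flume/events/part-0000"),  // example path
    FsPermission.getFileDefault(),
    flags,
    4096,                                 // buffer size
    (short) 3,                            // replication
    128L * 1024 * 1024,                   // block size
    null);                                // no progress callback
out.close();
{code}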



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2018-06-15 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514036#comment-16514036
 ] 

Xiao Chen commented on HDFS-13671:
--

Looking into this today and will probably discuss with [~andrew.wang]. Will 
get back soon.

> Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
> --
>
> Key: HDFS-13671
> URL: https://issues.apache.org/jira/browse/HDFS-13671
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Yiqun Lin
>Priority: Major
>
> NameNode hung when deleting large files/blocks. The stack info:
> {code}
> "IPC Server handler 4 on 8020" #87 daemon prio=5 os_prio=0 
> tid=0x7fb505b27800 nid=0x94c3 runnable [0x7fa861361000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.compare(FoldedTreeSet.java:474)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.removeAndGet(FoldedTreeSet.java:849)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.remove(FoldedTreeSet.java:911)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.removeBlock(DatanodeStorageInfo.java:252)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:194)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:108)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlockFromMap(BlockManager.java:3813)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlock(BlockManager.java:3617)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:4270)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4244)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4180)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:871)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:311)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:625)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> {code}
> In the current deletion logic in NameNode, there are mainly two steps:
> * Collect INodes and all blocks to be deleted, then delete INodes.
> * Remove blocks  chunk by chunk in a loop.
> Actually the first step should be the more expensive operation and take 
> more time. However, what we always see is the NN hanging during the 
> remove-block operation. 
> Looking into this: we introduced a new structure, {{FoldedTreeSet}}, to get 
> better performance when handling FBRs/IBRs. But compared with the earlier 
> implementation of the remove-block logic, {{FoldedTreeSet}} seems slower, 
> since it takes additional time to rebalance tree nodes. When there are many 
> blocks to be removed/deleted, this looks bad.
> For the get-type operations in {{DatanodeStorageInfo}}, we only provide 
> {{getBlockIterator}} to return a block iterator; there is no other get 
> operation for a specific block. Do we still need to use {{FoldedTreeSet}} in 
> {{DatanodeStorageInfo}}? As we know, {{FoldedTreeSet}} benefits Get, not 
> Update. Maybe we can revert this to the earlier implementation.
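
To make the second step concrete, a rough sketch of the chunked remove-block 
loop described above; the constant name and surrounding structure are 
reconstructed from memory and should be treated as assumptions:

{code}
// Hedged sketch of FSNamesystem#removeBlocks: blocks are dropped from the
// BlockManager in fixed-size chunks, re-acquiring the write lock per chunk
// so other RPC handlers can make progress between chunks.
void removeBlocks(BlocksMapUpdateInfo blocks) {
  Iterator<BlockInfo> iter = blocks.getToDeleteList().iterator();
  while (iter.hasNext()) {
    writeLock();
    try {
      for (int i = 0; i < BLOCK_DELETION_INCREMENT && iter.hasNext(); i++) {
        // each removal ends up in FoldedTreeSet#removeAndGet per the stack above
        blockManager.removeBlock(iter.next());
      }
    } finally {
      writeUnlock();
    }
  }
}
{code}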



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-06-15 Thread Anatoli Shein (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514023#comment-16514023
 ] 

Anatoli Shein commented on HDFS-11520:
--

Okay, so it looks like shadedclient fails now on the same patch that was 
passing just 2 days ago.

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.004.patch, HDFS-11520.005.patch, HDFS-11520.HDFS-8707.000.patch, 
> HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo, etc. should be 
> individually cancelable without impacting other pending RPC calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-15 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13682:
-
Attachment: (was: HDFS-13682.01.patch)

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.01.patch, 
> HDFS-13682.dirty.repro.branch-2.patch, HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token (which 
> is cached by KMSCP) expires, even if the HDFS client RPC has valid Kerberos 
> credentials.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-15 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13682:
-
Attachment: HDFS-13682.01.patch

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.01.patch, 
> HDFS-13682.dirty.repro.branch-2.patch, HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token (which 
> is cached by KMSCP) expires, even if the HDFS client RPC has valid Kerberos 
> credentials.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-15 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513312#comment-16513312
 ] 

Xiao Chen edited comment on HDFS-13682 at 6/15/18 4:08 PM:
---

Took an easier route and debugged branch-2. It turns out HADOOP-9747 does have 
an effect here - specifically at [this 
method|https://github.com/apache/hadoop/commit/59cf7588779145ad5850ad63426743dfe03d8347#diff-e6a2371b73365b7ba7ff9a266b9aa138L724].
 When this meets the KMSCP's morph-based-on-ugi logic, the ugi being used as 
the actual ugi changed from loginUgi to currentUgi. (There is also a weird 
HTTP 400 somehow, which is fixed if contentType is not empty.)

Following this, I confirmed that if we change {{KMSCP#getActualUgi}}'s check 
from {{actualUgi.hasKerberosCredentials()}} to {{!actualUgi.isFromKeytab() && 
!actualUgi.isFromTicket()}} (and making {{UGI#isFromTicket}} public, of 
course), the test passes. This appears to be a more 'compatible' change. 
Patch 1 tries to do this.

 

IMO we should still consider explicitly doing the KMS calls in the NN using the 
NN login ugi; this applies to both the {{getMetadata}} call during createEZ and 
the {{generateEncryptedKey}} call from startFile. The reason is that these 
calls are internal to the NN, and the hdfs rpc caller isn't expected to really 
interact with the KMS in these cases. Can do this in a separate Jira if it 
sounds good to the audience.


was (Author: xiaochen):
Took an easier route and debugged branch-2. It turns out HADOOP-9747 does have 
some effects here - specifically at [this 
method|https://github.com/apache/hadoop/commit/59cf7588779145ad5850ad63426743dfe03d8347#diff-e6a2371b73365b7ba7ff9a266b9aa138L724].
 When this meets the KMSCP's morph-based-on-ugi logic, the ugi being used as 
actual changed from loginUgi to currentUgi. (Also has a weird HTTP 400 somehow, 
which is fixed if contentType is set).

Following this, I confirmed if we change {{KMSCP#getActualUgi}}'s check from 
{{actualUgi.hasKerberosCredentials()}} to {{!actualUgi.isFromKeytab() && 
!actualUgi.isFromTicket()}} (and making {{UGI#isFromTicket}} public of course), 
the test passes. This appears to be a more 'compatible' change. Patch 1 tries 
to do this.

IMO we should still consider explicitly doing the KMS call using the NN login 
ugi, this applies to both the {{getMetadata}} call during createEZ and the 
{{generateEncryptedKey}} call from startFile. Reason being these calls are 
internal to the NN, and the hdfs rpc caller isn't expected to really interact 
with the KMS in this case. Can do this in a separate Jira if it sounds good to 
the audience.
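
A hedged sketch of the check change described in the current comment above; 
the surrounding method body is reconstructed from memory, so treat everything 
except the flipped condition as an assumption:

{code}
// Hedged sketch of KMSClientProvider#getActualUgi with the proposed check.
// UGI#isFromTicket is assumed to have been made public as noted above, and
// containsKmsDt() stands in for the existing KMS delegation-token lookup.
private UserGroupInformation getActualUgi() throws IOException {
  final UserGroupInformation currentUgi = UserGroupInformation.getCurrentUser();
  UserGroupInformation actualUgi = currentUgi.getRealUser() != null
      ? currentUgi.getRealUser()   // proxy-user case: act as the real user
      : currentUgi;
  if (UserGroupInformation.isSecurityEnabled()
      && !containsKmsDt(actualUgi)
      // old check: !actualUgi.hasKerberosCredentials()
      && !actualUgi.isFromKeytab() && !actualUgi.isFromTicket()) {
    // No KMS delegation token and no externally-managed Kerberos login:
    // fall back to the login user for KMS operations.
    actualUgi = UserGroupInformation.getLoginUser();
  }
  return actualUgi;
}
{code}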

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.01.patch, 
> HDFS-13682.dirty.repro.branch-2.patch, HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token (which 
> is cached by KMSCP) expires, even if the HDFS client RPC has valid Kerberos 
> credentials.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-06-15 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514017#comment-16514017
 ] 

genericqa commented on HDFS-11520:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
1s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
47s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 51m 
40s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
10s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  1s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_libhdfs_threaded_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-11520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927969/HDFS-11520.004.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 1b2fdcacd0a9 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3e37a9a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24455/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24455/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24455/testReport/ |
| Max. process+thread count | 241 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24455/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>   

[jira] [Created] (HDFS-13685) Review and Refactor TestDFSOutputStream.java

2018-06-15 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13685:
--

 Summary: Review and Refactor TestDFSOutputStream.java
 Key: HDFS-13685
 URL: https://issues.apache.org/jira/browse/HDFS-13685
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.1.0, 2.10.0
Reporter: BELUGA BEHR


Remove use of deprecated class {{org.apache.hadoop.test.Whitebox}}

Refactor the necessary code to make mocking easier



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13684) Remove Use of Deprecated Whitebox Test Class

2018-06-15 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13684:
--

 Summary: Remove Use of Deprecated Whitebox Test Class
 Key: HDFS-13684
 URL: https://issues.apache.org/jira/browse/HDFS-13684
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 3.1.0, 2.10.0
Reporter: BELUGA BEHR


Unit tests within the Hadoop suite are using the now deprecated class 
{{org.apache.hadoop.test.Whitebox}}.

bq.  This class was ported from org.mockito.internal.util.reflection.Whitebox 
since the class was removed in Mockito 2.1. Using this class is a bad practice. 
Consider refactoring instead of using this.

As stated, refactor the existing tests to remove calls to this class... then 
remove this class from the project.
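
As a hedged illustration of what such a refactor can look like (the class and 
field names below are made up for the example; they are not taken from an 
actual Hadoop test):

{code}
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Instead of reaching into private state with the deprecated helper, e.g.
//   Whitebox.setInternalState(cache, "maxEntries", 1);
// give the class a real seam (constructor parameter or package-private
// setter) and let the test use it directly.
class BoundedCache {
  private final int maxEntries;
  BoundedCache(int maxEntries) { this.maxEntries = maxEntries; }  // test seam
  int getMaxEntries() { return maxEntries; }
}

public class BoundedCacheTest {
  @Test
  public void testMaxEntriesIsConfigurable() {
    BoundedCache cache = new BoundedCache(1);  // no Whitebox reflection needed
    assertEquals(1, cache.getMaxEntries());
  }
}
{code}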



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


