[jira] [Commented] (HDFS-11142) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669731#comment-15669731
 ] 

Hadoop QA commented on HDFS-11142:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
|   | hadoop.hdfs.TestFileCreationDelete |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11142 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839119/HDFS-11142.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 512ab26ad7e0 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 61c0bed |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17583/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17583/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17583/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-11142
> URL: https://issues.apache.org/jira/browse/HDFS-11142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun 

[jira] [Commented] (HDFS-11113) Document dfs.client.read.striped configuration in hdfs-default.xml

2016-11-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669708#comment-15669708
 ] 

Rakesh R commented on HDFS-11113:
-

Could someone help review and push these doc changes? Thanks!

> Document dfs.client.read.striped configuration in hdfs-default.xml
> --
>
> Key: HDFS-11113
> URL: https://issues.apache.org/jira/browse/HDFS-11113
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HDFS-11113-00.patch, HDFS-11113-01.patch
>
>
> {{dfs.client.read.striped.threadpool.size}} should be covered in 
> hdfs-default.xml.






[jira] [Updated] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-11-15 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10368:

Attachment: HDFS-10368-03.patch

Attached another patch fixing test case failures and style warnings.

> Erasure Coding: Deprecate replication-related config keys
> -
>
> Key: HDFS-10368
> URL: https://issues.apache.org/jira/browse/HDFS-10368
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10368-00.patch, HDFS-10368-01.patch, 
> HDFS-10368-02.patch, HDFS-10368-03.patch
>
>
> This jira is to revisit the replication-based config keys and deprecate them 
> (if necessary) in order to make them more meaningful.
> Please refer to the [discussion 
> thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]
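For context, config-key deprecation in Hadoop is normally wired through 
{{Configuration.addDeprecations}}; a minimal sketch of that mechanism follows. 
The key names here are placeholders, not the keys actually touched by the patch.
{code}
// Minimal sketch of Hadoop's config-key deprecation mechanism.
// The key names are placeholders, not the keys chosen in HDFS-10368.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configuration.DeprecationDelta;

public class DeprecationSketch {
  static {
    Configuration.addDeprecations(new DeprecationDelta[] {
        // Reads of the old key are transparently redirected to the new key,
        // and a deprecation warning is logged once.
        new DeprecationDelta("dfs.old.replication.key", "dfs.new.replication.key")
    });
  }
}
{code}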






[jira] [Updated] (HDFS-6874) Add GET_BLOCK_LOCATIONS operation to HttpFS

2016-11-15 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-6874:
--
Attachment: HDFS-6874.02.patch

> Add GET_BLOCK_LOCATIONS operation to HttpFS
> ---
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.02.patch, HDFS-6874.patch
>
>
> GET_BLOCK_LOCATIONS operation is missing in HttpFS, although it is already 
> supported in WebHDFS. For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> {code}
> ...
>  case GETFILEBLOCKLOCATIONS: {
> response = Response.status(Response.Status.BAD_REQUEST).build();
> break;
>   }
> {code}
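A hedged sketch of the direction a fix could take: delegate to 
{{FileSystem#getFileBlockLocations}} and serialize the result, as WebHDFS does. 
The helper below is illustrative only, not the attached patch.
{code}
// Illustrative only, not the HDFS-6874 patch: instead of answering
// BAD_REQUEST, fetch the block locations the same way WebHDFS does.
import java.io.IOException;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GetFileBlockLocationsSketch {
  static BlockLocation[] locations(FileSystem fs, String path,
      long offset, long length) throws IOException {
    // HttpFSServer would serialize this array as JSON in the HTTP response.
    return fs.getFileBlockLocations(fs.getFileStatus(new Path(path)),
        offset, length);
  }
}
{code}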






[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-15 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669652#comment-15669652
 ] 

Jingcheng Du commented on HDFS-9668:


Thanks for the comments [~eddyxu]!
bq. Could you help to clarify what does datasetWriteLock protect? I thought it 
protected FsDatasetImpl#volumes. Because for FsDatasetImpl#volumeMap, some 
functions like createRbw and finalizeBlock use readLock to protect 
volumeMap#add, while in moveBlock() it uses writeLock to protect 
finalizedReplica(), which seems to protect volumeMap.add() as well?
You are right, the dataset lock is used to protect the volumes. I use the write 
lock for write operations on the volumes and the read lock for read operations.
Methods such as readRbw only read from the volumes, so a read lock is enough to 
guard against modification of the volumes while reading the volumes and the 
replicaMap. Meanwhile, these methods need to synchronize operations on the same 
block, so I added a block-related lock.
For the lock in moveBlock(), I think we can use the combination of a read lock 
and a block-related lock here.
bq. And in unfinalizedBlock, it uses read lock to protect volumeMap.remove()?
Actually, it uses both a read lock and a block-related lock to protect 
volumeMap.remove().
First of all, the replicaMap has a mutex to synchronize its read and write 
methods to avoid ConcurrentModificationException (in trunk it is now a lock, 
not a mutex, but I change it back to a mutex in the patch). The question then 
is whether it is safe to allow different blocks to access createRbw and other 
similar block-related methods at the same time. I believe it is.
bq. In a few places, it looks to me that we should use read lock, i.e., 
FsDatasetImpl#getBlockReports(), getFinalizedBlocks() moveBlockAcrossStorage(), 
moveBlock().
For moveBlockAcrossStorage() and moveBlock(), you are right, a read lock is 
enough since these methods only read the volumes.
For getBlockReports() and getFinalizedBlocks(), I think we have to use the 
write lock. These methods iterate over the replicas in the replicaMap, and it 
is not safe to let them run at the same time as createRbw, etc. (a 
ConcurrentModificationException might occur during the iteration if the 
replicaMap is changed by createRbw).
bq. If the replicaMap is protected by read/write lock, does replicaMap still 
need the mutex?
It does. The mutex is a good way to synchronize the read and write operations 
in the replicaMap if we use a read-write lock in FsDatasetImpl.
bq. Should moveBlock() hold readLock and block lock?
I think it can.
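To make the scheme under discussion concrete, here is a minimal sketch of a 
dataset-wide read/write lock combined with per-block locks. The names are 
illustrative, not taken from the attached patch.
{code}
// Illustrative sketch of the discussed scheme: a dataset-wide read/write
// lock plus a per-block lock. Names are hypothetical, not the HDFS-9668 patch.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class DatasetLockingSketch {
  private final ReentrantReadWriteLock datasetLock = new ReentrantReadWriteLock();
  // One lock per block id, so operations on different blocks can run in parallel.
  private final ConcurrentHashMap<Long, ReentrantLock> blockLocks =
      new ConcurrentHashMap<>();

  void createRbw(long blockId) {
    datasetLock.readLock().lock();   // volumes are only read here
    ReentrantLock blockLock =
        blockLocks.computeIfAbsent(blockId, id -> new ReentrantLock());
    blockLock.lock();                // serialize operations on the same block
    try {
      // ... read volumes, then volumeMap.add(...) guarded by the
      // replicaMap's own mutex
    } finally {
      blockLock.unlock();
      datasetLock.readLock().unlock();
    }
  }

  void getBlockReports() {
    datasetLock.writeLock().lock();  // iteration must exclude concurrent writers
    try {
      // ... iterate replicas safely, no ConcurrentModificationException
    } finally {
      datasetLock.writeLock().unlock();
    }
  }
}
{code}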




> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, 
> HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, 
> HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, 
> HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, 
> HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, 
> HDFS-9668-23.patch, HDFS-9668-23.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, 
> HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, 
> HDFS-9668-9.patch, execution_time.png
>
>
> During the HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observe many 
> long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part 
> of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> 

[jira] [Commented] (HDFS-11134) Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock

2016-11-15 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669637#comment-15669637
 ] 

Vinayakumar B commented on HDFS-11134:
--

bq. If this sentence is correct, we need to specify the same port. Hi Yiqun Lin 
and Brahma Reddy Battula, what do you think? If this is true, I'm thinking it's 
better to retry the test when BindException occurs.
I don't think any test in {{TestRenameWhileOpen}} requires the same namenode 
port, because all the streams and clients created before the restart are unused 
after the restart.
I have verified on my local machine that all tests pass without specifying the 
namenode port. There could be streamer threads still running against the old 
namenode addresses, which won't harm the tests. Still, for safety, these 
streams could be aborted and the old clients closed before creating the new 
client after the restart.

For {{TestPendingInvalidateBlock}}, it is not only {{cluster.restartDataNode(..);}} 
that tries to restart the datanodes on the same port; 
{{cluster.restartNameNode(..);}} also restarts the namenode on the same port. 
In fact, there are more {{cluster.restartNameNode(..);}} calls than 
{{cluster.restartDataNode(..);}} calls.
So I don't think changing only {{cluster.restartDataNode(..);}} will fix the 
failures in {{TestPendingInvalidateBlock}}.
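For illustration, the difference being discussed is essentially the following 
sketch (not the attached patch): letting {{MiniDFSCluster}} pick an ephemeral 
namenode port avoids the race on a previously used port.
{code}
// Sketch only: starting MiniDFSCluster with an ephemeral namenode port.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class EphemeralPortSketch {
  static MiniDFSCluster start(Configuration conf) throws Exception {
    // nameNodePort(0) lets the OS choose a free port, so a restart cannot
    // collide with a port still held (or in TIME_WAIT) from an earlier run;
    // pinning a specific port is what produces the BindException.
    return new MiniDFSCluster.Builder(conf)
        .nameNodePort(0)
        .numDataNodes(1)
        .build();
  }
}
{code}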


> Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock
> -
>
> Key: HDFS-11134
> URL: https://issues.apache.org/jira/browse/HDFS-11134
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11134.001.patch, HDFS-11134.002.patch
>
>
> I found two bind exceptions causing unit test failures in past Jenkins 
> builds: one in {{TestRenameWhileOpen}}, the other in 
> {{TestPendingInvalidateBlock}}.
> Here is the stack trace from {{TestRenameWhileOpen}} (I can't find the stack 
> trace from {{TestPendingInvalidateBlock}} now since it happened too long ago, 
> but I'm sure it also failed due to a bind exception):
> {code}
> java.net.BindException: Problem binding to [localhost:42155] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:919)
>   at org.apache.hadoop.ipc.Server.(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestRenameWhileOpen.testWhileOpenRenameToNonExistentDirectory(TestRenameWhileOpen.java:325)
> {code}
> Here specifying the namenode port is not necessary, similar to HDFS-11129. 
> I have run this test many times locally and it always passed. We should do 
> the same for {{TestPendingInvalidateBlock}}.






[jira] [Updated] (HDFS-11126) Ozone: Add small file support RPC

2016-11-15 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11126:

Attachment: HDFS-11126-HDFS-7240.002.patch

Updated patch to take care of the review comments.

> Ozone: Add small file support RPC
> -
>
> Key: HDFS-11126
> URL: https://issues.apache.org/jira/browse/HDFS-11126
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11126-HDFS-7240.001.patch, 
> HDFS-11126-HDFS-7240.002.patch
>
>
> Add an RPC that sends both data and metadata together in one call. This is 
> useful when we want to read and write small files, say less than 1 MB. This 
> API is very useful for ozone and cBlocks (HDFS-8)
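As a rough illustration of what "data and metadata together in one RPC" can 
look like at the client level, here is a sketch; the names are hypothetical, 
not the actual Ozone protocol in the patch.
{code}
// Hypothetical shape of a combined small-file RPC; the names are
// illustrative, not the actual HDFS-11126 protocol.
public interface SmallFileCallsSketch {
  // One round trip writes the key's metadata and its bytes together.
  void putSmallFile(String containerName, String key,
      byte[] metadata, byte[] data);

  // One round trip returns metadata and data together, avoiding the usual
  // lookup-then-read sequence for files under ~1 MB.
  SmallFile getSmallFile(String containerName, String key);

  final class SmallFile {
    public final byte[] metadata;
    public final byte[] data;
    public SmallFile(byte[] metadata, byte[] data) {
      this.metadata = metadata;
      this.data = data;
    }
  }
}
{code}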






[jira] [Updated] (HDFS-11142) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2016-11-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11142:
-
Status: Patch Available  (was: Open)

Attaching an initial patch.

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-11142
> URL: https://issues.apache.org/jira/browse/HDFS-11142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11142.001.patch, test-fails-log.txt
>
>
> The test 
> {{TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit}} fails 
> in trunk. I looked into this; it seems a long GC pause caused the datanode to 
> be shut down unexpectedly while it was doing the large block report, and then 
> an NPE was thrown in the test. The related output log:
> {code}
> 2016-11-15 11:31:18,889 [DataNode: 
> [[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
>  
> [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
>   heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
> (BPServiceActor.java:blockReport(415)) - Successfully sent block report 
> 0x2ae5dd91bec02273,  containing 2 storage report(s), of which we sent 2. The 
> reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate 
> and 49 msecs for RPC and NN processing. Got back one command: 
> FinalizeCommand/5.
> 2016-11-15 11:31:18,890 [DataNode: 
> [[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
>  
> [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
>   heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(696)) - Got finalize command 
> for block pool BP-814229154-172.17.0.3-1479209475497
> 2016-11-15 11:31:24,026 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@97e93f1] INFO  
> util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM 
> or host machine (eg GC): pause of approximately 4936ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
> 2016-11-15 11:31:24,026 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@5a4bef8] INFO  
> util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM 
> or host machine (eg GC): pause of approximately 4898ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
> 2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1943)) - Shutting down the Mini HDFS Cluster
> 2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdownDataNodes(1983)) - Shutting down DataNode 0
> {code}
> The stack trace:
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:97)
> {code}
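One way a test can defend against this kind of pause-induced shutdown is to 
wait until the datanode is fully up again before asserting. A sketch follows, 
assuming the NPE stems from the cluster's datanode list after the shutdown; 
this is an illustration, not the attached patch.
{code}
// Sketch of guarding against a GC-pause-induced datanode shutdown;
// illustration only, not the HDFS-11142 patch.
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.test.GenericTestUtils;

public class WaitForDataNodeSketch {
  static void waitForLiveDataNode(final MiniDFSCluster cluster)
      throws TimeoutException, InterruptedException {
    GenericTestUtils.waitFor(
        () -> !cluster.getDataNodes().isEmpty()
            && cluster.getDataNodes().get(0).isDatanodeFullyStarted(),
        100,      // poll every 100 ms
        30000);   // give up after 30 s
  }
}
{code}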






[jira] [Updated] (HDFS-11142) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2016-11-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11142:
-
Attachment: HDFS-11142.001.patch

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-11142
> URL: https://issues.apache.org/jira/browse/HDFS-11142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11142.001.patch, test-fails-log.txt
>
>
> The test 
> {{TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit}} fails 
> in trunk. I looked into this; it seems a long GC pause caused the datanode to 
> be shut down unexpectedly while it was doing the large block report, and then 
> an NPE was thrown in the test. The related output log:
> {code}
> 2016-11-15 11:31:18,889 [DataNode: 
> [[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
>  
> [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
>   heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
> (BPServiceActor.java:blockReport(415)) - Successfully sent block report 
> 0x2ae5dd91bec02273,  containing 2 storage report(s), of which we sent 2. The 
> reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate 
> and 49 msecs for RPC and NN processing. Got back one command: 
> FinalizeCommand/5.
> 2016-11-15 11:31:18,890 [DataNode: 
> [[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
>  
> [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
>   heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(696)) - Got finalize command 
> for block pool BP-814229154-172.17.0.3-1479209475497
> 2016-11-15 11:31:24,026 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@97e93f1] INFO  
> util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM 
> or host machine (eg GC): pause of approximately 4936ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
> 2016-11-15 11:31:24,026 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@5a4bef8] INFO  
> util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM 
> or host machine (eg GC): pause of approximately 4898ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
> 2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1943)) - Shutting down the Mini HDFS Cluster
> 2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdownDataNodes(1983)) - Shutting down DataNode 0
> {code}
> The stack trace:
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:97)
> {code}






[jira] [Commented] (HDFS-11075) Provide a tool to help convert existing file of default 3x replication to EC striped layout

2016-11-15 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669470#comment-15669470
 ] 

SammiChen commented on HDFS-11075:
--

Thanks [~tasanuma0829] and [~demongaorui] for notifying me of the duplication! 
Sure, I will close this JIRA and would like to take over HDFS-7717. Thanks!

> Provide a tool to help convert existing file of default 3x replication to EC 
> striped layout
> --
>
> Key: HDFS-11075
> URL: https://issues.apache.org/jira/browse/HDFS-11075
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: SammiChen
>Assignee: SammiChen
>
> Setting an erasure coding policy on an existing 3x replication file, or 
> changing the existing erasure coding policy of a file to another policy, will 
> force the file to be transformed from one redundancy layout into another. The 
> conversion will usually be time consuming. This task is to provide a new 
> tool, or improve an existing one, to help the HDFS administrator smooth this 
> conversion process.






[jira] [Commented] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669449#comment-15669449
 ] 

Hadoop QA commented on HDFS-10368:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 42 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 54s{color} | {color:orange} root: The patch generated 19 new + 1810 
unchanged - 15 fixed = 1829 total (was 1825) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.TestSetTimes |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10368 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839107/HDFS-10368-02.patch |
| Optional Tests |  asflicense  

[jira] [Commented] (HDFS-10802) [SPS]: Add satisfyStoragePolicy API in HdfsAdmin

2016-11-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669388#comment-15669388
 ] 

Rakesh R commented on HDFS-10802:
-

Thank you [~yuanbo] for the good work.
It looks like the test case failures are unrelated to the patch.

[~umamaheswararao], I think the patch is ready to go in. Please let me know 
your feedback. Thanks!

> [SPS]: Add satisfyStoragePolicy API in HdfsAdmin
> 
>
> Key: HDFS-10802
> URL: https://issues.apache.org/jira/browse/HDFS-10802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Uma Maheswara Rao G
>Assignee: Yuanbo Liu
> Attachments: HDFS-10802-HDFS-10285.001.patch, 
> HDFS-10802-HDFS-10285.002.patch, HDFS-10802-HDFS-10285.003.patch, 
> HDFS-10802-HDFS-10285.004.patch, HDFS-10802-HDFS-10285.005.patch, 
> HDFS-10802.001.patch, editsStored
>
>
> This JIRA is to track the work of adding a user/admin API for calling 
> satisfyStoragePolicy.
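For reference, the caller-side shape being proposed would presumably look like 
the following sketch; the exact signature in the attached patches may differ.
{code}
// Caller-side sketch of the proposed HdfsAdmin API; the exact signature
// in the attached patches may differ.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class SatisfyStoragePolicySketch {
  static void requestSatisfy(String file) throws Exception {
    Configuration conf = new Configuration();
    HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://localhost:9000"), conf);
    // Asks the namenode to schedule block movements so the file's blocks
    // match its storage policy (the SPS feature tracked by HDFS-10285).
    admin.satisfyStoragePolicy(new Path(file));
  }
}
{code}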






[jira] [Commented] (HDFS-10802) [SPS]: Add satisfyStoragePolicy API in HdfsAdmin

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669349#comment-15669349
 ] 

Hadoop QA commented on HDFS-10802:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
14s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-10285 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10802 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839104/HDFS-10802-HDFS-10285.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux b63c40613f8a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 5686f56 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17581/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17581/testReport/ |
| modules | C: 

[jira] [Commented] (HDFS-11075) Provide a tool to help convert existing file of default 3x replication to EC striped layout

2016-11-15 Thread Rui Gao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669314#comment-15669314
 ] 

Rui Gao commented on HDFS-11075:


Hi [~Sammi] and [~tasanuma0829], yes, this Jira is a duplicate of HDFS-7717.

How about closing this one? I will then reassign HDFS-7717 to you, Sammi.

> Provide a tool to help convert existing file of default 3x replication to EC 
> striped layout
> --
>
> Key: HDFS-11075
> URL: https://issues.apache.org/jira/browse/HDFS-11075
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: SammiChen
>Assignee: SammiChen
>
> Setting an erasure coding policy on an existing 3x replication file, or 
> changing the existing erasure coding policy of a file to another policy, will 
> force the file to be transformed from one redundancy layout into another. The 
> conversion will usually be time consuming. This task is to provide a new 
> tool, or improve an existing one, to help the HDFS administrator smooth this 
> conversion process.






[jira] [Commented] (HDFS-11058) Implement 'hadoop fs -df' command for ViewFileSystem

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669275#comment-15669275
 ] 

Hadoop QA commented on HDFS-11058:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m  8s{color} 
| {color:red} root generated 3 new + 688 unchanged - 3 fixed = 691 total (was 
691) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} root: The patch generated 0 new + 143 unchanged - 4 
fixed = 143 total (was 147) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 80m 
43s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewfsFileStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11058 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839089/HDFS-11058.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cf9dcb9792d8 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 61c0bed |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17578/artifact/patchprocess/diff-compile-javac-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17578/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17578/testReport/ |
| modules | C: 

[jira] [Commented] (HDFS-11134) Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669249#comment-15669249
 ] 

Hadoop QA commented on HDFS-11134:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestEncryptionZones |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11134 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839097/HDFS-11134.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b4c8356d78c6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 61c0bed |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17580/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17580/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17580/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock
> -
>
> Key: HDFS-11134
> URL: https://issues.apache.org/jira/browse/HDFS-11134
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: 

[jira] [Commented] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-11-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669205#comment-15669205
 ] 

Rakesh R commented on HDFS-10368:
-

Attached a new patch addressing [~andrew.wang]'s comments.

> Erasure Coding: Deprecate replication-related config keys
> -
>
> Key: HDFS-10368
> URL: https://issues.apache.org/jira/browse/HDFS-10368
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10368-00.patch, HDFS-10368-01.patch, 
> HDFS-10368-02.patch
>
>
> This jira is to revisit the replication-based config keys and deprecate them 
> (if necessary) in order to make them more meaningful.
> Please refer to the [discussion 
> thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]






[jira] [Commented] (HDFS-11126) Ozone: Add small file support RPC

2016-11-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669206#comment-15669206
 ] 

Xiaoyu Yao commented on HDFS-11126:
---

The patch looks good to me as well. Just a few nits below; +1 once they are fixed.

*ContainerProtocolCalls.java*
Line 46: “Implementation of all container protocol calls performed by .” Can 
you fill in the missing part after “by”?

*FileUtils.java*
Line 41: NIT: getputFileResponse should be getPutFileResponse
Line 60: getSmallFileResponse -> getGetSmallFileResponse? This looks a bit 
strange though. 

*ContainerTestHelper.java*
Line 116: Missing javadoc for the pipeline parameter


> Ozone: Add small file support RPC
> -
>
> Key: HDFS-11126
> URL: https://issues.apache.org/jira/browse/HDFS-11126
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11126-HDFS-7240.001.patch
>
>
> Add an RPC that sends both data and metadata together in one call. This is 
> useful when we want to read and write small files, say less than 1 MB. This 
> API is very useful for ozone and cBlocks (HDFS-8)






[jira] [Commented] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-11-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669203#comment-15669203
 ] 

Rakesh R commented on HDFS-10368:
-

Thank you. Used {{TimeUnit.MILLISECONDS.sleep}}.

> Erasure Coding: Deprecate replication-related config keys
> -
>
> Key: HDFS-10368
> URL: https://issues.apache.org/jira/browse/HDFS-10368
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10368-00.patch, HDFS-10368-01.patch, 
> HDFS-10368-02.patch
>
>
> This jira is to revisit the replication-based config keys and deprecate them 
> (if necessary) in order to make them more meaningful.
> Please refer to the [discussion 
> thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]






[jira] [Commented] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669201#comment-15669201
 ] 

Hadoop QA commented on HDFS-11140:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11140 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839094/HDFS-11140.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 48fa9716ddc5 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 61c0bed |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17579/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17579/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17579/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Directory Scanner should log startup message time correctly
> ---
>
> Key: HDFS-11140
> URL: https://issues.apache.org/jira/browse/HDFS-11140
> Project: Hadoop HDFS
>  Issue 

[jira] [Updated] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-11-15 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10368:

Attachment: HDFS-10368-02.patch

> Erasure Coding: Deprecate replication-related config keys
> -
>
> Key: HDFS-10368
> URL: https://issues.apache.org/jira/browse/HDFS-10368
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10368-00.patch, HDFS-10368-01.patch, 
> HDFS-10368-02.patch
>
>
> This jira is to revisit the replication-based config keys and deprecate them 
> (if necessary) in order to make them more meaningful.
> Please refer to the [discussion 
> thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5692) viewfs shows resolved path in FileNotFoundException

2016-11-15 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669181#comment-15669181
 ] 

Manoj Govindassamy commented on HDFS-5692:
--

Thanks for the views. Yes, {{NotInMountPointException}} would be misleading, as 
you explained. In peer discussions I heard views against option 1. So, shall 
we wait a bit longer to hear other alternatives before moving ahead with Option 1?

Another option: we can make {{ViewFileSystem#listStatus}} catch the 
FileNotFoundException and rethrow it with its own version of the Path, one that 
is not fully resolved. 
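A minimal sketch of that second option; the surrounding names ({{res}}, {{remainingPath}}, {{f}}) stand in for the usual mount-resolution fields and are illustrative, not the eventual patch:
{code}
// Hypothetical sketch inside ViewFileSystem#listStatus.
try {
  return res.targetFileSystem.listStatus(res.remainingPath);
} catch (java.io.FileNotFoundException fnfe) {
  // Rethrow with the unresolved viewfs path so callers see /nn1/a/b
  // rather than the mount-target path /a/b.
  java.io.FileNotFoundException wrapped =
      new java.io.FileNotFoundException("File " + f + " does not exist.");
  wrapped.initCause(fnfe);
  throw wrapped;
}
{code}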


> viewfs shows resolved path in FileNotFoundException
> ---
>
> Key: HDFS-5692
> URL: https://issues.apache.org/jira/browse/HDFS-5692
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.2.0
>Reporter: Keith Turner
>Assignee: Manoj Govindassamy
>
> With the following config, if I call fs.listStatus("/nn1/a/b") when 
> {{/nn1/a/b}} does not exist then ...
> {noformat}
> <configuration>
>   <property>
>     <name>fs.default.name</name>
>     <value>viewfs:///</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.default.link./nn1</name>
>     <value>hdfs://host1:9000</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.default.link./nn2</name>
>     <value>hdfs://host2:9000</value>
>   </property>
> </configuration>
> {noformat}
> I will see an error message like the following.  
> {noformat}
> java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}
> I think it would be useful for ViewFS to wrap the FileNotFoundException from 
> the inner filesystem, giving an error message like the following.  The 
> following error message has the resolved and unresolved paths which is very 
> useful for debugging.
> {noformat}
> java.io.FileNotFoundException: File /nn1/a/b does not exist.
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> Caused by: java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10802) [SPS]: Add satisfyStoragePolicy API in HdfsAdmin

2016-11-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10802:
--
Attachment: HDFS-10802-HDFS-10285.005.patch

Thanks for your comments, [~rakeshr].
Uploaded the v5 patch.

> [SPS]: Add satisfyStoragePolicy API in HdfsAdmin
> 
>
> Key: HDFS-10802
> URL: https://issues.apache.org/jira/browse/HDFS-10802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Uma Maheswara Rao G
>Assignee: Yuanbo Liu
> Attachments: HDFS-10802-HDFS-10285.001.patch, 
> HDFS-10802-HDFS-10285.002.patch, HDFS-10802-HDFS-10285.003.patch, 
> HDFS-10802-HDFS-10285.004.patch, HDFS-10802-HDFS-10285.005.patch, 
> HDFS-10802.001.patch, editsStored
>
>
> This JIRA is to track the work of adding a user/admin API for calling 
> satisfyStoragePolicy.
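A hypothetical usage sketch of the API being added; the method name comes from the JIRA title and the signature is assumed for illustration:
{code}
// Assumed usage; the exact signature is defined by the patch.
Configuration conf = new Configuration();
HdfsAdmin admin = new HdfsAdmin(FileSystem.getDefaultUri(conf), conf);
// Ask the SPS to move the blocks of this path to match its storage policy.
admin.satisfyStoragePolicy(new Path("/data/cold"));
{code}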



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669168#comment-15669168
 ] 

Lei (Eddy) Xu commented on HDFS-9668:
-

Hi, [~jingcheng...@intel.com]

Thanks for addressing the feedback quickly. I have a few questions regarding 
the latest patch {{v-23}}. 

* Could you help clarify what {{datasetWriteLock}} protects? I thought 
it protected {{FsDatasetImpl#volumes}}. But for 
{{FsDatasetImpl#volumeMap}}, some functions like {{createRbw}} and 
{{finalizeBlock}} use {{readLock}} to protect {{volumeMap#add}}, while 
{{moveBlock()}} uses {{writeLock}} to protect {{finalizedReplica()}}, which 
seems to protect {{volumeMap.add()}} as well?

* And {{unfinalizedBlock}} uses the {{read lock}} to protect 
{{volumeMap.remove()}}?
* In a few places, it looks to me that we should use the {{read lock}}, i.e., 
{{FsDatasetImpl#getBlockReports()}}, {{getFinalizedBlocks()}}, 
{{moveBlockAcrossStorage()}}, {{moveBlock()}}. 

* If the {{replicaMap}} is protected by the {{read/write lock}}, does 
{{replicaMap}} still need the {{mutex}}?
* Should {{moveBlock()}} hold the {{readLock}} and the {{block lock}}?

* [~xiaochen] also mentioned that {{private finalizeReplica()}} is called from 
different places with different locking policies. For example, {{public void 
finalizeBlock()}} uses the read lock while {{moveBlock()}} uses the write lock.

Overall, this is great work that would benefit everyone. Shall we reach some 
consensus on the purpose of each lock ({{read/write}} locks and 
{{ReplicaMap#mutex}}) before we commit this?

Ping [~arpitagarwal], do the above concerns make sense?
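For reference, the general pattern under discussion, not the patch code itself: a {{ReentrantReadWriteLock}} lets concurrent readers proceed while mutations take the exclusive lock. Names below are illustrative:
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

private final ReentrantReadWriteLock datasetLock = new ReentrantReadWriteLock();

// Lookups that do not mutate shared state can run concurrently.
ReplicaInfo getReplica(String bpid, long blockId) {
  datasetLock.readLock().lock();
  try {
    return volumeMap.get(bpid, blockId);
  } finally {
    datasetLock.readLock().unlock();
  }
}

// Mutations such as volumeMap.add()/remove() need the exclusive write lock.
void addReplica(String bpid, ReplicaInfo replica) {
  datasetLock.writeLock().lock();
  try {
    volumeMap.add(bpid, replica);
  } finally {
    datasetLock.writeLock().unlock();
  }
}
{code}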

> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, 
> HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, 
> HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, 
> HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, 
> HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, 
> HDFS-9668-23.patch, HDFS-9668-23.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, 
> HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, 
> HDFS-9668-9.patch, execution_time.png
>
>
> During the HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observe many 
> long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part 
> of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> 

[jira] [Created] (HDFS-11145) Implement getTrashRoot() for ViewFileSystem

2016-11-15 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11145:
-

 Summary: Implement getTrashRoot() for ViewFileSystem
 Key: HDFS-11145
 URL: https://issues.apache.org/jira/browse/HDFS-11145
 Project: Hadoop HDFS
  Issue Type: Task
  Components: federation
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


ViewFileSystem doesn't have a custom implementation of 
FileSystem#getTrashRoot(Path) yet, and hence, irrespective of the Path passed in, 
ViewFileSystem always returns the user-specific .Trash directory. 

ViewFileSystem should implement getTrashRoot(Path) and delegate the call to the 
respective mounted file system, which can then examine the EZ or other 
criteria and return the proper Trash directory. 
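A minimal sketch of the delegation described above; {{fsState.resolve()}} stands in for the internal mount-table lookup used by the other delegating methods, and the actual behavior is defined by the eventual patch:
{code}
// Hypothetical override in ViewFileSystem; names are illustrative.
@Override
public Path getTrashRoot(Path path) {
  try {
    InodeTree.ResolveResult<FileSystem> res =
        fsState.resolve(getUriPath(path), true);
    // The mounted file system knows about encryption zones etc.
    return res.targetFileSystem.getTrashRoot(res.remainingPath);
  } catch (Exception e) {
    throw new NotInMountpointException(path, "getTrashRoot");
  }
}
{code}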



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11108) Ozone: use containers with the state machine

2016-11-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669120#comment-15669120
 ] 

Xiaoyu Yao commented on HDFS-11108:
---

+1 pending Jenkins and the dependency on HDFS-11081.

> Ozone: use containers with the state machine
> 
>
> Key: HDFS-11108
> URL: https://issues.apache.org/jira/browse/HDFS-11108
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11108-HDFS-7240.001.patch, 
> HDFS-11108-HDFS-7240.002.patch, HDFS-11108-HDFS-7240.003.patch
>
>
> Use containers via the new added state machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11138) Block Storage : add block storage server

2016-11-15 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669116#comment-15669116
 ] 

Anu Engineer commented on HDFS-11138:
-

Thank you for the patch; overall it looks very good. I have a bunch of very 
minor comments. Since the patch is big, I might add more comments a little later. 
Please feel free to post a second patch if you like. Also, for some reason we 
did not get a Jenkins run, which would have given you all the checkstyle and 
other issues.

1. Could you please run checkstyle? I see a bunch of checkstyle warnings.
2. {{CBlockConfigKeys.java:21}} The comment mentions ozone; I think you meant cBlocks.

3. {{CBlockConfigKeys.java:39}}
//The port on CBlockManager node for jSCSI to ask
{noformat}
public static final int DFS_CBLOCK_JSCSI_PORT_DEFAULT = 50701;
public static final int DFS_CBLOCK_RPCSERVICE_PORT_DEFAULT = 50700;
{noformat}
I realize that you are trying to be consistent; however, trunk has changed its 
port map.
Namenode ports 
--- 
50470 --> 9871
50070 --> 9870
8020 --> 9820

Secondary NN ports 
--- 
50091 --> 9869
50090 --> 9868

Datanode ports 
--- 
50020 --> 9867
50010 --> 9866
50475 --> 9865
50075 --> 9864
I know ozone still uses the old port map; we might have to go and fix it. You 
might want to use a port that is free but closer to the 98xx series.

4. {{public static final String DFS_CBLOCK_RPCSERVICE_IP_DEFAULT = 
"127.0.0.1";}}
Did you want this to be 127.0.0.1 or 0.0.0.0? I would think you might want to 
listen on 0.0.0.0.
5.
{noformat}
58    public static final String DFS_CBLOCK_SERVICE_DELETE_FORCE_KEY =
59        "dfs.cblock.service.delete-force";
60    public static final boolean DFS_CBLOCK_SERVICE_DELETE_FORCE_DEFAULT =
61        false;
{noformat}

Slightly confused about how we use this: we have RPCs which take forceDelete as 
a parameter, and we also have forceDelete as a config key. I am trying to decide 
which one takes precedence and why we need both.

6. More of a comment; I am not asking for a change here. We probably have 2 RPCs 
from the jSCSI server to the cBlock server: getContainers and getLease. While I 
like the fact that you have separated the interface that the CLI depends upon 
from the one jSCSI depends upon, I was wondering if it is extra work. But now 
that it is done, we should probably put it in.

7. {{ public void join()}} After we catch an exception, would you please add an 
interrupt call?  
{noformat} 
catch (InterruptedException e) {
  Thread.currentThread().interrupt();
}
{noformat}
This is one of the dark corners of Java threading.

8. {{start, stop and join}} We have a null check only in stop; I don't know if 
we need it.

9. Nit: for Log.info, you can use argument placeholders (e.g. {{LOG.info("volume: {}", name)}}, names illustrative) instead of concatenating with the + operator.

10. {{ContainerDescriptor}} Can we please add an index variable that tells us 
at which index this container sits in the list of containers.

11. {{VolumeDescriptor.java}}
I am not able to understand why we have to maintain this map. Can you please 
explain the use case? 
 {{private HashMap containerMap;}}

12. May I suggest that instead of converting this class to JSON and then 
persisting it to a local file -- which we happen to do in the ozone test 
implementation -- we take the protobuf class as is and then do toByteArray, 
which will give you a byte stream that you can easily persist to LevelDB. This 
works well for both keys and values. 

13. We have a debugging line left in the code at line 204:
 {{System.err.println("VolumeDesc...parse():" + jsonString);}}

14. {{CBlockServiceProtocol.java}}
 {noformat}
33    void createVolume(String userName, String volumeName,
34        long volumeSize, int blockSize) throws IOException;
35
36    void createVolume(String userName, String volumeName,
37        long volumeSize) throws IOException;
{noformat}
 Do we need both versions of create? If we set up blockSize to have a 
default size, then either the client can set the right value or you can rely on 
the default values in the protoc. Same comment about delete.
You can easily remove the second call by moving this to the client side.
{noformat}
try {
51      if (request.hasBlockSize()) {
52        impl.createVolume(request.getUserName(), request.getVolumeName(),
53            request.getVolumeSize(), request.getBlockSize());
54      } else {
55        impl.createVolume(request.getUserName(), request.getVolumeName(),
56            request.getVolumeSize());
57      }
{noformat} 

15. {{MountVolumeResponse}} Return a structure with an index in 
{{getContainerList}}. Please add some comments to this class; checkstyle is 
going to complain about it. The license header is also missing in this class.

16. {{CBlockClientServerProtocolServerSideTranslatorPB}}
{noformat}
62  for (int i=0;i

[jira] [Commented] (HDFS-11117) Refactor striped file tests to allow flexibly test erasure coding policy

2016-11-15 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669113#comment-15669113
 ] 

Kai Zheng commented on HDFS-7:
--

Thanks [~Sammi] for the update. It looks like most of the checkstyle issues 
could be addressed, and the {{TestDFSStripedOutputStreamWithFailure}} family 
would also need to be refactored for this purpose.

> Refactor striped file tests to allow flexibly test erasure coding policy
> 
>
> Key: HDFS-7
> URL: https://issues.apache.org/jira/browse/HDFS-7
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
> Attachments: HDFS-7-v1.patch, HDFS-7-v2.patch, 
> HDFS-7-v3.patch, HDFS-7-v4.patch
>
>
> This task is going to refactor the current striped file test structures, 
> especially the {{StripedFileTestUtil}} file, which is used in many striped 
> file test cases. All current striped file test cases support only one erasure 
> coding policy, the default RS-DEFAULT-6-3-64k policy. The goal of the 
> refactor is to make the structures more convenient for supporting other 
> erasure coding policies, such as the XOR policy. 
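A minimal sketch of the direction described, with hypothetical method names (deriving test expectations from the policy instead of hard-coding the default):
{code}
// Illustrative only: a test utility parameterized by the EC policy.
public static void verifyStripedFile(DistributedFileSystem fs, Path file,
    ErasureCodingPolicy policy) throws IOException {
  int dataBlocks = policy.getNumDataUnits();     // 6 for RS-DEFAULT-6-3-64k
  int parityBlocks = policy.getNumParityUnits(); // 3 for RS-DEFAULT-6-3-64k
  int cellSize = policy.getCellSize();           // 64k for RS-DEFAULT-6-3-64k
  // ... assertions computed from (dataBlocks, parityBlocks, cellSize),
  // so the same checks work for XOR or any other policy ...
}
{code}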



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669080#comment-15669080
 ] 

Xiaoyu Yao commented on HDFS-11140:
---

Thanks [~linyiqun] for the update. +1 for v2 patch pending Jenkins.

> Directory Scanner should log startup message time correctly
> ---
>
> Key: HDFS-11140
> URL: https://issues.apache.org/jira/browse/HDFS-11140
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11140.001.patch, HDFS-11140.002.patch
>
>
> When DirectoryScanner is enabled, one can see the following log:
> {noformat}
> INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic 
> Directory Tree Verification scan starting at 1479189895562ms with interval of 
> 2160ms
> {noformat}
> The first is epoch time and should not have 'ms'. Or better yet, we can 
> change it to a human-readable format.
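For reference, a sketch of the human-readable formatting suggested above; field names ({{scanPeriodMsecs}}) and the logger style are assumptions, and the actual patch may format differently:
{code}
// Illustrative: render the epoch-millis start time as a date string.
long startMs = 1479189895562L;
String startTime = new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss,SSS")
    .format(new java.util.Date(startMs));
LOG.info("Periodic Directory Tree Verification scan starting at "
    + startTime + " with interval of " + scanPeriodMsecs + "ms");
{code}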



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669080#comment-15669080
 ] 

Xiaoyu Yao edited comment on HDFS-11140 at 11/16/16 2:08 AM:
-

Thanks [~linyiqun] for the update. +1 for v2 patch pending Jenkins.


was (Author: xyao):
Thanks [~linyiqun] for the udpate. +1 for v2 patch pending Jenkins.

> Directory Scanner should log startup message time correctly
> ---
>
> Key: HDFS-11140
> URL: https://issues.apache.org/jira/browse/HDFS-11140
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11140.001.patch, HDFS-11140.002.patch
>
>
> When DirectoryScanner is enabled, one can see the following log:
> {noformat}
> INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic 
> Directory Tree Verification scan starting at 1479189895562ms with interval of 
> 2160ms
> {noformat}
> The first is epoch time and should not have 'ms'. Or better yet, we can 
> change it to a human-readable format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669076#comment-15669076
 ] 

Xiaoyu Yao commented on HDFS-11081:
---

Thanks [~anu] for the update. 

There are still two checkstyle issues that you can fix at commit time. I don't 
think we need to trigger another Jenkins run for that. 
For the findbugs issue, we can address it in a follow-up ticket that adds 
exclusions for those generated classes.

+1 for patch v5. 

> Ozone:SCM: Add support for registerNode in datanode
> ---
>
> Key: HDFS-11081
> URL: https://issues.apache.org/jira/browse/HDFS-11081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: Design note for StateMachine.pdf, 
> HDFS-11081-HDFS-7240.001.patch, HDFS-11081-HDFS-7240.002.patch, 
> HDFS-11081-HDFS-7240.003.patch, HDFS-11081-HDFS-7240.004.patch, 
> HDFS-11081-HDFS-7240.005.patch
>
>
> Add support for registerDatanode in Datanode. This allows the container to 
> use SCM via Container datanode protocols.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11134) Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock

2016-11-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669072#comment-15669072
 ] 

Yiqun Lin commented on HDFS-11134:
--

{quote}
can you update the patch..?
{quote}
Done. Posted the v002 patch. This will be a safer way.

> Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock
> -
>
> Key: HDFS-11134
> URL: https://issues.apache.org/jira/browse/HDFS-11134
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11134.001.patch, HDFS-11134.002.patch
>
>
> I found two bind exceptions causing unit test failures in past Jenkins 
> builds. One is in test {{TestRenameWhileOpen}}; the other is in 
> {{TestPendingInvalidateBlock}}.
> Here are the stack infos of {{TestRenameWhileOpen}} (I can't find the stack 
> infos of {{TestPendingInvalidateBlock}} now, since it happened too long 
> ago, but I'm sure it failed due to a bind exception.)
> {code}
> java.net.BindException: Problem binding to [localhost:42155] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestRenameWhileOpen.testWhileOpenRenameToNonExistentDirectory(TestRenameWhileOpen.java:325)
> {code}
> Specifying the namenode port here is not necessary; this is similar to 
> HDFS-11129. I have run this test many times locally and it always passed. We 
> should do the same for test {{TestPendingInvalidateBlock}}.
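A sketch of the kind of change implied above (letting MiniDFSCluster pick a free port rather than pinning one; the exact test code is in the patch):
{code}
// Illustrative: a fixed port can collide with another process on the host.
// cluster = new MiniDFSCluster.Builder(conf).nameNodePort(42155).build();

// Omitting nameNodePort (or passing 0) lets the OS assign a free port.
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
{code}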



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11134) Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock

2016-11-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11134:
-
Attachment: HDFS-11134.002.patch

> Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock
> -
>
> Key: HDFS-11134
> URL: https://issues.apache.org/jira/browse/HDFS-11134
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11134.001.patch, HDFS-11134.002.patch
>
>
> I found two bind exceptions causing unit test failures in past Jenkins 
> builds. One is in test {{TestRenameWhileOpen}}; the other is in 
> {{TestPendingInvalidateBlock}}.
> Here are the stack infos of {{TestRenameWhileOpen}} (I can't find the stack 
> infos of {{TestPendingInvalidateBlock}} now, since it happened too long 
> ago, but I'm sure it failed due to a bind exception.)
> {code}
> java.net.BindException: Problem binding to [localhost:42155] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestRenameWhileOpen.testWhileOpenRenameToNonExistentDirectory(TestRenameWhileOpen.java:325)
> {code}
> Specifying the namenode port here is not necessary; this is similar to 
> HDFS-11129. I have run this test many times locally and it always passed. We 
> should do the same for test {{TestPendingInvalidateBlock}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11144) TestFileCreationDelete#testFileCreationDeleteParent fails with bind exception

2016-11-15 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-11144:
---

 Summary: TestFileCreationDelete#testFileCreationDeleteParent fails 
with bind exception
 Key: HDFS-11144
 URL: https://issues.apache.org/jira/browse/HDFS-11144
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula


{noformat}
java.net.BindException: Problem binding to [localhost:57908] 
java.net.BindException: Address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:535)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
at 
org.apache.hadoop.hdfs.TestFileCreationDelete.testFileCreationDeleteParent(TestFileCreationDelete.java:77)
{noformat}

 *Reference* 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/testReport/junit/org.apache.hadoop.hdfs/TestFileCreationDelete/testFileCreationDeleteParent/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.

2016-11-15 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669056#comment-15669056
 ] 

Brahma Reddy Battula commented on HDFS-11087:
-

That would be a good idea. Everybody would be aware of it before committing to 
branch-2.7, and we might not miss any entry.

bq. Should we consider adding a pre-commit hook?
Can we do this? 

I even sent a [mail in 
common-dev|http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201609.mbox/browser]
 (sub: Updation of Change.txt in branch-2.7, on Sept-30), but I did not hear any 
thoughts there.
If we need any discussion on the mailing list, maybe we can use this one?

> NamenodeFsck should check if the output writer is still writable.
> -
>
> Key: HDFS-11087
> URL: https://issues.apache.org/jira/browse/HDFS-11087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.5
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
> Fix For: 2.8.0, 2.9.0, 2.7.4
>
> Attachments: HDFS-11087-branch-2.000.patch, 
> HDFS-11087-branch-2.001.patch, HDFS-11087.branch-2.000.patch
>
>
> {{NamenodeFsck}} keeps running even after the client was interrupted. So if 
> you start {{fsck /}} on a large namespace and kill the client, the NameNode 
> will keep traversing the tree for hours although there is nobody to receive 
> the result. 
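One way to detect the disconnected client described above, sketched under the assumption that fsck writes its output through a {{PrintWriter}} (the actual fix is in the attached patches):
{code}
// Illustrative: PrintWriter#checkError() flushes the stream and returns true
// once the underlying stream has failed, e.g. because the client hung up.
private void checkOutputStillWritable(PrintWriter out) throws IOException {
  if (out.checkError()) {
    throw new IOException("fsck output is no longer writable; "
        + "client likely disconnected, aborting namespace traversal");
  }
}
{code}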



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669017#comment-15669017
 ] 

Yiqun Lin edited comment on HDFS-11140 at 11/16/16 1:44 AM:


Thanks for the comment, [~xyao]. Posted the new patch to address the comment.


was (Author: linyiqun):
Thanks the comment, [~xyao]. Post the comments to address the comment.

> Directory Scanner should log startup message time correctly
> ---
>
> Key: HDFS-11140
> URL: https://issues.apache.org/jira/browse/HDFS-11140
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11140.001.patch, HDFS-11140.002.patch
>
>
> When DirectoryScanner is enabled, one can see the following log:
> {noformat}
> INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic 
> Directory Tree Verification scan starting at 1479189895562ms with interval of 
> 2160ms
> {noformat}
> The first is epoch time and should not have 'ms'. Or better yet, we can 
> change it to a human-readable format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11140:
-
Attachment: HDFS-11140.002.patch

Thanks for the comment, [~xyao]. Posted the new patch to address the comment.

> Directory Scanner should log startup message time correctly
> ---
>
> Key: HDFS-11140
> URL: https://issues.apache.org/jira/browse/HDFS-11140
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11140.001.patch, HDFS-11140.002.patch
>
>
> When DirectoryScanner is enabled, one can see the following log:
> {noformat}
> INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic 
> Directory Tree Verification scan starting at 1479189895562ms with interval of 
> 2160ms
> {noformat}
> The first is epoch time and should not have 'ms'. Or better yet, we can 
> change it to a human-readable format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11090) Leave safemode immediately if all blocks have reported in

2016-11-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668959#comment-15668959
 ] 

Yiqun Lin commented on HDFS-11090:
--

Agree with [~andrew.wang]. If we are sure we want to make a change, I think it's 
better to improve the logic of leaving safemode when the cluster is empty.

> Leave safemode immediately if all blocks have reported in
> -
>
> Key: HDFS-11090
> URL: https://issues.apache.org/jira/browse/HDFS-11090
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yiqun Lin
> Attachments: HDFS-11090.001.patch
>
>
> Startup safemode is triggered by two thresholds: % blocks reported in, and 
> min # datanodes. It's extended by an interval (default 30s) until these two 
> thresholds are met.
> Safemode extension is helpful when the cluster has data, and the default % 
> blocks threshold (0.99) is used. It gives DNs a little extra time to report 
> in and thus avoid unnecessary replication work.
> However, we can leave startup safemode early if 100% of blocks have reported 
> in.
> Note that operators sometimes change the % blocks threshold to > 1 to never 
> automatically leave safemode. We should maintain this behavior.
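For reference, the threshold mentioned in the last note is controlled by a standard key; a value above 1 can never be satisfied, so the NameNode stays in safemode until an operator leaves it manually:
{noformat}
<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>1.5</value>
</property>
{noformat}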



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11058) Implement 'hadoop fs -df' command for ViewFileSystem

2016-11-15 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11058:
--
Attachment: HDFS-11058.03.patch

Thanks for the review, [~andrew.wang]. Incorporated your review comments. Please 
take a look.
# Sure, moved ViewFsMountPoint back as a nested class under ViewFileSystem. 
# Now that ViewFileSystemMountPoint is not going to be a public class, 
interface annotations are not needed. 
# The reason I chose to rename them is that there are three versions of 
MountPoint (ViewFileSystem, ViewFs, InodeTree) and it could be confusing 
which one is being used. I made the external caller {{FsUsage}} explicitly 
prefix the parent class name so as to avoid confusion.
# Yes, already added {{TestViewFileSystemHdfs#testDf}} to verify the 
human-readable behavior of the Df command. The test verifies the following:
## When the _fs_ is not _viewfs://_, the Df command should not display the 
_Mounted on_ column
## When the _fs_ is _viewfs://_, the Df command should display the _Mounted on_ 
column
## Various paths and their corresponding mounted filesystem usages
# Removed the extra white space in the definition of updateMountPointFsStatus.
# Added one more test, {{ViewFileSystemBaseTest#testtestViewFileSystemUtil}}, to 
verify all the {{ViewFileSystemUtil}} contracts.


> Implement 'hadoop fs -df' command for ViewFileSystem   
> ---
>
> Key: HDFS-11058
> URL: https://issues.apache.org/jira/browse/HDFS-11058
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: viewfs
> Attachments: HDFS-11058.01.patch, HDFS-11058.02.patch, 
> HDFS-11058.03.patch
>
>
> Df command doesn't seem to work well with ViewFileSystem. It always reports 
> used data as 0. Here is the client mount table configuration I am using 
> against a federated clusters of 2 NameNodes and 2 DataNoes. 
> {code}
>   1 <configuration>
>   2 
>   3   <property>
>   4     <name>fs.defaultFS</name>
>   5     <value>viewfs://ClusterX/</value>
>   6   </property>
>   ..
>  11   <property>
>  12     <name>fs.default.name</name>
>  13     <value>viewfs://ClusterX/</value>
>  14   </property>
>  ..
>  23   <property>
>  24     <name>fs.viewfs.mounttable.ClusterX.link./nn0</name>
>  25     <value>hdfs://127.0.0.1:50001/</value>
>  26   </property>
>  27   <property>
>  28     <name>fs.viewfs.mounttable.ClusterX.link./nn1</name>
>  29     <value>hdfs://127.0.0.1:51001/</value>
>  30   </property>
>  31   <property>
>  32     <name>fs.viewfs.mounttable.ClusterX.link./nn2</name>
>  33     <value>hdfs://127.0.0.1:52001/nn2</value>
>  34   </property>
>  35   <property>
>  36     <name>fs.viewfs.mounttable.ClusterX.link./nn3</name>
>  37     <value>hdfs://127.0.0.1:52001/nn3</value>
>  38   </property>
>  39   <property>
>  40     <name>fs.viewfs.mounttable.ClusterY.linkMergeSlash</name>
>  41     <value>hdfs://127.0.0.1:50001/</value>
>  42   </property>
>  43 </configuration>
> {code}
> {{Df}} command always reports Size/Available as 8.0E and the usage as 0 for 
> any federated cluster. 
> {noformat}
> # hadoop fs -fs viewfs://ClusterX/ -df  /
> Filesystem Size  UsedAvailable  Use%
> viewfs://ClusterX/  9223372036854775807 0  92233720368547758070%
> # hadoop fs -fs viewfs://ClusterX/ -df  -h /
> Filesystem   Size  Used  Available  Use%
> viewfs://ClusterX/  8.0 E 0  8.0 E0%
> # hadoop fs -fs viewfs://ClusterY/ -df  -h /
> Filesystem   Size  Used  Available  Use%
> viewfs://ClusterY/  8.0 E 0  8.0 E0%
> {noformat}
> Whereas {{Du}} command seems to work as expected even with ViewFileSystem.
> {noformat}
> # hadoop fs -fs viewfs://ClusterY/ -du -h /
> 10.6 K  31.8 K  /build.log.16y
> 0   0   /user
> # hadoop fs -fs viewfs://ClusterX/ -du -h /
> 10.6 K  31.8 K  /nn0
> 0   0   /nn1
> 20.2 K  35.8 K  /nn3
> 40.6 K  34.3 K  /nn4
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10533) Make DistCpOptions class immutable

2016-11-15 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-10533:

Hadoop Flags: Incompatible change

> Make DistCpOptions class immutable
> --
>
> Key: HDFS-10533
> URL: https://issues.apache.org/jira/browse/HDFS-10533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10533.000.patch, HDFS-10533.000.patch, 
> HDFS-10533.001.patch, HDFS-10533.002.patch, HDFS-10533.003.patch, 
> HDFS-10533.004.patch, HDFS-10533.005.patch, HDFS-10533.006.patch, 
> HDFS-10533.007.patch, HDFS-10533.008.patch
>
>
> Currently the {{DistCpOptions}} class encapsulates all DistCp options, which 
> may be set from the command line (via the {{OptionsParser}}) or set 
> manually (e.g. construct an instance and call setters). As there are multiple 
> option fields, with more to add (e.g. [HDFS-9868], [HDFS-10314]), validating 
> them can be cumbersome. Ideally, the {{DistCpOptions}} object should be 
> immutable. The benefits are:
> # {{DistCpOptions}} is simpler and easier to use and share, plus it scales well
> # validation is automatic, e.g. a manually constructed {{DistCpOptions}} gets 
> validated before usage
> # validation error messages are well-defined and do not depend on the order 
> of setters
> This jira is to track the effort of making {{DistCpOptions}} immutable by 
> using a Builder pattern for creation.
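A minimal sketch of the Builder pattern described, with hypothetical field names (the actual option set is defined by the patch):
{code}
import org.apache.hadoop.fs.Path;

// Illustrative only; fields are hypothetical, not the patch's option set.
public final class DistCpOptions {
  private final Path sourcePath;
  private final Path targetPath;
  private final boolean overwrite;

  private DistCpOptions(Builder builder) {
    this.sourcePath = builder.sourcePath;
    this.targetPath = builder.targetPath;
    this.overwrite = builder.overwrite;
  }

  public static class Builder {
    private final Path sourcePath;
    private final Path targetPath;
    private boolean overwrite;

    public Builder(Path sourcePath, Path targetPath) {
      this.sourcePath = sourcePath;
      this.targetPath = targetPath;
    }

    public Builder withOverwrite(boolean overwrite) {
      this.overwrite = overwrite;
      return this;
    }

    public DistCpOptions build() {
      // Validation runs once here, independent of setter order.
      if (sourcePath == null || targetPath == null) {
        throw new IllegalArgumentException("source and target are required");
      }
      return new DistCpOptions(this);
    }
  }
}
{code}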



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10786) Erasure Coding: Add removeErasureCodingPolicy API

2016-11-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-10786.

Resolution: Duplicate

Closing since I think this is a dupe of HDFS-11072.

> Erasure Coding: Add removeErasureCodingPolicy API
> -
>
> Key: HDFS-10786
> URL: https://issues.apache.org/jira/browse/HDFS-10786
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xinwei Qin 
>  Labels: hdfs-ec-3.0-must-do
>
> HDFS-7859 has developed addErasureCodingPolicy API to add some user-added 
> Erasure Coding policies, and as discussed in HDFS-7859, we should also add 
> removeErasureCodingPolicy API to support removing some user-added Erasure 
> Coding Polices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11043) TestWebHdfsTimeouts fails

2016-11-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11043:
---
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: John Zhuge
> Attachments: org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11091) Implement a getTrashRoot that does not fall-back

2016-11-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11091:
---
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> Implement a getTrashRoot that does not fall-back
> 
>
> Key: HDFS-11091
> URL: https://issues.apache.org/jira/browse/HDFS-11091
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
>
> From HDFS-10756's 
> [discussion|https://issues.apache.org/jira/browse/HDFS-10756?focusedCommentId=15623755=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15623755]:
> {{getTrashRoot}} is supposed to return the trash dir considering the 
> encryption zone. But if an error is encountered (e.g. an access control 
> exception), it falls back to the default trash dir.
> Although there is a warning message about this, it is still a somewhat 
> surprising behavior. The fallback was added by HDFS-9799 for compatibility 
> reasons. This jira proposes adding a getTrashRoot that throws, which 
> will actually be more user-friendly.
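A minimal sketch of the non-falling-back variant being proposed; the method name and internals ({{dfs}}, {{getPathName}}, {{userName}}) are assumptions for illustration:
{code}
// Illustrative only: unlike getTrashRoot(Path), this variant propagates
// failures instead of silently falling back to the default trash directory.
public Path getTrashRootStrict(Path path) throws IOException {
  EncryptionZone ez = dfs.getEZForPath(getPathName(path)); // may throw ACE
  if (ez != null) {
    // Per-user trash root inside the encryption zone.
    return new Path(new Path(ez.getPath(), ".Trash"), userName);
  }
  return super.getTrashRoot(path);
}
{code}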



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668836#comment-15668836
 ] 

Hadoop QA commented on HDFS-11081:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 9 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 169 unchanged - 0 fixed = 171 total (was 169) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839077/HDFS-11081-HDFS-7240.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 90ba7765544c 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 23eba15 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17577/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17577/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668776#comment-15668776
 ] 

Hadoop QA commented on HDFS-11094:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 27 new + 255 unchanged - 9 fixed = 282 total (was 264) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11094 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839065/HDFS-11094.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux e462c95850ca 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f121d0b |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17576/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17576/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17576/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Created] (HDFS-11143) start.sh doesn't return any error message even namenode is not up.

2016-11-15 Thread Yufei Gu (JIRA)
Yufei Gu created HDFS-11143:
---

 Summary: start.sh doesn't return any error message even namenode 
is not up.
 Key: HDFS-11143
 URL: https://issues.apache.org/jira/browse/HDFS-11143
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yufei Gu






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time

2016-11-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668728#comment-15668728
 ] 

Andrew Wang commented on HDFS-10996:


Thanks for working on this [~Sammi]. This seems like a good time to revisit the 
create API. How do you feel about a builder-based API? Otherwise, it's quite 
difficult for users to correctly fill in all the different parameters.

This refactor is something we could work on in a separate JIRA first to reduce 
the scope of the changes.
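
For the sake of discussion, here is a rough sketch of what a builder-based create could look like. None of these builder methods exist today; the names (createFile, ecPolicy, etc.) are made up purely to illustrate the shape of the API:

{code}
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;

// Illustrative only -- a possible builder-based create API.
FSDataOutputStream out = fs.createFile(new Path("/user/sammi/data"))
    .replication((short) 3)
    .blockSize(128L * 1024 * 1024)
    .ecPolicy("RS-DEFAULT-6-3-64k")  // the per-file EC policy hook from this JIRA
    .build();
{code}

The nice property is that callers only spell out the parameters they care about, instead of picking among overloads with long positional argument lists.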

> Ability to specify per-file EC policy at create time
> 
>
> Key: HDFS-10996
> URL: https://issues.apache.org/jira/browse/HDFS-10996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
> Attachments: HDFS-10996-v1.patch, HDFS-10996-v2.patch, 
> HDFS-10996-v3.patch
>
>
> Based on discussion in HDFS-10971, it would be useful to specify the EC 
> policy when the file is created. This is useful for situations where app 
> requirements do not map nicely to the current directory-level policies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11133) Ozone: Add allocateContainer RPC

2016-11-15 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668723#comment-15668723
 ] 

Chen Liang commented on HDFS-11133:
---

pending Jenkins, LGTM +1

> Ozone: Add allocateContainer RPC
> 
>
> Key: HDFS-11133
> URL: https://issues.apache.org/jira/browse/HDFS-11133
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: oz
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11133-HDFS-7240.001.patch
>
>
> Add allocateContainer RPC in SCM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11090) Leave safemode immediately if all blocks have reported in

2016-11-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668671#comment-15668671
 ] 

Andrew Wang commented on HDFS-11090:


Hi Konst, thanks for the insight,

bq. Andrew, "empty cluster" as a special case was always there. It was somewhat 
tempered when DataNodes were added to the SafeMode conditions. I guess even if 
there are no blocks in the cluster you would at least want to wait for DN 
registrations to assure you have something to write to.

You've exactly identified what I'd like to do with this JIRA. We have an empty 
cluster with the min datanodes threshold set to a non-zero value. HDFS still 
goes into safemode extension though.
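
For reference, these are the thresholds in play, read here with their stock defaults (a sketch, not actual NameNode code):

{code}
import org.apache.hadoop.conf.Configuration;

// Startup safemode is governed by these keys (defaults shown as fallbacks).
Configuration conf = new Configuration();
float thresholdPct =
    conf.getFloat("dfs.namenode.safemode.threshold-pct", 0.999f); // % blocks reported
int minDatanodes =
    conf.getInt("dfs.namenode.safemode.min.datanodes", 0);        // min live DNs
int extensionMs =
    conf.getInt("dfs.namenode.safemode.extension", 30000);        // extension, in ms
{code}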

> Leave safemode immediately if all blocks have reported in
> -
>
> Key: HDFS-11090
> URL: https://issues.apache.org/jira/browse/HDFS-11090
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yiqun Lin
> Attachments: HDFS-11090.001.patch
>
>
> Startup safemode is triggered by two thresholds: % blocks reported in, and 
> min # datanodes. It's extended by an interval (default 30s) until these two 
> thresholds are met.
> Safemode extension is helpful when the cluster has data, and the default % 
> blocks threshold (0.99) is used. It gives DNs a little extra time to report 
> in and thus avoid unnecessary replication work.
> However, we can leave startup safemode early if 100% of blocks have reported 
> in.
> Note that operators sometimes change the % blocks threshold to > 1 to never 
> automatically leave safemode. We should maintain this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-15 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11081:

Attachment: HDFS-11081-HDFS-7240.005.patch

Updated to take care of review comments.

> Ozone:SCM: Add support for registerNode in datanode
> ---
>
> Key: HDFS-11081
> URL: https://issues.apache.org/jira/browse/HDFS-11081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: Design note for StateMachine.pdf, 
> HDFS-11081-HDFS-7240.001.patch, HDFS-11081-HDFS-7240.002.patch, 
> HDFS-11081-HDFS-7240.003.patch, HDFS-11081-HDFS-7240.004.patch, 
> HDFS-11081-HDFS-7240.005.patch
>
>
> Add support for registerDatanode in Datanode. This allows the container to 
> use SCM via Container datanode protocols.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-11-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668650#comment-15668650
 ] 

Andrew Wang commented on HDFS-10368:


Hi Rakesh, right now there's this line in ReplicationMonitor:

{code}
  Thread.sleep(replicationRecheckInterval);
{code}

I was suggesting that we could use TimeUnit.MILLISECONDS.sleep instead for 
additional clarity. Up to you though.
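
That is, a one-line change along these lines:

{code}
import java.util.concurrent.TimeUnit;

// Behaves the same as Thread.sleep(replicationRecheckInterval), but the
// time unit is explicit at the call site.
TimeUnit.MILLISECONDS.sleep(replicationRecheckInterval);
{code}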

> Erasure Coding: Deprecate replication-related config keys
> -
>
> Key: HDFS-10368
> URL: https://issues.apache.org/jira/browse/HDFS-10368
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10368-00.patch, HDFS-10368-01.patch
>
>
> This jira is to revisit the replication-based config keys and deprecate them (if 
> necessary) in order to make them more meaningful.
> Please refer to the [discussion 
> thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter

2016-11-15 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-11094:
---
Attachment: HDFS-11094.006.patch

New patch adds the {{INITIALIZING}} state to the convert() methods to fix the 
test failures, and consolidates redundant code in the convert() methods.
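
Roughly speaking, the convert() methods need to cover all three states now. A sketch of the usual switch-style mapping (not necessarily the exact code in the patch):

{code}
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
import org.apache.hadoop.ha.proto.HAServiceProtocolProtos.HAServiceStateProto;

// Map HAServiceState to the protobuf enum, now including INITIALIZING.
public static HAServiceStateProto convert(HAServiceState s) {
  switch (s) {
  case INITIALIZING:
    return HAServiceStateProto.INITIALIZING;
  case ACTIVE:
    return HAServiceStateProto.ACTIVE;
  case STANDBY:
    return HAServiceStateProto.STANDBY;
  default:
    throw new IllegalArgumentException("Unexpected HAServiceState: " + s);
  }
}
{code}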

> Send back HAState along with NamespaceInfo during a versionRequest as an 
> optional parameter
> ---
>
> Key: HDFS-11094
> URL: https://issues.apache.org/jira/browse/HDFS-11094
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, 
> HDFS-11094.003.patch, HDFS-11094.004.patch, HDFS-11094.005.patch, 
> HDFS-11094.006.patch
>
>
> The datanode should know which NN is active when it is connecting/registering 
> to the NN. Currently, it only figures this out during its first (and 
> subsequent) heartbeat(s) and so there is a period of time where the datanode 
> is alive and registered, but can't actually do anything because it doesn't 
> know which NN is active. A byproduct of this is that the MiniDFSCluster will 
> become active before it knows what NN is active, which can lead to NPEs when 
> calling getActiveNN(). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668534#comment-15668534
 ] 

Xiaoyu Yao commented on HDFS-11081:
---

Thanks [~anu] for addressing my review comments. 
The patch v04 looks pretty good to me. +1 with just a few NITs. 

OzoneClientUtils.java
Line 201-202: NIT: use a local variable to avoid calling getHostName() twice 
(see the sketch after these notes). 
Line 253-254: same as above.

RunningDatanodeState.java
Line 223: NIT: extra space

VersionEndpointTask.java
Line 61: NIT: can be moved up
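
For the getHostName() NIT, the shape of the fix is simply this (illustrative only; {{addr}} stands in for whatever object is actually queried in OzoneClientUtils):

{code}
// Call getHostName() once and reuse the local instead of re-calling it.
final String host = addr.getHostName();
if (host != null && !host.isEmpty()) {
  LOG.info("Using host {}", host);  // second use reads the local
}
{code}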

> Ozone:SCM: Add support for registerNode in datanode
> ---
>
> Key: HDFS-11081
> URL: https://issues.apache.org/jira/browse/HDFS-11081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: Design note for StateMachine.pdf, 
> HDFS-11081-HDFS-7240.001.patch, HDFS-11081-HDFS-7240.002.patch, 
> HDFS-11081-HDFS-7240.003.patch, HDFS-11081-HDFS-7240.004.patch
>
>
> Add support for registerDatanode in Datanode. This allows the container to 
> use SCM via Container datanode protocols.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11108) Ozone: use containers with the state machine

2016-11-15 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11108:

Attachment: HDFS-11108-HDFS-7240.003.patch

[~xyao] Thanks for the comment about the exception. I have fixed that issue in 
this patch.

> Ozone: use containers with the state machine
> 
>
> Key: HDFS-11108
> URL: https://issues.apache.org/jira/browse/HDFS-11108
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11108-HDFS-7240.001.patch, 
> HDFS-11108-HDFS-7240.002.patch, HDFS-11108-HDFS-7240.003.patch
>
>
> Use containers via the newly added state machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-15 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668444#comment-15668444
 ] 

Anu Engineer commented on HDFS-11081:
-

Test Failures are not related to this patch.

> Ozone:SCM: Add support for registerNode in datanode
> ---
>
> Key: HDFS-11081
> URL: https://issues.apache.org/jira/browse/HDFS-11081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: Design note for StateMachine.pdf, 
> HDFS-11081-HDFS-7240.001.patch, HDFS-11081-HDFS-7240.002.patch, 
> HDFS-11081-HDFS-7240.003.patch, HDFS-11081-HDFS-7240.004.patch
>
>
> Add support for registerDatanode in Datanode. This allows the container to 
> use SCM via Container datanode protocols.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668328#comment-15668328
 ] 

Hadoop QA commented on HDFS-11081:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 9 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 169 unchanged - 0 fixed = 171 total (was 169) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839023/HDFS-11081-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 40e5e52b8a63 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 23eba15 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17575/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17575/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17575/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17575/testReport/ |
| modules | C: 

[jira] [Commented] (HDFS-8870) Lease is leaked on write failure

2016-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668315#comment-15668315
 ] 

Hudson commented on HDFS-8870:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10840 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10840/])
HDFS-8870. Lease is leaked on write failure. Contributed by Kuhu Shukla. 
(kihwal: rev 4fcea8a0c8019d6d9a5e6f315c83659938b93a40)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java


> Lease is leaked on write failure
> 
>
> Key: HDFS-8870
> URL: https://issues.apache.org/jira/browse/HDFS-8870
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Kuhu Shukla
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-8870.001.patch
>
>
> Creating this ticket on behalf of [~daryn]
> We've seen this in one of our clusters. When a long-running process has a 
> write failure, the lease is leaked and gets renewed until the token is 
> expired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-8870) Lease is leaked on write failure

2016-11-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668304#comment-15668304
 ] 

Kihwal Lee edited comment on HDFS-8870 at 11/15/16 9:02 PM:


Committed this to trunk, branch-2 and branch-2.8. 
Do you have a patch for 2.7 and 2.6?


was (Author: kihwal):
Committed this to trunk, branch-2 and branch-2.8. 

> Lease is leaked on write failure
> 
>
> Key: HDFS-8870
> URL: https://issues.apache.org/jira/browse/HDFS-8870
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Kuhu Shukla
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-8870.001.patch
>
>
> Creating this ticket on behalf of [~daryn]
> We've seen this in one of our clusters. When a long-running process has a 
> write failure, the lease is leaked and gets renewed until the token is 
> expired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8870) Lease is leaked on write failure

2016-11-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668304#comment-15668304
 ] 

Kihwal Lee commented on HDFS-8870:
--

Committed this to trunk, branch-2 and branch-2.8. 

> Lease is leaked on write failure
> 
>
> Key: HDFS-8870
> URL: https://issues.apache.org/jira/browse/HDFS-8870
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Kuhu Shukla
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-8870.001.patch
>
>
> Creating this ticket on behalf of [~daryn]
> We've seen this in one of our clusters. When a long-running process has a 
> write failure, the lease is leaked and gets renewed until the token is 
> expired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668300#comment-15668300
 ] 

Hadoop QA commented on HDFS-10930:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 22s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 481 unchanged - 12 fixed = 484 total (was 493) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
30s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 7 
unchanged - 0 fixed = 8 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File,
 long) may fail to clean up java.io.InputStream on checked exception  
Obligation to clean up resource created at BlockPoolSlice.java:clean up 
java.io.InputStream on checked exception  Obligation to clean up resource 
created at BlockPoolSlice.java:[line 718] is not discharged |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10930 |
| GITHUB PR | https://github.com/apache/hadoop/pull/160 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 483a4d7f4076 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5af572b |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17574/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17574/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
| 

[jira] [Updated] (HDFS-8870) Lease is leaked on write failure

2016-11-15 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-8870:
-
Fix Version/s: 3.0.0-alpha2
   2.8.0

> Lease is leaked on write failure
> 
>
> Key: HDFS-8870
> URL: https://issues.apache.org/jira/browse/HDFS-8870
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Kuhu Shukla
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-8870.001.patch
>
>
> Creating this ticket on behalf of [~daryn]
> We've seen this in one of our clusters. When a long-running process has a 
> write failure, the lease is leaked and gets renewed until the token is 
> expired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8870) Lease is leaked on write failure

2016-11-15 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-8870:
-
Assignee: Kuhu Shukla  (was: Daryn Sharp)

> Lease is leaked on write failure
> 
>
> Key: HDFS-8870
> URL: https://issues.apache.org/jira/browse/HDFS-8870
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Kuhu Shukla
> Attachments: HDFS-8870.001.patch
>
>
> Creating this ticket on behalf of [~daryn]
> We've seen this in one of our clusters. When a long-running process has a 
> write failure, the lease is leaked and gets renewed until the token is 
> expired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8870) Lease is leaked on write failure

2016-11-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668242#comment-15668242
 ] 

Kihwal Lee commented on HDFS-8870:
--

+1 The patch looks good.

> Lease is leaked on write failure
> 
>
> Key: HDFS-8870
> URL: https://issues.apache.org/jira/browse/HDFS-8870
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Kuhu Shukla
> Attachments: HDFS-8870.001.patch
>
>
> Creating this ticket on behalf of [~daryn]
> We've seen this in one of our clusters. When a long-running process has a 
> write failure, the lease is leaked and gets renewed until the token is 
> expired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10797) Disk usage summary of snapshots causes renamed blocks to get counted twice

2016-11-15 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-10797:
-
Release Note: Disk usage summaries previously incorrectly counted files 
twice if they had been renamed (including files moved to Trash) since being 
snapshotted. Summaries now include current data plus snapshotted data that is 
no longer under the directory either due to deletion or being moved outside of 
the directory.  (was: Disk usage summaries previously incorrectly counted files 
twice if they had been renamed since being snapshotted. Summaries now include 
current data plus snapshotted data that is no longer under in the directory 
either due to deletion or being moved outside of the directory.)

> Disk usage summary of snapshots causes renamed blocks to get counted twice
> --
>
> Key: HDFS-10797
> URL: https://issues.apache.org/jira/browse/HDFS-10797
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.8.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10797.001.patch, HDFS-10797.002.patch, 
> HDFS-10797.003.patch, HDFS-10797.004.patch, HDFS-10797.005.patch, 
> HDFS-10797.006.patch, HDFS-10797.007.patch, HDFS-10797.008.patch, 
> HDFS-10797.009.patch, HDFS-10797.010.patch, HDFS-10797.010.patch
>
>
> DirectoryWithSnapshotFeature.computeContentSummary4Snapshot calculates how 
> much disk space is used by a snapshot by tallying up the files in the 
> snapshot that have since been deleted (that way it won't overlap with regular 
> files whose disk usage is computed separately). However that is determined 
> from a diff that shows moved (to Trash or otherwise) or renamed files as a 
> deletion and a creation operation that may overlap with the list of blocks. 
> Only the deletion operation is taken into consideration, and this causes 
> those blocks to get represented twice in the disk usage tallying.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-15 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11081:

Attachment: HDFS-11081-HDFS-7240.004.patch

Addressed all test failures.

> Ozone:SCM: Add support for registerNode in datanode
> ---
>
> Key: HDFS-11081
> URL: https://issues.apache.org/jira/browse/HDFS-11081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: Design note for StateMachine.pdf, 
> HDFS-11081-HDFS-7240.001.patch, HDFS-11081-HDFS-7240.002.patch, 
> HDFS-11081-HDFS-7240.003.patch, HDFS-11081-HDFS-7240.004.patch
>
>
> Add support for registerDatanode in Datanode. This allows the container to 
> use SCM via Container datanode protocols.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-11-15 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667991#comment-15667991
 ] 

Arpit Agarwal edited comment on HDFS-10930 at 11/15/16 7:18 PM:


+1 pending Jenkins (just started a manual build).

I'll also hold off committing for a couple of days.


was (Author: arpitagarwal):
+1 pending Jenkins (just started a manual build).

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-10930.01.patch, HDFS-10930.02.patch, 
> HDFS-10930.03.patch, HDFS-10930.04.patch, HDFS-10930.05.patch, 
> HDFS-10930.06.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentation are 
> currently scattered across many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO related operations for easy 
> instrumentation, metrics collection, logging and troubleshooting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-11-15 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667991#comment-15667991
 ] 

Arpit Agarwal commented on HDFS-10930:
--

+1 pending Jenkins (just started a manual build).

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-10930.01.patch, HDFS-10930.02.patch, 
> HDFS-10930.03.patch, HDFS-10930.04.patch, HDFS-10930.05.patch, 
> HDFS-10930.06.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentation are 
> currently scattered across many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO related operations for easy 
> instrumentation, metrics collection, logging and troubleshooting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.

2016-11-15 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667978#comment-15667978
 ] 

Konstantin Shvachko commented on HDFS-11087:


Hey [~brahmareddy], looks like everybody is forgetting to update CHANGES.txt 
for branch-2.7. Should we consider adding a pre-commit hook?

> NamenodeFsck should check if the output writer is still writable.
> -
>
> Key: HDFS-11087
> URL: https://issues.apache.org/jira/browse/HDFS-11087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.5
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
> Fix For: 2.8.0, 2.9.0, 2.7.4
>
> Attachments: HDFS-11087-branch-2.000.patch, 
> HDFS-11087-branch-2.001.patch, HDFS-11087.branch-2.000.patch
>
>
> {{NamenodeFsck}} keeps running even after the client was interrupted. So if 
> you start {{fsck /}} on a large namespace and kill the client, the NameNode 
> will keep traversing the tree for hours although there is nobody to receive 
> the result. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11090) Leave safemode immediately if all blocks have reported in

2016-11-15 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667949#comment-15667949
 ] 

Konstantin Shvachko commented on HDFS-11090:


Andrew, "empty cluster" as a special case was always there. It was somewhat 
tempered when DataNodes where added to SafeMode conditions. I guess even if 
there are no blocks in the cluster you would at least want to wait for DN 
registrations to assure you have anything to write to.
Also don't give up on {{-D}} option easy. Not knowing your scripts, but one can 
safely assume that you have to format the cluster before first start, so it is 
a special case in startup, which sort of extends to the first startup.

> Leave safemode immediately if all blocks have reported in
> -
>
> Key: HDFS-11090
> URL: https://issues.apache.org/jira/browse/HDFS-11090
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yiqun Lin
> Attachments: HDFS-11090.001.patch
>
>
> Startup safemode is triggered by two thresholds: % blocks reported in, and 
> min # datanodes. It's extended by an interval (default 30s) until these two 
> thresholds are met.
> Safemode extension is helpful when the cluster has data, and the default % 
> blocks threshold (0.99) is used. It gives DNs a little extra time to report 
> in and thus avoid unnecessary replication work.
> However, we can leave startup safemode early if 100% of blocks have reported 
> in.
> Note that operators sometimes change the % blocks threshold to > 1 to never 
> automatically leave safemode. We should maintain this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667941#comment-15667941
 ] 

Hadoop QA commented on HDFS-11094:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 24 new + 264 unchanged - 0 fixed = 288 total (was 264) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
|   | hadoop.hdfs.TestDFSInotifyEventInputStream |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.qjournal.client.TestEpochsAreUnique |
|   | hadoop.hdfs.protocolPB.TestPBHelper |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.TestRollingUpgradeRollback |
|   | hadoop.hdfs.qjournal.server.TestJournalNode |
|   | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
|   | hadoop.hdfs.qjournal.TestNNWithQJM |
|   | hadoop.hdfs.TestRollingUpgradeDowngrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11094 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839002/HDFS-11094.005.patch |
| Optional Tests |  

[jira] [Commented] (HDFS-11108) Ozone: use containers with the state machine

2016-11-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667876#comment-15667876
 ] 

Xiaoyu Yao commented on HDFS-11108:
---

Thanks [~anu] for the update. I think we need to update the exception 
declarations of the methods from XceiverServer through OzoneContainer to 
DatanodeStateMachine so that they throw IOException instead of Exception. I also 
found that XceiverServer#stop() is missing an exception declaration even though 
its Javadoc says it throws one.
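
In other words, roughly this kind of change ({{StateMachineService}} is a made-up name standing in for the XceiverServer / OzoneContainer / DatanodeStateMachine methods):

{code}
import java.io.IOException;

// Narrow the checked exception from Exception to IOException.
interface StateMachineService {
  void start() throws IOException;  // was: throws Exception
  void stop() throws IOException;   // declare what the Javadoc already promises
}
{code}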

> Ozone: use containers with the state machine
> 
>
> Key: HDFS-11108
> URL: https://issues.apache.org/jira/browse/HDFS-11108
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11108-HDFS-7240.001.patch, 
> HDFS-11108-HDFS-7240.002.patch
>
>
> Use containers via the newly added state machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter

2016-11-15 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-11094:
---
Attachment: HDFS-11094.005.patch

Uploading a new patch that doesn't touch NNHAStatusHeartbeat. This simplifies 
the patch and decreases its overall size. 

> Send back HAState along with NamespaceInfo during a versionRequest as an 
> optional parameter
> ---
>
> Key: HDFS-11094
> URL: https://issues.apache.org/jira/browse/HDFS-11094
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, 
> HDFS-11094.003.patch, HDFS-11094.004.patch, HDFS-11094.005.patch
>
>
> The datanode should know which NN is active when it is connecting/registering 
> to the NN. Currently, it only figures this out during its first (and 
> subsequent) heartbeat(s) and so there is a period of time where the datanode 
> is alive and registered, but can't actually do anything because it doesn't 
> know which NN is active. A byproduct of this is that the MiniDFSCluster will 
> become active before it knows what NN is active, which can lead to NPEs when 
> calling getActiveNN(). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667680#comment-15667680
 ] 

Xiaoyu Yao commented on HDFS-11140:
---

Thanks [~linyiqun] for the explanation. That makes sense to me. One more 
suggestion: we can format firstScanTime into a human-readable form with a 
simpler patch using Apache Commons' FastDateFormat, like below.

{code}
org.apache.commons.lang.time.FastDateFormat.getInstance().format(firstScanTime)
{code}
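
For example, the log statement could become something like this (a sketch; the variable names are approximate, not necessarily the exact ones in DirectoryScanner):

{code}
import org.apache.commons.lang.time.FastDateFormat;

// Format the epoch-millis timestamp before logging it.
String firstScanTimeStr = FastDateFormat.getInstance().format(firstScanTime);
LOG.info("Periodic Directory Tree Verification scan starting at "
    + firstScanTimeStr + " with interval of " + scanPeriodMsecs + "ms");
{code}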


> Directory Scanner should log startup message time correctly
> ---
>
> Key: HDFS-11140
> URL: https://issues.apache.org/jira/browse/HDFS-11140
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11140.001.patch
>
>
> When DirectoryScanner is enabled, one can see the following log:
> {noformat}
> INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic 
> Directory Tree Verification scan starting at 1479189895562ms with interval of 
> 2160ms
> {noformat}
> The first is epoch time and should not have 'ms'. Or better yet, we can 
> change it to a human-readable format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter

2016-11-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11094:
-
Target Version/s: 2.9.0, 3.0.0-alpha2

Thanks for updating the JIRA. I changed the target version to branch-2.9+.

> Send back HAState along with NamespaceInfo during a versionRequest as an 
> optional parameter
> ---
>
> Key: HDFS-11094
> URL: https://issues.apache.org/jira/browse/HDFS-11094
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, 
> HDFS-11094.003.patch, HDFS-11094.004.patch
>
>
> The datanode should know which NN is active when it is connecting/registering 
> to the NN. Currently, it only figures this out during its first (and 
> subsequent) heartbeat(s) and so there is a period of time where the datanode 
> is alive and registered, but can't actually do anything because it doesn't 
> know which NN is active. A byproduct of this is that the MiniDFSCluster will 
> become active before it knows what NN is active, which can lead to NPEs when 
> calling getActiveNN(). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-15 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667618#comment-15667618
 ] 

Anu Engineer commented on HDFS-11081:
-

Test failures seem related to this patch. I will update the patch soon.


> Ozone:SCM: Add support for registerNode in datanode
> ---
>
> Key: HDFS-11081
> URL: https://issues.apache.org/jira/browse/HDFS-11081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: Design note for StateMachine.pdf, 
> HDFS-11081-HDFS-7240.001.patch, HDFS-11081-HDFS-7240.002.patch, 
> HDFS-11081-HDFS-7240.003.patch
>
>
> Add support for registerDatanode in Datanode. This allows the container to 
> use SCM via Container datanode protocols.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11108) Ozone: use containers with the state machine

2016-11-15 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11108:

Attachment: HDFS-11108-HDFS-7240.002.patch

Updated patch to address review comments.

The change related to the exception is still needed because the OzoneContainer 
class throws Exception. Addressed all other comments.

> Ozone: use containers with the state machine
> 
>
> Key: HDFS-11108
> URL: https://issues.apache.org/jira/browse/HDFS-11108
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11108-HDFS-7240.001.patch, 
> HDFS-11108-HDFS-7240.002.patch
>
>
> Use containers via the new added state machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-11-15 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667607#comment-15667607
 ] 

Sean Mackrory commented on HDFS-10702:
--

{quote}My concern is that if a significant portion of read requests follow this 
scenario (needs a fresher TxId), that will cause a high writeLock contention on 
SbNN.{quote}

Yes, this certainly isn't for every scenario. I view this as being useful for 
offloading some workloads from the active NameNode. I was hoping to get some 
precise measurements of how this performed relative to other HA proxy methods 
for various workloads by now - but I actually found a bug where 
RequestHedgingProxyProvider was broadcasting more traffic than it needed to 
with > 2 NameNodes, so I'll need to revisit that.

{quote}In the case of multiple standbys, one is the checkpointer, thus you can 
consider allowing client to connect to standbys not doing checkpoint.{quote}

That's a good idea - I'd certainly like to make the logic for deciding which 
NameNodes are in standby more robust. Perhaps this should be included in the 
'SyncInfo' structure?

{quote}After NN failover, does StaleReadProxyProvider#standbyProxies get 
refreshed? If not, a long running client could keep using the old 
standby.{quote}

It does not. It will reevaluate which proxies to use in the event of a failure 
(specifically, a failure of the active NN when writing, or a failure of all 
standby NNs when reading). I had thought about that possibility and decided to 
ignore it for now. The worst that will happen is they won't be using the 
optimal NameNode and you lose the benefit of the optimization. I was fine with 
that since the very nature of this feature is accepting sub-optimal results 
within reasonable bounds. But we could possibly add in some ability to 
reevaluate after a certain time period or number of requests or something.

{quote}I am interested in knowing more how the applications plan to use it, 
specifically when they will decide to call getSyncInfo. In multi tenant 
environment, an application might care about specific files/directories, not 
necessarily the namespace has changed at a global level.{quote}

That's an interesting idea to explore and I think it fits with the use case I 
had in mind. I'm picturing cases where someone is going to be doing some 
(almost entirely) read-only analytics of a dataset that is known to be complete 
(or close enough). We can make the assumption that the metadata won't be 
changing, and either speed up our analysis or minimize the impact of our 
analysis on other workloads. In that case, I would think restricting the stale 
reads to a specific subtree is perfectly reasonable (if it helps - tailing the 
edit log was already implemented). I suppose this might be used by someone 
wanting to search the whole filesystem for something and is okay with 
approximating results. But I would think this is less common, and one could 
always set '/' as the subtree they're concerned with.
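
As a rough sketch of the time-based reevaluation idea above (everything here 
is hypothetical and not part of the posted patches):
{code}
// Re-resolve the standby proxies once a refresh interval has elapsed, so a
// long-running client eventually notices a failover.
private static final long REFRESH_INTERVAL_MS = 10 * 60 * 1000; // assumption
private volatile long lastRefreshMs = Time.monotonicNow();

private synchronized List<ProxyInfo<T>> getStandbyProxies() {
  if (Time.monotonicNow() - lastRefreshMs > REFRESH_INTERVAL_MS) {
    standbyProxies = resolveStandbyProxies(); // hypothetical helper that
    lastRefreshMs = Time.monotonicNow();      // re-reads the HA configuration
  }
  return standbyProxies;
}
{code}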

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, HDFS-10702.002.patch, 
> HDFS-10702.003.patch, HDFS-10702.004.patch, HDFS-10702.005.patch, 
> HDFS-10702.006.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a bottleneck 
> for scalability. One way to solve this problem is to send read-only 
> operations to the Standby NameNode. The disadvantage is that it might be a 
> stale read. Here, I'm thinking of adding a Client API to enable/disable stale 
> read from the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter

2016-11-15 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667563#comment-15667563
 ] 

Eric Badger commented on HDFS-11094:


[~liuml07], I updated the subject and description.

bq. One concern is about backward compatibility if we change 
HeartbeatRequestProto.
I don't think HeartbeatRequestProto actually needs to be changed. I was trying 
to normalize some of the message types that were being used, but I normalized 
them toward what I was adding instead of what was already there. I'll fix this 
and upload a new patch.
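
For readers following the compatibility concern: keeping the new field 
{{optional}} in the proto message is what preserves backward compatibility, 
since old NameNodes simply omit it. A hedged sketch of the datanode-side read 
(the field and accessor names are illustrative, not the actual patch):
{code}
// Absent field => the NN predates this change; fall back to learning the HA
// state from the first heartbeat, as the datanode does today.
HAServiceState state = null;
if (versionResponse.hasState()) {       // hypothetical optional proto field
  state = PBHelper.convert(versionResponse.getState());
}
{code}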

> Send back HAState along with NamespaceInfo during a versionRequest as an 
> optional parameter
> ---
>
> Key: HDFS-11094
> URL: https://issues.apache.org/jira/browse/HDFS-11094
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, 
> HDFS-11094.003.patch, HDFS-11094.004.patch
>
>
> The datanode should know which NN is active when it is connecting/registering 
> to the NN. Currently, it only figures this out during its first (and 
> subsequent) heartbeat(s) and so there is a period of time where the datanode 
> is alive and registered, but can't actually do anything because it doesn't 
> know which NN is active. A byproduct of this is that the MiniDFSCluster will 
> become active before it knows what NN is active, which can lead to NPEs when 
> calling getActiveNN(). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10656) Optimize conversion of byte arrays back to path string

2016-11-15 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15665493#comment-15665493
 ] 

Xiao Chen edited comment on HDFS-10656 at 11/15/16 4:17 PM:


Thank you for the nice optimization [~daryn]!

Sorry for posting my late questions here:
- -It seems the new check on range doesn't cover the scenario that {{length}} < 
0. So {{length < 0 && offset + length >=0}} would be a valid input. Should we 
worry about this?- Edit: sorry false alarm, my bad.
- It seems the old behavior is to always return {{""}} if the byte array is 
0-length, without any input validation on {{offset}}/{{length}}. New behavior 
will throw {{IndexOutOfBoundsException}} from the precondition check.
I'm only asking from a code review perspective, and I guess this is okay since 
{{DFSUtil}} is private? I'm not sure why the old behavior was like that.
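
A tiny sketch of the behavioral difference in question, assuming the 
{{(components, offset, length)}} overload discussed above (values 
illustrative):
{code}
byte[][] empty = new byte[0][];
// old behavior: both calls quietly return ""
// new behavior: the second call fails the range precondition
DFSUtil.byteArray2PathString(empty, 0, 0);  // ""
DFSUtil.byteArray2PathString(empty, 1, 2);  // throws IndexOutOfBoundsException
{code}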



was (Author: xiaochen):
Thank you for the nice optimization [~daryn]!

Sorry for posting my late questions here:
- It seems the new check on range doesn't cover the scenario that {{length}} < 
0. So {{length < 0 && offset + length >=0}} would be a valid input. Should we 
worry about this?
- It seems the old behavior is to always return {{""}} if the byte array is 
0-length, without any input validation on {{offset}}/{{length}}. New behavior 
will throw {{IndexOutOfBoundsException}} from the precondition check.
I'm only asking from a code review perspective. And I guess this is okay since 
{{DFSUtil}} is private? Not sure why the old behavior is as such.


> Optimize conversion of byte arrays back to path string
> --
>
> Key: HDFS-10656
> URL: https://issues.apache.org/jira/browse/HDFS-10656
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10656.patch
>
>
> {{DFSUtil.byteArray2PathString}} generates excessive object allocation.
> # each byte array is encoded to a string (copy)
> # string appended to a builder which extracts the chars from the intermediate 
> string (copy) and adds to its own char array
> # builder's char array is re-alloced if over 16 chars (copy)
> # builder's toString creates another string (copy)
> Instead of allocating all these objects and performing multiple byte/char 
> encoding/decoding conversions, the byte array can be built in-place with a 
> single final conversion to a string.
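
As a rough illustration of the in-place approach described above (a simplified 
sketch, not the committed patch; imports {{java.nio.charset.StandardCharsets}}):
{code}
static String byteArray2PathString(byte[][] components) {
  if (components.length == 0) {
    return "";                          // preserve the empty-path behavior
  }
  int length = components.length - 1;   // one '/' between components
  for (byte[] c : components) {
    length += c.length;
  }
  byte[] path = new byte[length];
  int pos = 0;
  for (int i = 0; i < components.length; i++) {
    if (i > 0) {
      path[pos++] = (byte) '/';
    }
    System.arraycopy(components[i], 0, path, pos, components[i].length);
    pos += components[i].length;
  }
  // exactly one byte->char decode for the whole path
  return new String(path, StandardCharsets.UTF_8);
}
{code}
With a leading empty root component (e.g. {{["", "foo", "bar"]}}) this yields 
"/foo/bar" with no intermediate strings and no builder reallocation.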



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11134) Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock

2016-11-15 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667474#comment-15667474
 ] 

Brahma Reddy Battula commented on HDFS-11134:
-

[~linyiqun], thanks for the explanation.
Yes, that comment was misleading. +1 on option 1; can you update the patch? 
[~ajisakaa], what do you think?

> Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock
> -
>
> Key: HDFS-11134
> URL: https://issues.apache.org/jira/browse/HDFS-11134
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11134.001.patch
>
>
> I found two bind exceptions causing unit test failures in past Jenkins 
> builds. One is in {{TestRenameWhileOpen}}, the other in 
> {{TestPendingInvalidateBlock}}.
> Here is the stack trace from {{TestRenameWhileOpen}} (I can't find the stack 
> trace of {{TestPendingInvalidateBlock}} anymore since it happened too long 
> ago, but I'm sure it failed due to a bind exception):
> {code}
> java.net.BindException: Problem binding to [localhost:42155] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:919)
>   at org.apache.hadoop.ipc.Server.(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestRenameWhileOpen.testWhileOpenRenameToNonExistentDirectory(TestRenameWhileOpen.java:325)
> {code}
> Here specifying the namenode port is not necessary; this is similar to 
> HDFS-11129. I have run this test many times locally and it always passed. We 
> should do the same for {{TestPendingInvalidateBlock}}.
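
A minimal sketch of the proposed fix, assuming a typical {{MiniDFSCluster}} 
test setup:
{code}
// Let the OS pick a free ephemeral port instead of hard-coding one, so
// concurrent Jenkins executors cannot collide on the same port.
cluster = new MiniDFSCluster.Builder(conf)
    .nameNodePort(0)   // 0 = any free port; it is also the default, so
                       // simply dropping the explicit port call works too
    .numDataNodes(1)
    .build();
{code}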



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11142) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2016-11-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11142:
-
Description: 
The test {{TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit}} 
fails in trunk. I looked into this; it seems a long GC pause caused the 
datanode to be shut down unexpectedly while it was doing the large block 
report. The NPE was then thrown in the test. The related output log:
{code}
2016-11-15 11:31:18,889 [DataNode: 
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
 
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
  heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
(BPServiceActor.java:blockReport(415)) - Successfully sent block report 
0x2ae5dd91bec02273,  containing 2 storage report(s), of which we sent 2. The 
reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 
49 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-11-15 11:31:18,890 [DataNode: 
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
 
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
  heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
(BPOfferService.java:processCommandFromActive(696)) - Got finalize command for 
block pool BP-814229154-172.17.0.3-1479209475497
2016-11-15 11:31:24,026 
[org.apache.hadoop.util.JvmPauseMonitor$Monitor@97e93f1] INFO  
util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM or 
host machine (eg GC): pause of approximately 4936ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
2016-11-15 11:31:24,026 
[org.apache.hadoop.util.JvmPauseMonitor$Monitor@5a4bef8] INFO  
util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM or 
host machine (eg GC): pause of approximately 4898ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1943)) - Shutting down the Mini HDFS Cluster
2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(1983)) - Shutting down DataNode 0
{code}
The stack trace:
{code}
java.lang.NullPointerException: null
at 
org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:97)
{code}


  was:
The test {{TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit}} 
fails in trunk. I looked into this, it seemed the long-time gc caused the 
datanode to be shutdown unexpectedly when did the large block reporting. And 
then the NPE thew in the test. The related output log:
{code}
2016-11-15 11:31:18,889 [DataNode: 
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
 
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
  heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
(BPServiceActor.java:blockReport(415)) - Successfully sent block report 
0x2ae5dd91bec02273,  containing 2 storage report(s), of which we sent 2. The 
reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 
49 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-11-15 11:31:18,890 [DataNode: 
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
 
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
  heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
(BPOfferService.java:processCommandFromActive(696)) - Got finalize command for 
block pool BP-814229154-172.17.0.3-1479209475497
2016-11-15 11:31:24,026 
[org.apache.hadoop.util.JvmPauseMonitor$Monitor@97e93f1] INFO  
util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM or 
host machine (eg GC): pause of approximately 4936ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
2016-11-15 11:31:24,026 
[org.apache.hadoop.util.JvmPauseMonitor$Monitor@5a4bef8] INFO  
util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM or 
host machine (eg GC): pause of approximately 4898ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1943)) - Shutting down the Mini HDFS Cluster
2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(1983)) - Shutting down 

[jira] [Commented] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667468#comment-15667468
 ] 

Yiqun Lin commented on HDFS-11140:
--

The test {{TestLargeBlockReport}} failed twice; filed HDFS-11142 to track 
that.

> Directory Scanner should log startup message time correctly
> ---
>
> Key: HDFS-11140
> URL: https://issues.apache.org/jira/browse/HDFS-11140
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11140.001.patch
>
>
> When DirectoryScanner is enabled, one can see the following log:
> {noformat}
> INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic 
> Directory Tree Verification scan starting at 1479189895562ms with interval of 
> 2160ms
> {noformat}
> The first value is an epoch timestamp and should not have 'ms'. Or better 
> yet, we can change it to a human-readable format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11142) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2016-11-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667466#comment-15667466
 ] 

Yiqun Lin commented on HDFS-11142:
--

I plan to add retries for when the test fails. Will attach a patch soon. Also, 
thanks for the other comments.
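
A rough sketch of the retry idea (test-side; {{sendLargeBlockReport()}} is a 
hypothetical stand-in for the report call the test already makes):
{code}
int attempts = 3;
while (true) {
  try {
    sendLargeBlockReport();   // hypothetical: the existing report call
    break;                    // success, stop retrying
  } catch (IOException e) {
    if (--attempts == 0) {
      throw e;                // still failing after all retries
    }
    LOG.warn("Block report attempt failed, retrying", e);
  }
}
{code}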

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-11142
> URL: https://issues.apache.org/jira/browse/HDFS-11142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: test-fails-log.txt
>
>
> The test 
> {{TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit}} fails 
> in trunk. I looked into this; it seems a long GC pause caused the datanode 
> to be shut down unexpectedly while it was doing the large block report. The 
> NPE was then thrown in the test. The related output log:
> {code}
> 2016-11-15 11:31:18,889 [DataNode: 
> [[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
>  
> [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
>   heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
> (BPServiceActor.java:blockReport(415)) - Successfully sent block report 
> 0x2ae5dd91bec02273,  containing 2 storage report(s), of which we sent 2. The 
> reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate 
> and 49 msecs for RPC and NN processing. Got back one command: 
> FinalizeCommand/5.
> 2016-11-15 11:31:18,890 [DataNode: 
> [[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
>  
> [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
>   heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(696)) - Got finalize command 
> for block pool BP-814229154-172.17.0.3-1479209475497
> 2016-11-15 11:31:24,026 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@97e93f1] INFO  
> util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM 
> or host machine (eg GC): pause of approximately 4936ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
> 2016-11-15 11:31:24,026 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@5a4bef8] INFO  
> util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM 
> or host machine (eg GC): pause of approximately 4898ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
> 2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1943)) - Shutting down the Mini HDFS Cluster
> 2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdownDataNodes(1983)) - Shutting down DataNode 0
> {code}
> The stack trace:
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:97)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11142) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2016-11-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11142:
-
Attachment: test-fails-log.txt

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-11142
> URL: https://issues.apache.org/jira/browse/HDFS-11142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: test-fails-log.txt
>
>
> The test 
> {{TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit}} fails 
> in trunk. I looked into this; it seems a long GC pause caused the datanode 
> to be shut down unexpectedly while it was doing the large block report. The 
> NPE was then thrown in the test. The related output log:
> {code}
> 2016-11-15 11:31:18,889 [DataNode: 
> [[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
>  
> [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
>   heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
> (BPServiceActor.java:blockReport(415)) - Successfully sent block report 
> 0x2ae5dd91bec02273,  containing 2 storage report(s), of which we sent 2. The 
> reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate 
> and 49 msecs for RPC and NN processing. Got back one command: 
> FinalizeCommand/5.
> 2016-11-15 11:31:18,890 [DataNode: 
> [[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
>  
> [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
>   heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(696)) - Got finalize command 
> for block pool BP-814229154-172.17.0.3-1479209475497
> 2016-11-15 11:31:24,026 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@97e93f1] INFO  
> util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM 
> or host machine (eg GC): pause of approximately 4936ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
> 2016-11-15 11:31:24,026 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@5a4bef8] INFO  
> util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM 
> or host machine (eg GC): pause of approximately 4898ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
> 2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1943)) - Shutting down the Mini HDFS Cluster
> 2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdownDataNodes(1983)) - Shutting down DataNode 0
> {code}
> The stack trace:
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:97)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11142) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2016-11-15 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-11142:


 Summary: 
TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk
 Key: HDFS-11142
 URL: https://issues.apache.org/jira/browse/HDFS-11142
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yiqun Lin
Assignee: Yiqun Lin


The test {{TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit}} 
fails in trunk. I looked into this; it seems a long GC pause caused the 
datanode to be shut down unexpectedly while it was doing the large block 
report. The NPE was then thrown in the test. The related output log:
{code}
2016-11-15 11:31:18,889 [DataNode: 
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
 
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
  heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
(BPServiceActor.java:blockReport(415)) - Successfully sent block report 
0x2ae5dd91bec02273,  containing 2 storage report(s), of which we sent 2. The 
reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 
49 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-11-15 11:31:18,890 [DataNode: 
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
 
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]]
  heartbeating to localhost/127.0.0.1:51450] INFO  datanode.DataNode 
(BPOfferService.java:processCommandFromActive(696)) - Got finalize command for 
block pool BP-814229154-172.17.0.3-1479209475497
2016-11-15 11:31:24,026 
[org.apache.hadoop.util.JvmPauseMonitor$Monitor@97e93f1] INFO  
util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM or 
host machine (eg GC): pause of approximately 4936ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
2016-11-15 11:31:24,026 
[org.apache.hadoop.util.JvmPauseMonitor$Monitor@5a4bef8] INFO  
util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM or 
host machine (eg GC): pause of approximately 4898ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1943)) - Shutting down the Mini HDFS Cluster
2016-11-15 11:31:24,114 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(1983)) - Shutting down DataNode 0
{code}
The stack trace:
{code}
java.lang.NullPointerException: null
at 
org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:97)
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11134) Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock

2016-11-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667343#comment-15667343
 ] 

Yiqun Lin commented on HDFS-11134:
--

Thanks [~ajisakaa] for taking a look. That sentence is confusing; it can be 
understood in two ways:

1. Just restarting the cluster ensures the leases are persisted in the 
fsimage; the same namenode port is not a necessary condition.
2. Restarting the cluster while also keeping the same namenode port ensures 
the leases are persisted in the fsimage.

I prefer the first reading. If the leases have already been persisted, it 
would be strange to require the same port just to read them back, so I don't 
think it is a necessary condition. This is just my thought. Thanks!

> Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock
> -
>
> Key: HDFS-11134
> URL: https://issues.apache.org/jira/browse/HDFS-11134
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11134.001.patch
>
>
> I found two bind exceptions causing unit test failures in past Jenkins 
> builds. One is in {{TestRenameWhileOpen}}, the other in 
> {{TestPendingInvalidateBlock}}.
> Here is the stack trace from {{TestRenameWhileOpen}} (I can't find the stack 
> trace of {{TestPendingInvalidateBlock}} anymore since it happened too long 
> ago, but I'm sure it failed due to a bind exception):
> {code}
> java.net.BindException: Problem binding to [localhost:42155] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:919)
>   at org.apache.hadoop.ipc.Server.(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestRenameWhileOpen.testWhileOpenRenameToNonExistentDirectory(TestRenameWhileOpen.java:325)
> {code}
> Here specifying the namenode port is not necessary; this is similar to 
> HDFS-11129. I have run this test many times locally and it always passed. We 
> should do the same for {{TestPendingInvalidateBlock}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10802) [SPS]: Add satisfyStoragePolicy API in HdfsAdmin

2016-11-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667339#comment-15667339
 ] 

Rakesh R commented on HDFS-10802:
-

Thanks [~yuanbo], overall the latest patch looks good. Once the following 
minor comments are addressed, the patch is ready to go in.

# Please add {{@throws IOException}} to HdfsAdmin javadocs.
# Log message modification:
{code}
LOG.debug("Added block collection id " + id + " to satisfy list");
{code}
Please rephrase the message like this:
{code}
LOG.debug("Added block collection id {} to block storageMovementNeeded queue", 
id);
{code}
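
For context on why the placeholder form is preferred (assuming an SLF4J 
logger, which the {} syntax implies): the message is only assembled when debug 
logging is actually enabled, so the common disabled case costs nothing.
{code}
// Parameterized: no string concatenation unless debug is enabled.
LOG.debug("Added block collection id {} to block storageMovementNeeded queue",
    id);
// Same effect as, but tidier than, the guarded concatenation:
if (LOG.isDebugEnabled()) {
  LOG.debug("Added block collection id " + id
      + " to block storageMovementNeeded queue");
}
{code}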

> [SPS]: Add satisfyStoragePolicy API in HdfsAdmin
> 
>
> Key: HDFS-10802
> URL: https://issues.apache.org/jira/browse/HDFS-10802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Uma Maheswara Rao G
>Assignee: Yuanbo Liu
> Attachments: HDFS-10802-HDFS-10285.001.patch, 
> HDFS-10802-HDFS-10285.002.patch, HDFS-10802-HDFS-10285.003.patch, 
> HDFS-10802-HDFS-10285.004.patch, HDFS-10802.001.patch, editsStored
>
>
> This JIRA is to track the work for adding user/admin API for calling to 
> satisfyStoragePolicy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter

2016-11-15 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-11094:
---
Description: The datanode should know which NN is active when it is 
connecting/registering to the NN. Currently, it only figures this out during 
its first (and subsequent) heartbeat(s) and so there is a period of time where 
the datanode is alive and registered, but can't actually do anything because it 
doesn't know which NN is active. A byproduct of this is that the MiniDFSCluster 
will become active before it knows what NN is active, which can lead to NPEs 
when calling getActiveNN().   (was: {noformat}
java.lang.NullPointerException: null
at 
org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:96)
{noformat})
Summary: Send back HAState along with NamespaceInfo during a 
versionRequest as an optional parameter  (was: TestLargeBlockReport fails 
intermittently)

> Send back HAState along with NamespaceInfo during a versionRequest as an 
> optional parameter
> ---
>
> Key: HDFS-11094
> URL: https://issues.apache.org/jira/browse/HDFS-11094
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, 
> HDFS-11094.003.patch, HDFS-11094.004.patch
>
>
> The datanode should know which NN is active when it is connecting/registering 
> to the NN. Currently, it only figures this out during its first (and 
> subsequent) heartbeat(s) and so there is a period of time where the datanode 
> is alive and registered, but can't actually do anything because it doesn't 
> know which NN is active. A byproduct of this is that the MiniDFSCluster will 
> become active before it knows what NN is active, which can lead to NPEs when 
> calling getActiveNN(). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10802) [SPS]: Add satisfyStoragePolicy API in HdfsAdmin

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667212#comment-15667212
 ] 

Hadoop QA commented on HDFS-10802:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 1s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} HDFS-10285 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestFileChecksum |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10802 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838961/HDFS-10802-HDFS-10285.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 43bf9748ae10 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 5686f56 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17572/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17572/testReport/ |
| modules | C: 

[jira] [Commented] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667014#comment-15667014
 ] 

Hadoop QA commented on HDFS-11140:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Call to method of static java.text.DateFormat in 
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.start()  At 
DirectoryScanner.java:java.text.DateFormat in 
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.start()  At 
DirectoryScanner.java:[line 277] |
| Failed junit tests | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11140 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838948/HDFS-11140.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e9095752092b 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7ffb994 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17571/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17571/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-10802) [SPS]: Add satisfyStoragePolicy API in HdfsAdmin

2016-11-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10802:
--
Attachment: HDFS-10802-HDFS-10285.004.patch

> [SPS]: Add satisfyStoragePolicy API in HdfsAdmin
> 
>
> Key: HDFS-10802
> URL: https://issues.apache.org/jira/browse/HDFS-10802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Uma Maheswara Rao G
>Assignee: Yuanbo Liu
> Attachments: HDFS-10802-HDFS-10285.001.patch, 
> HDFS-10802-HDFS-10285.002.patch, HDFS-10802-HDFS-10285.003.patch, 
> HDFS-10802-HDFS-10285.004.patch, HDFS-10802.001.patch, editsStored
>
>
> This JIRA is to track the work for adding user/admin API for calling to 
> satisfyStoragePolicy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10802) [SPS]: Add satisfyStoragePolicy API in HdfsAdmin

2016-11-15 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15666973#comment-15666973
 ] 

Yuanbo Liu commented on HDFS-10802:
---

[~rakeshr] Thanks for your response.
Uploaded the v3 patch to address your comments.

> [SPS]: Add satisfyStoragePolicy API in HdfsAdmin
> 
>
> Key: HDFS-10802
> URL: https://issues.apache.org/jira/browse/HDFS-10802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Uma Maheswara Rao G
>Assignee: Yuanbo Liu
> Attachments: HDFS-10802-HDFS-10285.001.patch, 
> HDFS-10802-HDFS-10285.002.patch, HDFS-10802-HDFS-10285.003.patch, 
> HDFS-10802.001.patch, editsStored
>
>
> This JIRA is to track the work for adding user/admin API for calling to 
> satisfyStoragePolicy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10802) [SPS]: Add satisfyStoragePolicy API in HdfsAdmin

2016-11-15 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15666973#comment-15666973
 ] 

Yuanbo Liu edited comment on HDFS-10802 at 11/15/16 12:06 PM:
--

[~rakeshr] Thanks for your response.
Uploaded the v4 patch to address your comments.


was (Author: yuanbo):
[~rakeshr] Thanks for your response.
upload v3 patch to address your comment.

> [SPS]: Add satisfyStoragePolicy API in HdfsAdmin
> 
>
> Key: HDFS-10802
> URL: https://issues.apache.org/jira/browse/HDFS-10802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Uma Maheswara Rao G
>Assignee: Yuanbo Liu
> Attachments: HDFS-10802-HDFS-10285.001.patch, 
> HDFS-10802-HDFS-10285.002.patch, HDFS-10802-HDFS-10285.003.patch, 
> HDFS-10802.001.patch, editsStored
>
>
> This JIRA is to track the work for adding user/admin API for calling to 
> satisfyStoragePolicy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11140:
-
Attachment: HDFS-11140.001.patch

> Directory Scanner should log startup message time correctly
> ---
>
> Key: HDFS-11140
> URL: https://issues.apache.org/jira/browse/HDFS-11140
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11140.001.patch
>
>
> When DirectoryScanner is enabled, one can see the following log:
> {noformat}
> INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic 
> Directory Tree Verification scan starting at 1479189895562ms with interval of 
> 2160ms
> {noformat}
> The first value is an epoch timestamp and should not have 'ms'. Or better 
> yet, we can change it to a human-readable format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11140:
-
Attachment: (was: HDFS-11140.001.patch)

> Directory Scanner should log startup message time correctly
> ---
>
> Key: HDFS-11140
> URL: https://issues.apache.org/jira/browse/HDFS-11140
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11140.001.patch
>
>
> When DirectoryScanner is enabled, one can see the following log:
> {noformat}
> INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic 
> Directory Tree Verification scan starting at 1479189895562ms with interval of 
> 2160ms
> {noformat}
> The first value is an epoch timestamp and should not have 'ms'. Or better 
> yet, we can change it to a human-readable format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11140) Directory Scanner should log startup message time correctly

2016-11-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15666820#comment-15666820
 ] 

Yiqun Lin commented on HDFS-11140:
--

Thanks [~xyao] for the comments.
{quote}
How about removing the starting timestamp as the log4j always has that info at 
the beginning of each log entry?
{quote}
The log4j timestamp here is not the real start time of the DirectoryScanner. 
An additional offset, chosen randomly between 0 and {{scanPeriod}}, is added 
to the start time.

Attaching the new patch to fix the checkstyle warnings.
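
A sketch of the intended log line (illustrative, not the exact patch; 
{{scanPeriodMsecs}} is the configured interval). Note the instance-level 
formatter: the static {{java.text.DateFormat}} flagged by FindBugs in the QA 
run is not thread-safe.
{code}
long offset = ThreadLocalRandom.current().nextLong(scanPeriodMsecs);
long firstScanTime = Time.now() + offset;  // now + random [0, scanPeriod)
SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss,SSS");
LOG.info("Periodic Directory Tree Verification scan starting at "
    + fmt.format(new Date(firstScanTime))
    + " with interval of " + scanPeriodMsecs + "ms");
{code}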

> Directory Scanner should log startup message time correctly
> ---
>
> Key: HDFS-11140
> URL: https://issues.apache.org/jira/browse/HDFS-11140
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11140.001.patch
>
>
> When DirectoryScanner is enabled, one can see the following log:
> {noformat}
> INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic 
> Directory Tree Verification scan starting at 1479189895562ms with interval of 
> 2160ms
> {noformat}
> The first value is an epoch timestamp and should not have 'ms'. Or better 
> yet, we can change it to a human-readable format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11134) Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock

2016-11-15 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1596#comment-1596
 ] 

Akira Ajisaka commented on HDFS-11134:
--

{code}
  // restart cluster with the same namenode port as before.
  // This ensures that leases are persisted in fsimage.
{code}
If this comment is correct, we need to specify the same port. Hi [~linyiqun] 
and [~brahmareddy], what do you think?
If it is true, I think it's better to retry the test when a BindException 
occurs.
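
A rough sketch of that retry (test-side; assumes the enclosing test method 
declares {{throws Exception}}):
{code}
// Keep the fixed port so the lease-persistence assumption still holds, but
// retry when the port is momentarily occupied by a previous run.
for (int i = 0; ; i++) {
  try {
    cluster.restartNameNode();   // reuses the configured NN port
    break;
  } catch (java.net.BindException e) {
    if (i >= 2) {
      throw e;                   // give up after a few attempts
    }
    Thread.sleep(1000);          // wait for the port to be released
  }
}
{code}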

> Fix bind exceptions in TestRenameWhileOpen and TestPendingInvalidateBlock
> -
>
> Key: HDFS-11134
> URL: https://issues.apache.org/jira/browse/HDFS-11134
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11134.001.patch
>
>
> I found two bind exceptions causing unit test failures in past Jenkins 
> builds. One is in {{TestRenameWhileOpen}}, the other in 
> {{TestPendingInvalidateBlock}}.
> Here is the stack trace from {{TestRenameWhileOpen}} (I can't find the stack 
> trace of {{TestPendingInvalidateBlock}} anymore since it happened too long 
> ago, but I'm sure it failed due to a bind exception):
> {code}
> java.net.BindException: Problem binding to [localhost:42155] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
>   at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at org.apache.hadoop.hdfs.TestRenameWhileOpen.testWhileOpenRenameToNonExistentDirectory(TestRenameWhileOpen.java:325)
> {code}
> Specifying the namenode port here is not necessary; this is similar to 
> HDFS-11129. I have run this test many times locally and it always passed. We 
> should do the same for {{TestPendingInvalidateBlock}}.
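
For reference, a minimal sketch of bringing up the cluster without pinning the 
namenode port ({{nameNodePort(0)}} lets the OS choose a free ephemeral port), 
which avoids the collision entirely where a fixed port is not actually required:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class EphemeralPortClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nameNodePort(0)   // 0 = any free port, so parallel runs cannot collide
        .numDataNodes(1)
        .build();
    try {
      System.out.println("NameNode RPC port: " + cluster.getNameNodePort());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}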



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.

2016-11-15 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1543#comment-1543
 ] 

Brahma Reddy Battula commented on HDFS-11087:
-

It looks like {{CHANGES.txt}} was not updated for this. Can you update it?

> NamenodeFsck should check if the output writer is still writable.
> -
>
> Key: HDFS-11087
> URL: https://issues.apache.org/jira/browse/HDFS-11087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.5
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
> Fix For: 2.8.0, 2.9.0, 2.7.4
>
> Attachments: HDFS-11087-branch-2.000.patch, 
> HDFS-11087-branch-2.001.patch, HDFS-11087.branch-2.000.patch
>
>
> {{NamenodeFsck}} keeps running even after the client was interrupted. So if 
> you start {{fsck /}} on a large namespace and kill the client, the NameNode 
> will keep traversing the tree for hours although there is nobody to receive 
> the result. 
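
For illustration, a minimal sketch of the check the summary suggests 
({{PrintWriter.checkError()}} flushes and reports whether the underlying stream 
has failed, which is the case once the fsck client disconnects; the surrounding 
class and method are hypothetical, not the actual patch):

{code}
import java.io.PrintWriter;

class FsckWriterCheckSketch {
  // Call this periodically during the namespace traversal.
  static void checkWritable(PrintWriter out) {
    if (out.checkError()) {
      // The client is gone; abort the traversal instead of running for hours.
      throw new IllegalStateException("fsck output stream is no longer writable");
    }
  }
}
{code}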



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11141) [viewfs] Listfile gives complete Realm as User

2016-11-15 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-11141:
---

Assignee: Brahma Reddy Battula

> [viewfs] Listfile gives complete Realm as User
> --
>
> Key: HDFS-11141
> URL: https://issues.apache.org/jira/browse/HDFS-11141
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
>Priority: Minor
>
> When defaultFS is configured as viewfs --
> fs.defaultFS = viewfs://CLUSTER/
> Listing files shows the Realm as the User --
> hdfs dfs -ls /
> Found 2 items
> -r-xr-xr-x   - {color:red} h...@hadoop.com {color} hadoop  0 2016-11-07 15:31 /Dir1
> -r-xr-xr-x   - {color:red} h...@hadoop.com {color} hadoop  0 2016-11-07 15:31 /Dir2
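
For illustration, a small sketch of the distinction likely involved here (both 
methods are standard Hadoop {{UserGroupInformation}} API; in a Kerberized 
cluster the full name carries the realm while the short name drops it):

{code}
import org.apache.hadoop.security.UserGroupInformation;

public class RealmVsShortNameSketch {
  public static void main(String[] args) throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    System.out.println("full name : " + ugi.getUserName());       // e.g. user@REALM
    System.out.println("short name: " + ugi.getShortUserName());  // e.g. user
  }
}
{code}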



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11141) [viewfs] Listfile gives complete Realm as User

2016-11-15 Thread Archana T (JIRA)
Archana T created HDFS-11141:


 Summary: [viewfs] Listfile gives complete Realm as User
 Key: HDFS-11141
 URL: https://issues.apache.org/jira/browse/HDFS-11141
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation
Reporter: Archana T
Priority: Minor



When defaultFS is configured as viewfs --

fs.defaultFS = viewfs://CLUSTER/

Listing files shows the Realm as the User --
hdfs dfs -ls /
Found 2 items
-r-xr-xr-x   - {color:red} h...@hadoop.com {color} hadoop  0 2016-11-07 15:31 /Dir1
-r-xr-xr-x   - {color:red} h...@hadoop.com {color} hadoop  0 2016-11-07 15:31 /Dir2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


