[jira] [Updated] (HDFS-13236) Standby NN down with error encountered while tailing edits

2018-03-05 Thread Yuriy Malygin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuriy Malygin updated HDFS-13236:
-
Description: 
After updating Hadoop from 2.7.3 to 3.0.0, the standby NN went down with an error 
encountered while tailing edits from the JN:
{code:java}
Feb 28 01:58:31 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:31,594 INFO 
[FSImageSaver for /one/hadoop-data/dfs of type IMAGE_AND_EDITS] 
FSImageFormatProtobuf - Image file 
/one/hadoop-data/dfs/current/fsimage.ckpt_012748979
98 of size 4595971949 bytes saved in 93 seconds.
Feb 28 01:58:33 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:33,445 INFO 
[Standby State Checkpointer] NNStorageRetentionManager - Going to retain 2 
images with txid >= 1274897935
Feb 28 01:58:33 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:33,445 INFO 
[Standby State Checkpointer] NNStorageRetentionManager - Purging old image 
FSImageFile(file=/one/hadoop-data/dfs/current/fsimage_01274897875, 
cpktTxId
=01274897875)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,660 INFO 
[Edit log tailer] FSImage - Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@6a168e6f 
expecting start txid #1274897999
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,660 INFO 
[Edit log tailer] FSImage - Start loading edits file 
http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup&segmentTxId=1274897999&storageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848&inProgressOk=true, 
http://srve2916.local:8480/getJournal?jid=datalab-hadoop-backup&segmentTxId=1274897999&storageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848&inProgressOk=true
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,661 INFO 
[Edit log tailer] RedundantEditLogInputStream - Fast-forwarding stream 
'http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup&segmentTxId=1274897999&storageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848&inProgressOk=true, 
http://srve2916.local:8480/getJournal?jid=datalab-hadoop-backup&segmentTxId=1274897999&storageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848&inProgressOk=true' to transaction ID 1274897999
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,661 INFO 
[Edit log tailer] RedundantEditLogInputStream - Fast-forwarding stream 
'http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup&segmentTxId=1274897999&storageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848&inProgressOk=true'
 to transaction ID 1274897999
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,680 ERROR 
[Edit log tailer] FSEditLogLoader - Encountered exception on operation AddOp 
[length=0, inodeId=145550319, 
path=/kafka/parquet/infrastructureGrace/date=2018-02-28/_temporary/1/_temporary/attempt_1516181147167_20856_r_98_0/part-r-00098.gz.parquet,
 replication=3, mtime=1519772206615, atime=1519772206615, blockSize=134217728, 
blocks=[], permissions=root:supergroup:rw-r--r--, aclEntries=null, 
clientName=DFSClient_attempt_1516181147167_20856_r_98_0_1523538799_1, 
clientMachine=10.137.2.142, overwrite=false, RpcClientId=, RpcCallId=271996603, 
storagePolicyId=0, erasureCodingPolicyId=0, opCode=OP_ADD, txid=1274898002]
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 
java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
length 16
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:946)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:882)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:863)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 

Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:691)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:598)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:558)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:392)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:268)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:121)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:137)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2093)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:287)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2602)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:864)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:549)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
java.security.AccessController.doPrivileged(Native Method)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
javax.security.auth.Subject.doAs(Subject.java:422)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
{code}
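
For context on the exception above: per the stack trace, the failing check sits in 
the RetryCache$CacheEntry constructor, which requires a 16-byte client UUID, while 
the replayed AddOp carries an empty RpcClientId (RpcClientId= in the op dump). A 
minimal sketch of that check (simplified; names follow the stack trace, not the 
exact Hadoop source):
{code:java}
import com.google.common.base.Preconditions;

// Simplified sketch of the constructor check in
// org.apache.hadoop.ipc.RetryCache$CacheEntry (per the stack trace above).
class CacheEntrySketch {
  static final int CLIENT_ID_LENGTH = 16; // a client id is a 128-bit UUID

  CacheEntrySketch(byte[] clientId, int callId) {
    // An OP_ADD replayed with an empty RpcClientId (length 0) fails here with
    // "Invalid clientId - length is 0 expected length 16".
    Preconditions.checkArgument(clientId.length == CLIENT_ID_LENGTH,
        "Invalid clientId - length is %s expected length %s",
        clientId.length, CLIENT_ID_LENGTH);
  }
}
{code}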
{code:java}
[ph@srvd2711 srvg671]$ zgrep -c NotEnoughReplicasException 
archive/odnoklassniki-namenode.log-20180{22*,3*}.gz
archive/odnoklassniki-namenode.log-20180220.gz:0
archive/odnoklassniki-namenode.log-20180221.gz:0
archive/odnoklassniki-namenode.log-20180222.gz:0
archive/odnoklassniki-namenode.log-20180223.gz:0
archive/odnoklassniki-namenode.log-20180224.gz:0
archive/odnoklassniki-namenode.log-20180225.gz:0
archive/odnoklassniki-namenode.log-20180226.gz:0
archive/odnoklassniki-namenode.log-20180227.gz:0
archive/odnoklassniki-namenode.log-20180228.gz:0
[ph@srvd2711 srvg671]$ zgrep -c NotEnoughReplicasException 
archive/odnoklassniki-namenode.log-201803*.gz
archive/odnoklassniki-namenode.log-20180301.gz:173492
archive/odnoklassniki-namenode.log-20180302.gz:153192
archive/odnoklassniki-namenode.log-20180303.gz:0
archive/odnoklassniki-namenode.log-20180304.gz:0
archive/odnoklassniki-namenode.log-20180305.gz:0
{code}
In JN logs:
{code:java}
Feb 28 01:57:51 srvd87 datalab-journalnode[29960]: 2018-02-28 01:57:51,552 INFO 
[IPC Server handler 4 on 8485] FileJournalManager - Finalizing edits file 
/one/hadoop-data/journal/datalab-hadoop-backup/current/edits_inprogress_0
1274897999 -> 
/one/hadoop-data/journal/datalab-hadoop-backup/current/edits_01274897999-01274898515
Feb 28 01:58:34 srvd87 datalab-journalnode[29960]: 2018-02-28 01:58:34,671 INFO 
[qtp414690789-164] TransferFsImage - Sending fileName: 
/one/hadoop-data/journal/datalab-hadoop-backup/current/edits_01274897999-01274898515,
 fileSize: 80772. Sent total: 80772 bytes. Size of last segment intended to 
send: -1 bytes.
{code}
 

  was:
After updating Hadoop from 2.7.3 to 3.0.0, the standby NN went down with an error 
encountered while tailing edits from the JN:
{code:java}

Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:137)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2093)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:287)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2602)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:864)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:549)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
java.security.AccessController.doPrivileged(Native Method)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
javax.security.auth.Subject.doAs(Subject.java:422)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
Feb 28 01:57:20 srvg671 datalab-namenode[1807]: at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
{code}
{code:java}
[ph@srvd2711 srvg671]$ zgrep -c NotEnoughReplicasException 
archive/odnoklassniki-namenode.log-20180{22*,3*}.gz
archive/odnoklassniki-namenode.log-20180220.gz:0
archive/odnoklassniki-namenode.log-20180221.gz:0
archive/odnoklassniki-namenode.log-20180222.gz:0
archive/odnoklassniki-namenode.log-20180223.gz:0
archive/odnoklassniki-namenode.log-20180224.gz:0
archive/odnoklassniki-namenode.log-20180225.gz:0
archive/odnoklassniki-namenode.log-20180226.gz:0
archive/odnoklassniki-namenode.log-20180227.gz:0
archive/odnoklassniki-namenode.log-20180228.gz:0
[ph@srvd2711 srvg671]$ zgrep -c NotEnoughReplicasException 
archive/odnoklassniki-namenode.log-201803*.gz
archive/odnoklassniki-namenode.log-20180301.gz:173492
archive/odnoklassniki-namenode.log-20180302.gz:153192
archive/odnoklassniki-namenode.log-20180303.gz:0
archive/odnoklassniki-namenode.log-20180304.gz:0
archive/odnoklassniki-namenode.log-20180305.gz:0
{code}
In JN logs:
{code:java}
Feb 28 01:57:51 srvd87 datalab-journalnode[29960]: 2018-02-28 01:57:51,552 INFO 
[IPC Server handler 4 on 8485] FileJournalManager - Finalizing edits file 
/one/hadoop-data/journal/datalab-hadoop-backup/current/edits_inprogress_0
1274897999 -> 
/one/hadoop-data/journal/datalab-hadoop-backup/current/edits_01274897999-01274898515
Feb 28 01:58:34 srvd87 datalab-journalnode[29960]: 2018-02-28 01:58:34,671 INFO 
[qtp414690789-164] TransferFsImage - Sending fileName: 
/one/hadoop-data/journal/datalab-hadoop-backup/current/edits_01274897999-01274898515,
 fileSize: 80772. Sent total: 80772 bytes. Size of last segment intended to 
send: -1 bytes.
{code}
 


> Standby NN down with error encountered while tailing edits
> --
>
> Key: HDFS-13236
> URL: https://issues.apache.org/jira/browse/HDFS-13236
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 3.0.0
>Reporter: Yuriy Malygin
>Priority: Major
>
> After updating Hadoop from 2.7.3 to 3.0.0, the standby NN went down with an 
> error encountered while tailing edits from the JN:
> {code:java}
> Feb 28 01:58:31 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:31,594 
> INFO [FSImageSaver for /one/hadoop-data/dfs of type IMAGE_AND_EDITS] 
> FSImageFormatProtobuf - Image file 
> /one/hadoop-data/dfs/current/fsimage.ckpt_012748979
> 98 of size 4595971949 bytes saved in 93 seconds.
> Feb 28 01:58:33 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:33,445 
> INFO [Standby State Checkpointer] NNStorageRetentionManager - Going to retain 
> 2 images with txid >= 1274897935
> Feb 28 01:58:33 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:33,445 
> INFO [Standb

[jira] [Created] (HDFS-13236) Standby NN down with error encountered while tailing edits

2018-03-05 Thread Yuriy Malygin (JIRA)
Yuriy Malygin created HDFS-13236:


 Summary: Standby NN down with error encountered while tailing edits
 Key: HDFS-13236
 URL: https://issues.apache.org/jira/browse/HDFS-13236
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: journal-node, namenode
Affects Versions: 3.0.0
Reporter: Yuriy Malygin


After updating Hadoop from 2.7.3 to 3.0.0, the standby NN went down with an error 
encountered while tailing edits from the JN:
{code:java}
Feb 28 01:58:31 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:31,594 INFO 
[FSImageSaver for /one/hadoop-data/dfs of type IMAGE_AND_EDITS] 
FSImageFormatProtobuf - Image file 
/one/hadoop-data/dfs/current/fsimage.ckpt_012748979
98 of size 4595971949 bytes saved in 93 seconds.
Feb 28 01:58:33 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:33,445 INFO 
[Standby State Checkpointer] NNStorageRetentionManager - Going to retain 2 
images with txid >= 1274897935
Feb 28 01:58:33 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:33,445 INFO 
[Standby State Checkpointer] NNStorageRetentionManager - Purging old image 
FSImageFile(file=/one/hadoop-data/dfs/current/fsimage_01274897875, 
cpktTxId
=01274897875)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,660 INFO 
[Edit log tailer] FSImage - Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@6a168e6f 
expecting start txid #1274897999
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,660 INFO 
[Edit log tailer] FSImage - Start loading edits file 
http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup&segmentTxId=1274897999&storageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848&inProgressOk=true, 
http://srve2916.local:8480/getJournal?jid=datalab-hadoop-backup&segmentTxId=1274897999&storageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848&inProgressOk=true
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,661 INFO 
[Edit log tailer] RedundantEditLogInputStream - Fast-forwarding stream 
'http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup&segmentTxId=1274897999&storageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848&inProgressOk=true, 
http://srve2916.local:8480/getJournal?jid=datalab-hadoop-backup&segmentTxId=1274897999&storageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848&inProgressOk=true' to transaction ID 1274897999
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,661 INFO 
[Edit log tailer] RedundantEditLogInputStream - Fast-forwarding stream 
'http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup&segmentTxId=1274897999&storageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848&inProgressOk=true'
 to transaction ID 1274897999
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,680 ERROR 
[Edit log tailer] FSEditLogLoader - Encountered exception on operation AddOp 
[length=0, inodeId=145550319, 
path=/kafka/parquet/infrastructureGrace/date=2018-02-28/_temporary/1/_temporary/attempt_1516181147167_20856_r_98_0/part-r-00098.gz.parquet,
 replication=3, mtime=1519772206615, atime=1519772206615, blockSize=134217728, 
blocks=[], permissions=root:supergroup:rw-r--r--, aclEntries=null, 
clientName=DFSClient_attempt_1516181147167_20856_r_98_0_1523538799_1, 
clientMachine=10.137.2.142, overwrite=false, RpcClientId=, RpcCallId=271996603, 
storagePolicyId=0, erasureCodingPolicyId=0, opCode=OP_ADD, txid=1274898002]
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 
java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
length 16
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:946)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:882)
Feb 28 01:58:34 

[jira] [Commented] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387390#comment-16387390
 ] 

genericqa commented on HDFS-13222:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 626 unchanged - 3 fixed = 626 total (was 629) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.federation.router.TestRouterSafemode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13222 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913146/HDFS-13222.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 6a0644e46cda 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745190e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23313/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-12654) APPEND API call is different in HTTPFS and NameNode REST

2018-03-05 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387350#comment-16387350
 ] 

SammiChen commented on HDFS-12654:
--

Hi [~Nuke] and [~iwasakims], it seems this is not an issue after further 
investigation. Can it be closed?

> APPEND API call is different in HTTPFS and NameNode REST
> 
>
> Key: HDFS-12654
> URL: https://issues.apache.org/jira/browse/HDFS-12654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, httpfs, namenode
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 3.0.0-beta1
>Reporter: Andras Czesznak
>Priority: Major
>
> The APPEND REST API call behaves differently in the NameNode REST and the 
> HTTPFS code. The NameNode version creates the target file that the new data is 
> appended to if it does not exist at the time the call is issued. The HTTPFS 
> version assumes the target file exists when APPEND is called and can only 
> append the new data; it does not create the target file if it doesn't exist.
> The two implementations should be standardized; preferably, the HTTPFS version 
> should be modified to execute an implicit CREATE if the target file does not 
> exist.
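
A minimal sketch of the proposed standardization, assuming an 
org.apache.hadoop.fs.FileSystem fs and a Path path (illustrative only, not the 
actual HTTPFS patch): fall back to CREATE when the append target is missing.
{code:java}
// Hypothetical illustration: perform an implicit CREATE when the APPEND
// target does not exist, matching the NameNode REST behaviour.
FSDataOutputStream out = fs.exists(path)
    ? fs.append(path)          // target exists: normal APPEND
    : fs.create(path, false);  // target missing: implicit CREATE, no overwrite
{code}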



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-05 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387345#comment-16387345
 ] 

maobaolong commented on HDFS-13233:
---

[~striver.wang] makes sense. 

> RBF:getMountPoint doesn't return the correct mount point of the mount table
> ---
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch, HDFS-13233.002.patch
>
>
> Method MountTableResolver->getMountPoint will traverse the mount table and 
> return the first mount point that the input path starts with, but this 
> condition is not sufficient.
> Suppose the mount point table contains "/user", "/user/test", and 
> "/user/test1": if the input path is "/user/test111", the returned mount point 
> is "/user/test1", but the correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13235) DiskBalancer: Update Documentation to add newly added options

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387344#comment-16387344
 ] 

genericqa commented on HDFS-13235:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13235 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913152/HDFS-13235.00.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 3a5b48a58f4c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745190e |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23315/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer: Update Documentation to add newly added options
> -
>
> Key: HDFS-13235
> URL: https://issues.apache.org/jira/browse/HDFS-13235
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13235.00.patch
>
>
> HDFS-13181 added dfs.disk.balancer.plan.valid.interval
> HDFS-13178 added skipDateCheck Option
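
As a hedged illustration (the key name comes from HDFS-13181 above; the 
Configuration usage and the value format are assumptions, not from the patch):
{code:java}
// Hypothetical example of setting the new plan validity interval.
Configuration conf = new Configuration();
conf.set("dfs.disk.balancer.plan.valid.interval", "1d"); // value format assumed
{code}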



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-05 Thread wangzhiyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387342#comment-16387342
 ] 

wangzhiyuan edited comment on HDFS-13233 at 3/6/18 6:13 AM:


[~elgoiri] New patch has been uploaded.

Actually I think the function below is OK:
{code:java}
  private boolean isParentEntry(final String path, final String parent) {
if (!path.startsWith(parent)) {
  return false;
}
if (path.equals(parent)) {
  return true;
}
return path.charAt(parent.length()) == '/';
  }
{code}

path.startsWith(parent) guarantees path.length() >= parent.length(), and 
!path.equals(parent) guarantees path.length() != parent.length(), so 
path.charAt(parent.length()) is safe here.
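
A quick illustrative check against the example from this issue (hypothetical 
calls, not part of the attached patches):
{code:java}
// With mount points /user, /user/test and /user/test1, the input
// /user/test111 must resolve under /user only:
isParentEntry("/user/test111", "/user");        // true:  '/' follows "/user"
isParentEntry("/user/test111", "/user/test");   // false: '1' follows "/user/test"
isParentEntry("/user/test111", "/user/test1");  // false: '1' follows "/user/test1"
isParentEntry("/user/test1",   "/user/test1");  // true:  exact match
{code}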


was (Author: striver.wang):
[~elgoiri] New patch has been uploaded.

Actually I think the function below is OK:

```java
private boolean isParentEntry(final String path, final String parent) {
  if (!path.startsWith(parent)) {
    return false;
  }
  if (path.equals(parent)) {
    return true;
  }
  return path.charAt(parent.length()) == '/';
}
```
path.startsWith(parent) guarantees path.length() >= parent.length(), and 
!path.equals(parent) guarantees path.length() != parent.length(), so 
path.charAt(parent.length()) is safe here.

> RBF:getMountPoint doesn't return the correct mount point of the mount table
> ---
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch, HDFS-13233.002.patch
>
>
> Method MountTableResolver->getMountPoint will traverse the mount table and 
> return the first mount point that the input path starts with, but this 
> condition is not sufficient.
> Suppose the mount point table contains "/user", "/user/test", and 
> "/user/test1": if the input path is "/user/test111", the returned mount point 
> is "/user/test1", but the correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-05 Thread wangzhiyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387342#comment-16387342
 ] 

wangzhiyuan edited comment on HDFS-13233 at 3/6/18 6:12 AM:


[~elgoiri] New patch has been uploaded.

Actually I think the function below is OK:

```java
private boolean isParentEntry(final String path, final String parent) {
  if (!path.startsWith(parent)) {
    return false;
  }
  if (path.equals(parent)) {
    return true;
  }
  return path.charAt(parent.length()) == '/';
}
```

path.startsWith(parent) guarantees path.length() >= parent.length(), and 
!path.equals(parent) guarantees path.length() != parent.length(), so 
path.charAt(parent.length()) is safe here.


was (Author: striver.wang):
[~elgoiri] New patch has been uploaded.

Actually I think the function below is OK:
{code:java}
private boolean isParentEntry(final String path, final String parent) {
  if (!path.startsWith(parent)) {
    return false;
  }
  if (path.equals(parent)) {
    return true;
  }
  return path.charAt(parent.length()) == '/';
}
{code}
path.startsWith(parent) guarantees path.length() >= parent.length(), and 
!path.equals(parent) guarantees path.length() != parent.length(), so 
path.charAt(parent.length()) is safe here.

> RBF:getMountPoint doesn't return the correct mount point of the mount table
> ---
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch, HDFS-13233.002.patch
>
>
> Method MountTableResolver->getMountPoint will traverse the mount table and 
> return the first mount point that the input path starts with, but this 
> condition is not sufficient.
> Suppose the mount point table contains "/user", "/user/test", and 
> "/user/test1": if the input path is "/user/test111", the returned mount point 
> is "/user/test1", but the correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-05 Thread wangzhiyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387342#comment-16387342
 ] 

wangzhiyuan commented on HDFS-13233:


[~elgoiri] New patch has been uploaded.

Actually I think the function below is OK:
{code:java}
private boolean isParentEntry(final String path, final String parent) {
  if (!path.startsWith(parent)) {
    return false;
  }
  if (path.equals(parent)) {
    return true;
  }
  return path.charAt(parent.length()) == '/';
}
{code}
path.startsWith(parent) guarantees path.length() >= parent.length(), and 
!path.equals(parent) guarantees path.length() != parent.length(), so 
path.charAt(parent.length()) is safe here.

> RBF:getMountPoint doesn't return the correct mount point of the mount table
> ---
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch, HDFS-13233.002.patch
>
>
> Method MountTableResolver->getMountPoint will traverse the mount table and 
> return the first mount point that the input path starts with, but this 
> condition is not sufficient.
> Suppose the mount point table contains "/user", "/user/test", and 
> "/user/test1": if the input path is "/user/test111", the returned mount point 
> is "/user/test1", but the correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-05 Thread wangzhiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhiyuan updated HDFS-13233:
---
Attachment: HDFS-13233.002.patch

> RBF:getMountPoint doesn't return the correct mount point of the mount table
> ---
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch, HDFS-13233.002.patch
>
>
> Method MountTableResolver->getMountPoint will traverse the mount table and 
> return the first mount point that the input path starts with, but this 
> condition is not sufficient.
> Suppose the mount point table contains "/user", "/user/test", and 
> "/user/test1": if the input path is "/user/test111", the returned mount point 
> is "/user/test1", but the correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387329#comment-16387329
 ] 

genericqa commented on HDFS-13214:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13214 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913144/HDFS-13214.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5e57c53a3ea2 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745190e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23312/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23312/testReport/ |
| Max. process+thread count | 4220 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387322#comment-16387322
 ] 

genericqa commented on HDFS-13233:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.federation.router.TestRouterQuota |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.federation.resolver.TestMountTableResolver |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13233 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913137/HDFS-13233.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8c4ef5344e7b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745190e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23310/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-12975) Changes to the NameNode to support reads from standby

2018-03-05 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-12975:

Attachment: HDFS-12975.003.patch

> Changes to the NameNode to support reads from standby
> -
>
> Key: HDFS-12975
> URL: https://issues.apache.org/jira/browse/HDFS-12975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12975.000.patch, HDFS-12975.001.patch, 
> HDFS-12975.002.patch, HDFS-12975.003.patch
>
>
> In order to support reads from standby, the NameNode needs changes to add an 
> Observer role, turn off checkpointing, and such.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12975) Changes to the NameNode to support reads from standby

2018-03-05 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387309#comment-16387309
 ] 

Chao Sun commented on HDFS-12975:
-

Thanks Konstantin. Yes, I agree that option 2) seems unnecessary. Regarding 
transitions, do you think we should support bi-directional transitions between 
standby and observer? How about transitions between active and observer?

Also, with this setup I'm not sure whether we need automatic failover from 
observer to active in case the standby is not available. IMHO, in most cases one 
active and one standby work pretty well, so as long as we guarantee there's at 
least one standby it should be OK?

Also, thanks for the comments. Will upload a new patch soon.

> Changes to the NameNode to support reads from standby
> -
>
> Key: HDFS-12975
> URL: https://issues.apache.org/jira/browse/HDFS-12975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12975.000.patch, HDFS-12975.001.patch, 
> HDFS-12975.002.patch, HDFS-12975.003.patch
>
>
> In order to support reads from standby, the NameNode needs changes to add an 
> Observer role, turn off checkpointing, and such.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13235) DiskBalancer: Update Documentation to add newly added options

2018-03-05 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13235:
--
Attachment: HDFS-13235.00.patch

> DiskBalancer: Update Documentation to add newly added options
> -
>
> Key: HDFS-13235
> URL: https://issues.apache.org/jira/browse/HDFS-13235
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13235.00.patch
>
>
> HDFS-13181 added dfs.disk.balancer.plan.valid.interval
> HDFS-13178 added skipDateCheck Option



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13235) DiskBalancer: Update Documentation to add newly added options

2018-03-05 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13235:
--
Status: Patch Available  (was: Open)

> DiskBalancer: Update Documentation to add newly added options
> -
>
> Key: HDFS-13235
> URL: https://issues.apache.org/jira/browse/HDFS-13235
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13235.00.patch
>
>
> HDFS-13181 added dfs.disk.balancer.plan.valid.interval
> HDFS-13178 added skipDateCheck Option



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13226) RBF: We should throw the failure validate and refuse this mount entry

2018-03-05 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387305#comment-16387305
 ] 

Yiqun Lin commented on HDFS-13226:
--

Hi [~maobaolong],
{quote}If we change the log error to throw IOException to the user, does it make 
sense?
{quote}
I think we don't need to do this. It's okay to throw an exception when we detect 
that an invalid mount table entry was input.

Two comments for the initial patch:

1.
{code:java}
throw new IllegalArgumentException("Invalid entry. see more detail from log.");
{code}
Would you update the error message to {{Invalid entry created.}}?
{{see more detail from log}} seems unnecessary.

2. Please add a simple test in {{TestRouterAdminCLI}}, as [~elgoiri] also 
mentioned.

I can provide the review :).
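
For clarity, a minimal sketch of the validation being discussed (hypothetical; 
src stands for the mount entry source path, and the real patch may differ):
{code:java}
// Hypothetical sketch: reject the invalid mount entry instead of only logging.
if (!src.startsWith("/")) {
  throw new IllegalArgumentException("Invalid entry created.");
}
{code}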

> RBF: We should throw the failure validate and refuse this mount entry
> -
>
> Key: HDFS-13226
> URL: https://issues.apache.org/jira/browse/HDFS-13226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: RBF
> Fix For: 3.2.0
>
> Attachments: HDFS-13226.001.patch
>
>
> One of the mount entry source path rules is that the source path must start 
> with '/', but somebody didn't follow the rule and executed the following 
> command:
> {code:bash}
> $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/
> {code}
> But the console shows that we successfully added this entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387302#comment-16387302
 ] 

genericqa commented on HDFS-13212:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13212 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913140/HDFS-13212-003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bbf1fb23f4f7 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745190e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23311/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23311/testReport/ |
| Max. process+thread count | 3912 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23311/console |

[jira] [Commented] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387299#comment-16387299
 ] 

Xiao Chen commented on HDFS-13170:
--

Hi [~sodonnell],

Thanks for the new patch. I'm doing a final pass and found there's one more 
multi-line syntax thing in FSOperations.java. Also, its {{FSMkdirs}} method 
javadoc is missing the new parameter. Sorry for the nitpicks, +1 pending.

> Port webhdfs unmaskedpermission parameter to HTTPFS
> ---
>
> Key: HDFS-13170
> URL: https://issues.apache.org/jira/browse/HDFS-13170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-13170.001.patch, HDFS-13170.002.patch, 
> HDFS-13170.003.patch
>
>
> HDFS-6962 fixed a long-standing issue where default ACLs were not correctly 
> applied to files when they are created from the hadoop shell.
> With this change, if you create a file with default ACLs against the parent 
> directory, with dfs.namenode.posix.acl.inheritance.enabled=false, the result 
> is:
> {code}
> # file: /test_acl/file_from_shell_off
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:r--
> user:user2:rwx    #effective:r--
> group::r-x    #effective:r--
> group:users:rwx    #effective:r--
> mask::r--
> other::r--
> {code}
> And if you enable this, to fix the bug above, the result is as you would 
> expect:
> {code}
> # file: /test_acl/file_from_shell
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:rw-
> user:user2:rwx    #effective:rw-
> group::r-x    #effective:r--
> group:users:rwx    #effective:rw-
> mask::rw-
> other::r--
> {code}
> If I then create a file over HTTPFS or webHDFS, the behaviour is not the same 
> as above:
> {code}
> # file: /test_acl/default_permissions
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx    #effective:r-x
> user:user2:rwx    #effective:r-x
> group::r-x
> group:users:rwx    #effective:r-x
> mask::r-x
> other::r-x
> {code}
> Notice the mask is set to r-x, and this removes the write permission on the 
> new file.
> As part of HDFS-6962 a new parameter was added to webhdfs 
> 'unmaskedpermission'. By passing it to a webhdfs call, it can result in the 
> same behaviour as when a file is written from the CLI:
> {code}
> curl -i -X PUT -T test.txt --header "Content-Type:application/octet-stream"  
> "http://namenode:50075/webhdfs/v1/test_acl/unmasked__770?op=CREATE&user.name=user1&namenoderpcaddress=namenode:8020&overwrite=false&unmaskedpermission=770"
> # file: /test_acl/unmasked__770
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx
> user:user2:rwx
> group::r-x
> group:users:rwx
> mask::rwx
> other::---
> {code}
> However, this parameter was never ported to HTTPFS.
> This Jira is to replicate the same changes to HTTPFS so this parameter is 
> available there too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13235) DiskBalancer: Update Documentation to add newly added options

2018-03-05 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13235:
--
Description: 
HDFS-13181 added dfs.disk.balancer.plan.valid.interval

HDFS-13178 added skipDateCheck Option

> DiskBalancer: Update Documentation to add newly added options
> -
>
> Key: HDFS-13235
> URL: https://issues.apache.org/jira/browse/HDFS-13235
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDFS-13181 added dfs.disk.balancer.plan.valid.interval
> HDFS-13178 added skipDateCheck Option



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13178) Disk Balancer: Add skipDateCheck option to DiskBalancer Execute command

2018-03-05 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13178:
--
Summary: Disk Balancer: Add skipDateCheck option to DiskBalancer Execute 
command  (was: Disk Balancer: Add a force option to DiskBalancer Execute 
command)

> Disk Balancer: Add skipDateCheck option to DiskBalancer Execute command
> ---
>
> Key: HDFS-13178
> URL: https://issues.apache.org/jira/browse/HDFS-13178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13178.00.patch, HDFS-13178.01.patch, 
> HDFS-13178.02.patch
>
>
>  
> Add a force option to the DiskBalancer execute command, which is used to 
> skip the date check and force-execute the plan.
> This is one of the TODOs for the diskbalancer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13235) DiskBalancer: Update Documentation to add newly added options

2018-03-05 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-13235:
-

 Summary: DiskBalancer: Update Documentation to add newly added 
options
 Key: HDFS-13235
 URL: https://issues.apache.org/jira/browse/HDFS-13235
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387291#comment-16387291
 ] 

genericqa commented on HDFS-13056:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
43s{color} | {color:green} root: The patch generated 0 new + 611 unchanged - 1 
fixed = 611 total (was 612) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  3s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 37s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}124m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}234m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Created] (HDFS-13234) Remove renew configuration instance in ConfiguredFailoverProxyProvider and reduce memory footprint for client

2018-03-05 Thread He Xiaoqiao (JIRA)
He Xiaoqiao created HDFS-13234:
--

 Summary: Remove renew configuration instance in 
ConfiguredFailoverProxyProvider and reduce memory footprint for client
 Key: HDFS-13234
 URL: https://issues.apache.org/jira/browse/HDFS-13234
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs, ha, hdfs-client
Reporter: He Xiaoqiao


The memory footprint of #DFSClient can be considerable in some scenarios, 
since many #Configuration instances are created and occupy a lot of memory (in 
an extreme case we met under HDFS Federation and HA with QJM, with dozens of 
NameNodes, org.apache.hadoop.conf.Configuration occupied over 600MB). I think 
some of these new Configuration instances are unnecessary, e.g. in 
#ConfiguredFailoverProxyProvider initialization.

{code:java}
  public ConfiguredFailoverProxyProvider(Configuration conf, URI uri,
      Class<T> xface, HAProxyFactory<T> factory) {
    this.xface = xface;
    this.conf = new Configuration(conf);
    ...
  }
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12614) FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured

2018-03-05 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387268#comment-16387268
 ] 

Surendra Singh Lilhore commented on HDFS-12614:
---

This issue is applicable to the 2.8.x branch also. Can we backport it?

> FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider 
> configured
> --
>
> Key: HDFS-12614
> URL: https://issues.apache.org/jira/browse/HDFS-12614
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HDFS-12614.01.patch, HDFS-12614.02.patch, 
> HDFS-12614.03.patch, HDFS-12614.04.patch, HDFS-12614.test.01.patch
>
>
> When INodeAttributesProvider is configured, and when resolving path (like 
> "/") and checking for permission, the following code when working on 
> {{pathByNameArr}} throws NullPointerException. 
> {noformat}
>   private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int pathIdx,
>   INode inode, int snapshotId) {
> INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
> if (getAttributesProvider() != null) {
>   String[] elements = new String[pathIdx + 1];
>   for (int i = 0; i < elements.length; i++) {
> elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);  <===
>   }
>   inodeAttrs = getAttributesProvider().getAttributes(elements, 
> inodeAttrs);
> }
> return inodeAttrs;
>   }
> {noformat}
> It looks like for paths like "/", where the components split on the 
> delimiter "/" can be null, the pathByNameArr array can have null elements 
> and can throw an NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls

2018-03-05 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13222:
--
Attachment: HDFS-13222.02.patch

> Update getBlocks method to take minBlockSize in RPC calls
> -
>
> Key: HDFS-13222
> URL: https://issues.apache.org/jira/browse/HDFS-13222
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch, 
> HDFS-13222.02.patch
>
>
>  
> Using the balancer parameter in getBlocks was done in HDFS-9412.
>  
> Pass the Balancer conf value from the Balancer to the NN via getBlocks in 
> each RPC, as [~szetszwo] suggested.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls

2018-03-05 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387267#comment-16387267
 ] 

Bharat Viswanadham commented on HDFS-13222:
---

Thank you, [~szetszwo], for the review.

Attached v02 patch to fix checkstyle issues.

 

> Update getBlocks method to take minBlockSize in RPC calls
> -
>
> Key: HDFS-13222
> URL: https://issues.apache.org/jira/browse/HDFS-13222
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch, 
> HDFS-13222.02.patch
>
>
>  
> Using the balancer parameter in getBlocks was done in HDFS-9412.
>  
> Pass the Balancer conf value from the Balancer to the NN via getBlocks in 
> each RPC, as [~szetszwo] suggested.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13188) Disk Balancer: Support multiple block pools during block move

2018-03-05 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387260#comment-16387260
 ] 

Bharat Viswanadham commented on HDFS-13188:
---

[~elgoiri]

Could you help in committing the patch?

 

> Disk Balancer: Support multiple block pools during block move
> -
>
> Key: HDFS-13188
> URL: https://issues.apache.org/jira/browse/HDFS-13188
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13188.01.patch, HDFS-13188.02.patch, 
> HDFS-13188.03.patch, HDFS-13188.04.patch, HDFS-13188.05.patch
>
>
> During plan execution:
> *Federated setup:*
> When multiple block pools are present, balancing only copies blocks from the 
> first block pool to the destination disk.
> We want to distribute the blocks from all block pools on the source disk to 
> the destination disk during balancing.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-03-05 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387250#comment-16387250
 ] 

maobaolong commented on HDFS-12615:
---

[~elgoiri] Should we implement a MySQL-based StateStore?

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13204) RBF: Optimize name service safe mode icon

2018-03-05 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387249#comment-16387249
 ] 

maobaolong commented on HDFS-13204:
---

[~elgoiri] Okay, maybe we can have more choices for the 
federationdfshealth-router-* icon; we can choose the icon from Bootstrap.

[~liuhongtong] Would you like to show your icon plan?

> RBF: Optimize name service safe mode icon
> -
>
> Key: HDFS-13204
> URL: https://issues.apache.org/jira/browse/HDFS-13204
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Priority: Minor
> Attachments: HDFS-13204.001.patch, image-2018-02-28-18-33-09-972.png, 
> image-2018-02-28-18-33-47-661.png, image-2018-02-28-18-35-35-708.png
>
>
> In the federation health webpage, the safe mode icons of Subclusters and 
> Routers are inconsistent.
> The safe mode icon of Subclusters may mislead users into thinking the name 
> service is under maintenance.
> !image-2018-02-28-18-33-09-972.png!
> The safe mode icon of Routers:
> !image-2018-02-28-18-33-47-661.png!
> In fact, if the name service is in safe mode, users can't perform 
> write-related operations. So I think the safe mode icon in Subclusters 
> should be modified, which may be more reasonable.
> !image-2018-02-28-18-35-35-708.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-05 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387244#comment-16387244
 ] 

maobaolong commented on HDFS-13195:
---

[~bharatviswa] [~wheat9] Please take a look at this patch.

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, HDFS-13195.001.patch
>
>
> Branch-2.7 now supports dfs.datanode.data.dir reconfiguration, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new Configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13226) RBF: We should throw the failure validate and refuse this mount entry

2018-03-05 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387242#comment-16387242
 ] 

maobaolong commented on HDFS-13226:
---

[~elgoiri] If we change the log error to throw an IOException to the user, 
does it make sense?

> RBF: We should throw the failure validate and refuse this mount entry
> -
>
> Key: HDFS-13226
> URL: https://issues.apache.org/jira/browse/HDFS-13226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: RBF
> Fix For: 3.2.0
>
> Attachments: HDFS-13226.001.patch
>
>
> One of the mount entry source path rules is that the source path must start 
> with '/'; somebody didn't follow the rule and executed the following command:
> {code:bash}
> $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/
> {code}
> But the console shows that this entry was added successfully.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-05 Thread wangzhiyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387221#comment-16387221
 ] 

wangzhiyuan commented on HDFS-13233:


[~elgoiri] Thanks for your quick response; I will fix it as you require.

> RBF:getMountPoint doesn't return the correct mount point of the mount table
> ---
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch
>
>
> Method MountTableResolver->getMountPoint will traverse the mount table and 
> return the first mount point that the input path starts with, but this 
> condition is not sufficient.
> Suppose the mount point table contains "/user", "/user/test", and 
> "/user/test1". If the input path is "/user/test111", the returned mount 
> point is "/user/test1", but the correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-05 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387211#comment-16387211
 ] 

Yiqun Lin edited comment on HDFS-13214 at 3/6/18 3:27 AM:
--

Attach the updated patch to update the doc.


was (Author: linyiqun):
Attach the updated to update the doc.

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Router. 
> However, with this configuration on a server node, the Router fails to start 
> with this error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> Then, when the router tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}}, {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-05 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387211#comment-16387211
 ] 

Yiqun Lin commented on HDFS-13214:
--

Attach the updated patch to update the doc.

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Router. 
> However, with this configuration on a server node, the Router fails to start 
> with this error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> Then, when the router tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}}, {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-05 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13214:
-
Attachment: HDFS-13214.004.patch

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Router. 
> However, with this configuration on a server node, the Router fails to start 
> with this error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> Then, when the router tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}}, {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls

2018-03-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387209#comment-16387209
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13222:


[~shv], the 00 patch removed the check from the Balancer (since the NN has 
already checked it). In the 01 patch, the minSize is passed from the Balancer 
to the NN. Please see if you want to take a look.

> Also we do not really distinguish between NN and Balancer configs, right?

We should. We can use a different Balancer conf in each run. If the NN uses 
the Balancer conf, it is harder to change the conf values. E.g., we may try a 
larger minSize in the first run and decrease the value in the following runs.

> Update getBlocks method to take minBlockSize in RPC calls
> -
>
> Key: HDFS-13222
> URL: https://issues.apache.org/jira/browse/HDFS-13222
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch
>
>
>  
> Using the balancer parameter in getBlocks was done in HDFS-9412.
>  
> Pass the Balancer conf value from the Balancer to the NN via getBlocks in 
> each RPC, as [~szetszwo] suggested.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls

2018-03-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387205#comment-16387205
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13222:


Patch looks good.  [~bharatviswa], please address the checkstyle issues.


> Update getBlocks method to take minBlockSize in RPC calls
> -
>
> Key: HDFS-13222
> URL: https://issues.apache.org/jira/browse/HDFS-13222
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch
>
>
>  
> Using the balancer parameter in getBlocks was done in HDFS-9412.
>  
> Pass the Balancer conf value from the Balancer to the NN via getBlocks in 
> each RPC, as [~szetszwo] suggested.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387202#comment-16387202
 ] 

Íñigo Goiri commented on HDFS-13233:


Thanks [~striver.wang] for the report and the fix.
I would probably make the full condition a private function for readability.
Please double-check, as I haven't gone through it carefully, but I would do 
something like:
{code}
while (entry != null && isParentEntry(path, entry.getKey())) {
...

private boolean isParentEntry(final String path, final String parent) {
  if (!path.startsWith(parent)) {
return false;
  }
  if (path.equals(parent)) {
return true;
  }
  return path.charAt(parent.length()) == '/';
}
{code}

I would also add checks to avoid having the catch for 
{{StringIndexOutOfBoundsException}}.

> RBF:getMountPoint doesn't return the correct mount point of the mount table
> ---
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch
>
>
> Method MountTableResolver->getMountPoint will traverse the mount table and 
> return the first mount point that the input path starts with, but this 
> condition is not sufficient.
> Suppose the mount point table contains "/user", "/user/test", and 
> "/user/test1". If the input path is "/user/test111", the returned mount 
> point is "/user/test1", but the correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13008) Ozone: Add DN container open/close state to container report

2018-03-05 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13008:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Ozone: Add DN container open/close state to container report
> 
>
> Key: HDFS-13008
> URL: https://issues.apache.org/jira/browse/HDFS-13008
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDFS-13008-HDFS-7240.001.patch, 
> HDFS-13008-HDFS-7240.002.patch, HDFS-13008-HDFS-7240.003.patch, 
> HDFS-13008-HDFS-7240.004.patch, HDFS-13008-HDFS-7240.005.patch
>
>
> HDFS-12799 added support to allow SCM send closeContainerCommand to DNs. This 
> ticket is opened to add the DN container close state to container report so 
> that SCM container state manager can update state from closing to closed when 
> DN side container is fully closed. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13008) Ozone: Add DN container open/close state to container report

2018-03-05 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387201#comment-16387201
 ] 

Nanda kumar commented on HDFS-13008:


I have committed this to the feature branch. Thanks [~xyao] for the 
contribution.

> Ozone: Add DN container open/close state to container report
> 
>
> Key: HDFS-13008
> URL: https://issues.apache.org/jira/browse/HDFS-13008
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDFS-13008-HDFS-7240.001.patch, 
> HDFS-13008-HDFS-7240.002.patch, HDFS-13008-HDFS-7240.003.patch, 
> HDFS-13008-HDFS-7240.004.patch, HDFS-13008-HDFS-7240.005.patch
>
>
> HDFS-12799 added support to allow SCM send closeContainerCommand to DNs. This 
> ticket is opened to add the DN container close state to container report so 
> that SCM container state manager can update state from closing to closed when 
> DN side container is fully closed. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13008) Ozone: Add DN container open/close state to container report

2018-03-05 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387195#comment-16387195
 ] 

Nanda kumar commented on HDFS-13008:


Thanks [~xyao] for updating the patch. LGTM, +1 on patch v005.
I will commit this shortly.

> Ozone: Add DN container open/close state to container report
> 
>
> Key: HDFS-13008
> URL: https://issues.apache.org/jira/browse/HDFS-13008
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDFS-13008-HDFS-7240.001.patch, 
> HDFS-13008-HDFS-7240.002.patch, HDFS-13008-HDFS-7240.003.patch, 
> HDFS-13008-HDFS-7240.004.patch, HDFS-13008-HDFS-7240.005.patch
>
>
> HDFS-12799 added support to allow SCM send closeContainerCommand to DNs. This 
> ticket is opened to add the DN container close state to container report so 
> that SCM container state manager can update state from closing to closed when 
> DN side container is fully closed. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387192#comment-16387192
 ] 

Íñigo Goiri commented on HDFS-13212:


Thanks [~wuweiwei] for  [^HDFS-13212-003.patch].
For readability in {{TestMountTableResolver}}, I wouldn't reuse entry and map.
Let's see what Yetus says anyway.

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache: the old location cache 
> will never be invalidated until this mount point changes again.
> We need to invalidate the location cache when adding mount table entries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-05 Thread wangzhiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhiyuan updated HDFS-13233:
---
Description: 
Method MountTableResolver->getMountPoint will traverse mount table and return 
the first mount point which the input path starts with, but the condition is 
not sufficient.

Suppose the mount point table contains: "/user" "/user/test" "/user/test1", if 
the input path is "/user/test111", the return mount point is "/user/test1", but 
the correct one should be "/user".

  was:Suppose the mount point table include: "/" "/user"  "/user/test" 
"/user/test1", if the input path is "/user/test111", the return mount point of 
(MountTableResolver->getMountPoint) is "/user/test1", but the correct one 
should be "/user". 


> RBF:getMountPoint doesn't return the correct mount point of the mount table
> ---
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch
>
>
> Method MountTableResolver->getMountPoint will traverse the mount table and 
> return the first mount point that the input path starts with, but this 
> condition is not sufficient.
> Suppose the mount point table contains "/user", "/user/test", and 
> "/user/test1". If the input path is "/user/test111", the returned mount 
> point is "/user/test1", but the correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-05 Thread wangzhiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhiyuan updated HDFS-13233:
---
Summary: RBF:getMountPoint doesn't return the correct mount point of the 
mount table  (was: RBF:getMountPoint return wrong mount point if the input path 
starts with the mount point)

> RBF:getMountPoint doesn't return the correct mount point of the mount table
> ---
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch
>
>
> Suppose the mount point table includes "/", "/user", "/user/test", and 
> "/user/test1". If the input path is "/user/test111", the returned mount 
> point of (MountTableResolver->getMountPoint) is "/user/test1", but the 
> correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue

2018-03-05 Thread Weiwei Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Wu updated HDFS-13212:
-
Attachment: HDFS-13212-003.patch

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache: the old location cache 
> will never be invalidated until this mount point changes again.
> We need to invalidate the location cache when adding mount table entries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13233) RBF:getMountPoint return wrong mount point if the input path starts with the mount point

2018-03-05 Thread wangzhiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhiyuan updated HDFS-13233:
---
Summary: RBF:getMountPoint return wrong mount point if the input path 
starts with the mount point  (was: RBF:getMountPoint return wrong mount point 
if the input path contains the mount point)

> RBF:getMountPoint return wrong mount point if the input path starts with the 
> mount point
> 
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch
>
>
> Suppose the mount point table includes "/", "/user", "/user/test", and 
> "/user/test1". If the input path is "/user/test111", the returned mount 
> point of (MountTableResolver->getMountPoint) is "/user/test1", but the 
> correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13008) Ozone: Add DN container open/close state to container report

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387184#comment-16387184
 ] 

genericqa commented on HDFS-13008:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
24s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:215a942 |
| JIRA Issue | HDFS-13008 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913123/HDFS-13008-HDFS-7240.005.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  shadedclient  findbugs  checkstyle  |
| uname | Linux f9426a832bdd 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 

[jira] [Updated] (HDFS-13233) RBF:getMountPoint return wrong mount point if the input path contains the mount point

2018-03-05 Thread wangzhiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhiyuan updated HDFS-13233:
---
Attachment: HDFS-13233.001.patch

> RBF:getMountPoint return wrong mount point if the input path contains the 
> mount point
> -
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch
>
>
> Suppose the mount point table includes "/", "/user", "/user/test", and 
> "/user/test1". If the input path is "/user/test111", the mount point 
> returned by MountTableResolver->getMountPoint is "/user/test1", but the 
> correct one should be "/user". 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13233) RBF:getMountPoint return wrong mount point if the input path contains the mount point

2018-03-05 Thread wangzhiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhiyuan updated HDFS-13233:
---
Status: Patch Available  (was: Open)

> RBF:getMountPoint return wrong mount point if the input path contains the 
> mount point
> -
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
>
> Suppose the mount point table includes "/", "/user", "/user/test", and 
> "/user/test1". If the input path is "/user/test111", the mount point 
> returned by MountTableResolver->getMountPoint is "/user/test1", but the 
> correct one should be "/user". 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes

2018-03-05 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387171#comment-16387171
 ] 

SammiChen commented on HDFS-11600:
--

The output links from the last build have expired. Triggering the build again. 

> Refactor TestDFSStripedOutputStreamWithFailure test classes
> ---
>
> Key: HDFS-11600
> URL: https://issues.apache.org/jira/browse/HDFS-11600
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Priority: Minor
> Attachments: HDFS-11600-1.patch, HDFS-11600.002.patch, 
> HDFS-11600.003.patch
>
>
> TestDFSStripedOutputStreamWithFailure has a great number of subclasses. The 
> tests are parameterized based on the name of these subclasses.
> Seems like we could parameterize these tests with JUnit and then not need all 
> these separate test classes.
> Another note: the tests will randomly return instead of running the test 
> body. Using {{Assume}} instead would make it clearer in the test output that 
> these tests were skipped.
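A hedged sketch of the suggested refactor (the JUnit 4 annotations are real; 
the class name and parameters here are illustrative, not the actual patch): 
one parameterized class replaces the per-subclass tests, and {{Assume}} 
reports inapplicable combinations as skipped instead of silently returning.
{code:java}
import static org.junit.Assume.assumeTrue;

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Illustrative only: parameterization replaces the subclass-per-case
// pattern, and Assume makes skips visible in the test report.
@RunWith(Parameterized.class)
public class StripedFailureParamTest {
  @Parameters(name = "failurePoint={0}")
  public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] {{0}, {1}, {2}});
  }

  private final int failurePoint;

  public StripedFailureParamTest(int failurePoint) {
    this.failurePoint = failurePoint;
  }

  @Test
  public void testWithFailure() {
    // Visible skip instead of a silent early return.
    assumeTrue("failure point not applicable", failurePoint > 0);
    // ... exercise the striped output stream with a failure injected ...
  }
}
{code}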



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13233) RBF:getMountPoint return wrong mount point if the input path contains the mount point

2018-03-05 Thread wangzhiyuan (JIRA)
wangzhiyuan created HDFS-13233:
--

 Summary: RBF:getMountPoint return wrong mount point if the input 
path contains the mount point
 Key: HDFS-13233
 URL: https://issues.apache.org/jira/browse/HDFS-13233
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.2.0
Reporter: wangzhiyuan


Suppose the mount point table include: "/" "/user"  "/user/test" "/user/test1", 
if the input path is "/user/test111", the return mount point of 
(MountTableResolver->getMountPoint) is "/user/test1", but the correct one 
should be "/user". 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387135#comment-16387135
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13223:


Patch looks good.  We should also reuse the children diff in remove(..) when 
possible.

> Reduce DiffListBySkipList memory usage
> --
>
> Key: HDFS-13223
> URL: https://issues.apache.org/jira/browse/HDFS-13223
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13223.001.patch, HDFS-13223.002.patch
>
>
> There are several ways to reduce memory footprint of DiffListBySkipList.
> - Move maxSkipLevels and skipInterval to DirectoryDiffListFactory.
> - Use an array for skipDiffList instead of List.
> - Do not store the level 0 element in skipDiffList.
> - Do not create new ChildrenDiff for the same value.
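A purely illustrative node layout for the second and third bullets (the real 
DiffListBySkipList internals may differ): keeping the level-0 element in a 
plain field and only the upper skip levels in a fixed array avoids one List 
object per node and the level-0 slot.
{code:java}
// Illustrative only, not the actual DiffListBySkipList code.
class SkipNode<T> {
  T diff;              // the level-0 element, stored directly
  SkipNode<T>[] upper; // skip pointers for levels 1..k; null if none
}
{code}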



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-05 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387106#comment-16387106
 ] 

Íñigo Goiri commented on HDFS-13232:


This together with HDFS-13230 shows a clear hole in the unit testing for the 
connection manager.

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
>
> In the current ConnectionPool.getConnection(), it returns the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387091#comment-16387091
 ] 

genericqa commented on HDFS-13224:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.federation.router.TestRouterHashResolver |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13224 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913115/HDFS-13224.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a8a69f581f69 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 245751f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23306/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23306/testReport/ |
| Max. 

[jira] [Commented] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-05 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387089#comment-16387089
 ] 

Zsolt Venczel commented on HDFS-13176:
--

[~mackrorysd] thanks a lot for taking a look and sharing your thoughts!

In WebHdfsFileSystem.toUrl, at line 609, I added a URL decode attempt in order 
to stay backward compatible with HDFS-4943. [~jerryhe], [~szetszwo] do you 
have any thoughts or concerns about it?

The failing unit tests seem to be unrelated to the change.
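As background, a small sketch of why the semicolon needs escaping in the URL 
path (illustrative only, not the patch; the file name is hypothetical): a raw 
';' historically delimits matrix parameters in a URL path segment, so it must 
travel percent-encoded or the server sees the path truncated at the semicolon.
{code:java}
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative only (requires Java 10+ for the Charset overload).
public class SemicolonPathDemo {
  public static void main(String[] args) {
    String file = "/user/test/semi;colon.txt"; // hypothetical path
    String encoded = URLEncoder.encode(file, StandardCharsets.UTF_8)
        .replace("%2F", "/"); // keep the path separators readable
    System.out.println(encoded); // /user/test/semi%3Bcolon.txt
  }
}
{code}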

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176.01.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-05 Thread Dennis Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387080#comment-16387080
 ] 

Dennis Huo commented on HDFS-13056:
---

Uploaded [^HDFS-13056-branch-2.8.004.patch] and [^HDFS-13056.007.patch] to fix 
the new checkstyle errors

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.poc1.patch, 
> HDFS-13056.001.patch, HDFS-13056.002.patch, HDFS-13056.003.patch, 
> HDFS-13056.003.patch, HDFS-13056.004.patch, HDFS-13056.005.patch, 
> HDFS-13056.006.patch, HDFS-13056.007.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to 
> striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped and replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf
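To make the layout sensitivity concrete, here is a small runnable sketch 
(illustrative only, not the HDFS implementation): an MD5-of-block-MD5s 
aggregate changes when the block size changes, while a CRC over the same byte 
stream does not depend on any chunking.
{code:java}
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.Random;
import java.util.zip.CRC32;

public class ChecksumLayoutDemo {
  // Aggregate in the MD5-of-block-MD5s style: hash each block, then hash
  // the concatenation of the per-block digests.
  static byte[] md5OfBlockMd5s(byte[] data, int blockSize) throws Exception {
    MessageDigest outer = MessageDigest.getInstance("MD5");
    for (int off = 0; off < data.length; off += blockSize) {
      MessageDigest inner = MessageDigest.getInstance("MD5");
      inner.update(data, off, Math.min(blockSize, data.length - off));
      outer.update(inner.digest());
    }
    return outer.digest();
  }

  public static void main(String[] args) throws Exception {
    byte[] data = new byte[1 << 20];
    new Random(42).nextBytes(data);

    // Same bytes, different block sizes: the aggregates differ.
    System.out.println(Arrays.equals(
        md5OfBlockMd5s(data, 128 * 1024),
        md5OfBlockMd5s(data, 256 * 1024))); // false

    // A CRC of the full stream is independent of the block layout.
    CRC32 crc = new CRC32();
    crc.update(data);
    System.out.println(Long.toHexString(crc.getValue()));
  }
}
{code}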



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-05 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Attachment: HDFS-13056.007.patch

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.poc1.patch, 
> HDFS-13056.001.patch, HDFS-13056.002.patch, HDFS-13056.003.patch, 
> HDFS-13056.003.patch, HDFS-13056.004.patch, HDFS-13056.005.patch, 
> HDFS-13056.006.patch, HDFS-13056.007.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to 
> striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped and replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-05 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Attachment: HDFS-13056-branch-2.8.004.patch

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.poc1.patch, 
> HDFS-13056.001.patch, HDFS-13056.002.patch, HDFS-13056.003.patch, 
> HDFS-13056.003.patch, HDFS-13056.004.patch, HDFS-13056.005.patch, 
> HDFS-13056.006.patch, HDFS-13056.007.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to 
> striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped and replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-05 Thread Wei Yan (JIRA)
Wei Yan created HDFS-13232:
--

 Summary: RBF: ConnectionPool should return first usable connection
 Key: HDFS-13232
 URL: https://issues.apache.org/jira/browse/HDFS-13232
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Wei Yan
Assignee: Wei Yan


In the current ConnectionPool.getConnection(), it returns the first active 
connection:
{code:java}
for (int i=0; i<size; i++) {
  int index = (threadIndex + i) % size;
  conn = tmpConnections.get(index);
  if (conn != null && !conn.isUsable()) {
    return conn;
  }
}
{code}
Here "!conn.isUsable()" should be "conn.isUsable()".

[jira] [Commented] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-05 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387012#comment-16387012
 ] 

Íñigo Goiri commented on HDFS-13224:


Thanks [~ywskycn] for the comments.
* {{RANDOM}} should do the same, yes. However, we have been setting it up 
manually and using it as read-only, so it wasn't that relevant. I'll have to 
do some work to fully support RANDOM; I'm not sure how to support writing 
files, though. A sketch of the fan-out check is below.
* I think {{HashFirstResolver}} is using the superclass {{HashResolver}} to 
do the temporary-file extraction.
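A hedged sketch of that fan-out check (the enum values mirror the discussion; 
the actual RouterRpcServer.isPathAll() resolves the destination order from the 
mount entry for a path, so this stand-in is illustrative only):
{code:java}
// Illustrative stand-ins; not the actual RouterRpcServer code.
enum DestinationOrder { HASH, LOCAL, RANDOM, HASH_ALL }

class PathPolicy {
  /** Orders that spread a mount point across every subcluster. */
  static boolean isPathAll(DestinationOrder order) {
    return order == DestinationOrder.HASH_ALL
        || order == DestinationOrder.RANDOM;
  }
}
{code}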

> RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch, 
> HDFS-13224.002.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-05 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386998#comment-16386998
 ] 

Wei Yan commented on HDFS-13224:


Thanks for the patch, [~elgoiri]. Took a quick pass, and the entire design LGTM.

I have some questions:
 * The RouterRpcServer.isPathAll() function currently returns true only if the 
mount point is HASH_ALL. For some others, like RANDOM, it should also return 
true, right?
 * For HashFirstResolver, do we also need to handle the temporary naming 
pattern there?

One minor thing: in MultipleDestinationMountTableResolver.java, the javadoc "It 
has three options to order the locations:" should be updated to four.

 

> RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch, 
> HDFS-13224.002.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13008) Ozone: Add DN container open/close state to container report

2018-03-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13008:
--
Attachment: HDFS-13008-HDFS-7240.005.patch

> Ozone: Add DN container open/close state to container report
> 
>
> Key: HDFS-13008
> URL: https://issues.apache.org/jira/browse/HDFS-13008
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDFS-13008-HDFS-7240.001.patch, 
> HDFS-13008-HDFS-7240.002.patch, HDFS-13008-HDFS-7240.003.patch, 
> HDFS-13008-HDFS-7240.004.patch, HDFS-13008-HDFS-7240.005.patch
>
>
> HDFS-12799 added support to allow SCM to send closeContainerCommand to DNs. 
> This ticket is opened to add the DN container close state to the container 
> report so that the SCM container state manager can update the state from 
> closing to closed when the DN side container is fully closed. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13008) Ozone: Add DN container open/close state to container report

2018-03-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386981#comment-16386981
 ] 

Xiaoyu Yao commented on HDFS-13008:
---

{quote}Since we are already setting {{containerID}} as part of the constructor 
(line:80), we can remove the {{data.setContainerID(protoData.getContainerID())}} 
call.
{quote}
Good catch, [~nandakumar131]; I removed the setter for containerID in patch v5. 

> Ozone: Add DN container open/close state to container report
> 
>
> Key: HDFS-13008
> URL: https://issues.apache.org/jira/browse/HDFS-13008
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDFS-13008-HDFS-7240.001.patch, 
> HDFS-13008-HDFS-7240.002.patch, HDFS-13008-HDFS-7240.003.patch, 
> HDFS-13008-HDFS-7240.004.patch
>
>
> HDFS-12799 added support to allow SCM to send closeContainerCommand to DNs. 
> This ticket is opened to add the DN container close state to the container 
> report so that the SCM container state manager can update the state from 
> closing to closed when the DN side container is fully closed. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386964#comment-16386964
 ] 

genericqa commented on HDFS-13170:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 3 new + 396 unchanged - 7 fixed = 399 total (was 403) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
6s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13170 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913113/HDFS-13170.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ca73eb4c03fd 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 245751f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23307/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23307/testReport/ |
| Max. process+thread count | 670 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23307/console |
| Powered by | Apache Yetus 

[jira] [Updated] (HDFS-12977) Add stateId to RPC headers.

2018-03-05 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-12977:

Attachment: HDFS_12977.trunk.003.patch

> Add stateId to RPC headers.
> ---
>
> Key: HDFS-12977
> URL: https://issues.apache.org/jira/browse/HDFS-12977
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc, namenode
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS_12977.trunk.001.patch, HDFS_12977.trunk.002.patch, 
> HDFS_12977.trunk.003.patch
>
>
> stateId is a new field in the RPC headers of NameNode proto calls.
> stateId is the journal transaction Id, which represents LastSeenId for the 
> clients and LastWrittenId for NameNodes. See more in [reads from Standby 
> design 
> doc|https://issues.apache.org/jira/secure/attachment/12902925/ConsistentReadsFromStandbyNode.pdf].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12977) Add stateId to RPC headers.

2018-03-05 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386960#comment-16386960
 ] 

Plamen Jeliazkov commented on HDFS-12977:
-

Hey Konst,

My logic around (1) had been that I followed the structure of Server/Client 
closely. I viewed the problem as a matter of a getter and a setter that lived 
independently of each other, and so felt two different interfaces were 
necessary. In my latest patch, however, I attempted to merge txidProvider and 
txidObserver into a new interface, CallStateHandler, with a generic. The 
generic accounts for the fact that Client expects an update as a long value 
parameter, whereas Server does not expect anything. The generic also allows us 
to be extensible in the future and add more complex type parameters.

I believe I have now addressed (2) and moved most of the code out of Server, 
except for the transactionId info needed from Call.

For (3) I removed all traces of txid from RPC and only made the changes to 
pass the new CallStateHandler down. The implementations of CallStateHandler 
are FSNamesystem (Server) and DFSClient (Client).

Do you see any issues with the way I deal with Client and DFSClient? Most 
notably, I had to make a static method call in DFSClient instantiation.

We have again shrunk the patch, down to just 30KB.
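A hedged reading of the merged-interface idea (the shape below is guessed from 
this comment alone; the patch's actual CallStateHandler definition may differ):
{code:java}
// Speculative sketch, not the patch's actual definition: one generic
// interface covers both sides, with the server supplying its last-written
// state id and the client observing updates carried back in responses.
interface CallStateHandler<T> {
  long getStateId();            // e.g. LastWrittenId on the NameNode side
  void updateStateId(T update); // e.g. T = Long on the client side
}
{code}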

> Add stateId to RPC headers.
> ---
>
> Key: HDFS-12977
> URL: https://issues.apache.org/jira/browse/HDFS-12977
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc, namenode
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS_12977.trunk.001.patch, HDFS_12977.trunk.002.patch, 
> HDFS_12977.trunk.003.patch
>
>
> stateId is a new field in the RPC headers of NameNode proto calls.
> stateId is the journal transaction Id, which represents LastSeenId for the 
> clients and LastWrittenId for NameNodes. See more in [reads from Standby 
> design 
> doc|https://issues.apache.org/jira/secure/attachment/12902925/ConsistentReadsFromStandbyNode.pdf].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-05 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDFS-13056:


Assignee: Dennis Huo

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> HDFS-13056.005.patch, HDFS-13056.006.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to 
> striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped and replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-05 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386917#comment-16386917
 ] 

Sean Mackrory edited comment on HDFS-13176 at 3/5/18 11:15 PM:
---

At first glance, I love that this is addressing URL-specific weirdness in 
WebHDFS specifically and isn't modifying Path for the whole world. I'll do a 
deeper code review and make sure I do agree with everything, but I strongly 
suspect this is indeed the right way to solve this.

One of the outcomes of this Jira I'd love to see is that we do finally clarify 
which paths are legal paths. ?, %, and \ make me nervous for the potential for 
them to be misinterpreted by related tools or future classes because of their 
role in URLs and Windows paths (which is exactly why : wouldn't work). There 
are a few other characters that I would personally avoid because of their role 
in scripts (I would have similar concerns about ~, *, `, @, !, $), etc, but I'd 
feel more comfortable agreeing to support the following subset of what you're 
currently testing: \"()[]_-=&+;{}#. Anyone else have thoughts along these lines?


was (Author: mackrorysd):
At first glance, I love that this is addressing URL-specific weirdness in 
WebHDFS specifically and isn't modifying Path for the whole world. I'll do a 
deeper code review and make sure I do agree with everything, but I strongly 
suspect this is indeed the right way to solve this.

One of the outcomes of this Jira I'd love to see is that we do finally clarify 
which paths are legal paths. ?, %, and \\ make me nervous for the potential for 
them to be misinterpreted by related tools or future classes because of their 
role in URLs and Windows paths (which is exactly why : wouldn't work). There 
are a few other characters that I would personally avoid because of their role 
in scripts (I would have similar concerns about ~, *, `, @, !, $), etc, but I'd 
feel more comfortable agreeing to support the following subset of what you're 
currently testing: \"()[]_-=&+;{}#. Anyone else have thoughts along these lines?

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176.01.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13148) Unit test for EZ with KMS and Federation

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386937#comment-16386937
 ] 

genericqa commented on HDFS-13148:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 9 new + 0 unchanged - 0 fixed = 9 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}182m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13148 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913096/HDFS-13148.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 719845a73dcc 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 245751f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23304/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23304/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  

[jira] [Commented] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-05 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386917#comment-16386917
 ] 

Sean Mackrory commented on HDFS-13176:
--

At first glance, I love that this is addressing URL-specific weirdness in 
WebHDFS specifically and isn't modifying Path for the whole world. I'll do a 
deeper code review and make sure I do agree with everything, but I strongly 
suspect this is indeed the right way to solve this.

One of the outcomes of this Jira I'd love to see is that we do finally clarify 
which paths are legal paths. ?, %, and \\ make me nervous for the potential for 
them to be misinterpreted by related tools or future classes because of their 
role in URLs and Windows paths (which is exactly why : wouldn't work). There 
are a few other characters that I would personally avoid because of their role 
in scripts (I would have similar concerns about ~, *, `, @, !, $), etc, but I'd 
feel more comfortable agreeing to support the following subset of what you're 
currently testing: \"()[]_-=&+;{}#. Anyone else have thoughts along these lines?

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176.01.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13055) Aggregate usage statistics from datanodes

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386899#comment-16386899
 ] 

genericqa commented on HDFS-13055:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 10s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
1266 unchanged - 2 fixed = 1267 total (was 1268) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13055 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913092/HDFS-13055.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux be73a5c2d137 3.13.0-135-generic #184-Ubuntu SMP 

[jira] [Commented] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386887#comment-16386887
 ] 

Xiao Chen commented on HDFS-13170:
--

bq. checkstyle ...
Sorry I missed the fact that this is due to the existing switch block. Let's 
keep it this way to be more consistent with existing code.

Will have a final pass once pre-commit comes back. Thanks!

> Port webhdfs unmaskedpermission parameter to HTTPFS
> ---
>
> Key: HDFS-13170
> URL: https://issues.apache.org/jira/browse/HDFS-13170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-13170.001.patch, HDFS-13170.002.patch, 
> HDFS-13170.003.patch
>
>
> HDFS-6962 fixed a long-standing issue where default ACLs were not correctly 
> applied to files created from the hadoop shell.
> With this change, if you create a file with default ACLs against the parent 
> directory, with dfs.namenode.posix.acl.inheritance.enabled=false, the result 
> is:
> {code}
> # file: /test_acl/file_from_shell_off
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:r--
> user:user2:rwx    #effective:r--
> group::r-x    #effective:r--
> group:users:rwx    #effective:r--
> mask::r--
> other::r--
> {code}
> And if you enable this, to fix the bug above, the result is as you would 
> expect:
> {code}
> # file: /test_acl/file_from_shell
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:rw-
> user:user2:rwx    #effective:rw-
> group::r-x    #effective:r--
> group:users:rwx    #effective:rw-
> mask::rw-
> other::r--
> {code}
> If I then create a file over HTTPFS or webHDFS, the behaviour is not the same 
> as above:
> {code}
> # file: /test_acl/default_permissions
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx    #effective:r-x
> user:user2:rwx    #effective:r-x
> group::r-x
> group:users:rwx    #effective:r-x
> mask::r-x
> other::r-x
> {code}
> Notice the mask is set to r-x, and this removes the write permission on the 
> new file.
> As part of HDFS-6962 a new parameter, 'unmaskedpermission', was added to 
> webhdfs. Passing it to a webhdfs call gives the same behaviour as when a file 
> is written from the CLI:
> {code}
> curl -i -X PUT -T test.txt --header "Content-Type:application/octet-stream"  
> "http://namenode:50075/webhdfs/v1/test_acl/unmasked__770?op=CREATE=user1=namenode:8020=false=770;
> # file: /test_acl/unmasked__770
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx
> user:user2:rwx
> group::r-x
> group:users:rwx
> mask::rwx
> other::---
> {code}
> However, this parameter was never ported to HTTPFS.
> This Jira is to replicate the same changes to HTTPFS so this parameter is 
> available there too.
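
As a companion to the curl call above, here is a hedged Java sketch of the same webhdfs CREATE request. The class name, host, port, user, and path are placeholders, the query string assumes the usual DataNode-side parameters (the NameNode redirect step is skipped, as the curl above also targets the DataNode directly), and it is an illustration rather than the patch's implementation:

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsUnmaskedCreate {
  public static void main(String[] args) throws Exception {
    // unmaskedpermission supplies the pre-umask mode the server uses when
    // computing the ACL mask, matching the CLI behaviour shown above.
    URL url = new URL("http://namenode:50075/webhdfs/v1/test_acl/unmasked__770"
        + "?op=CREATE&user.name=user1&namenoderpcaddress=namenode:8020"
        + "&overwrite=false&unmaskedpermission=770");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/octet-stream");
    try (OutputStream os = conn.getOutputStream()) {
      os.write("test".getBytes("UTF-8"));
    }
    System.out.println("HTTP " + conn.getResponseCode()); // expect 201 Created
  }
}
{code}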



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13224:
---
Attachment: HDFS-13224.002.patch

> RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch, 
> HDFS-13224.002.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-05 Thread Stephen O'Donnell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-13170:
-
Attachment: HDFS-13170.003.patch

> Port webhdfs unmaskedpermission parameter to HTTPFS
> ---
>
> Key: HDFS-13170
> URL: https://issues.apache.org/jira/browse/HDFS-13170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-13170.001.patch, HDFS-13170.002.patch, 
> HDFS-13170.003.patch
>
>
> HDFS-6962 fixed a long-standing issue where default ACLs were not correctly 
> applied to files created from the hadoop shell.
> With this change, if you create a file with default ACLs against the parent 
> directory, with dfs.namenode.posix.acl.inheritance.enabled=false, the result 
> is:
> {code}
> # file: /test_acl/file_from_shell_off
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:r--
> user:user2:rwx    #effective:r--
> group::r-x    #effective:r--
> group:users:rwx    #effective:r--
> mask::r--
> other::r--
> {code}
> And if you enable this, to fix the bug above, the result is as you would 
> expect:
> {code}
> # file: /test_acl/file_from_shell
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:rw-
> user:user2:rwx    #effective:rw-
> group::r-x    #effective:r--
> group:users:rwx    #effective:rw-
> mask::rw-
> other::r--
> {code}
> If I then create a file over HTTPFS or webHDFS, the behaviour is not the same 
> as above:
> {code}
> # file: /test_acl/default_permissions
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx    #effective:r-x
> user:user2:rwx    #effective:r-x
> group::r-x
> group:users:rwx    #effective:r-x
> mask::r-x
> other::r-x
> {code}
> Notice the mask is set to r-x, and this removes the write permission on the 
> new file.
> As part of HDFS-6962 a new parameter, 'unmaskedpermission', was added to 
> webhdfs. Passing it to a webhdfs call gives the same behaviour as when a file 
> is written from the CLI:
> {code}
> curl -i -X PUT -T test.txt --header "Content-Type:application/octet-stream"  
> "http://namenode:50075/webhdfs/v1/test_acl/unmasked__770?op=CREATE=user1=namenode:8020=false=770;
> # file: /test_acl/unmasked__770
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx
> user:user2:rwx
> group::r-x
> group:users:rwx
> mask::rwx
> other::---
> {code}
> However, this parameter was never ported to HTTPFS.
> This Jira is to replicate the same changes to HTTPFS so this parameter is 
> available there too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386872#comment-16386872
 ] 

genericqa commented on HDFS-13176:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13176 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913095/HDFS-13176.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1479e1d5deb3 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 

[jira] [Commented] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-05 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386867#comment-16386867
 ] 

Stephen O'Donnell commented on HDFS-13170:
--

[~xiaochen] Thanks for the review.

* Syntax of the if statement - I agree; I have changed it to your suggestion. I 
kind of copied this from the webhdfs code, which was a bit lazy of me!
* Double line break - this should be fixed.
* Audit log changes - OK, I have moved the new part to the end.
* For the checkstyle warnings, I think the formatting of the original code is 
broken, and if I fixed the 3 highlighted lines they would not match the 
current formatting. E.g. check line 582 in HttpFSServer.java - it aligns 
perfectly with the rest of the code block, but if I obey the checkstyle rules 
(they want me to remove two spaces) it will not be indented consistently with 
the existing code. The other two lines are the same. I can fix it, but I feel 
it would make things worse - what do you think?

I will upload v3 of the patch for review now.

> Port webhdfs unmaskedpermission parameter to HTTPFS
> ---
>
> Key: HDFS-13170
> URL: https://issues.apache.org/jira/browse/HDFS-13170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-13170.001.patch, HDFS-13170.002.patch
>
>
> HDFS-6962 fixed a long-standing issue where default ACLs were not correctly 
> applied to files created from the hadoop shell.
> With this change, if you create a file with default ACLs against the parent 
> directory, with dfs.namenode.posix.acl.inheritance.enabled=false, the result 
> is:
> {code}
> # file: /test_acl/file_from_shell_off
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:r--
> user:user2:rwx    #effective:r--
> group::r-x    #effective:r--
> group:users:rwx    #effective:r--
> mask::r--
> other::r--
> {code}
> And if you enable this, to fix the bug above, the result is as you would 
> expect:
> {code}
> # file: /test_acl/file_from_shell
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:rw-
> user:user2:rwx    #effective:rw-
> group::r-x    #effective:r--
> group:users:rwx    #effective:rw-
> mask::rw-
> other::r--
> {code}
> If I then create a file over HTTPFS or webHDFS, the behaviour is not the same 
> as above:
> {code}
> # file: /test_acl/default_permissions
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx    #effective:r-x
> user:user2:rwx    #effective:r-x
> group::r-x
> group:users:rwx    #effective:r-x
> mask::r-x
> other::r-x
> {code}
> Notice the mask is set to r-x, and this removes the write permission on the 
> new file.
> As part of HDFS-6962 a new parameter, 'unmaskedpermission', was added to 
> webhdfs. Passing it to a webhdfs call gives the same behaviour as when a file 
> is written from the CLI:
> {code}
> curl -i -X PUT -T test.txt --header "Content-Type:application/octet-stream"  
> "http://namenode:50075/webhdfs/v1/test_acl/unmasked__770?op=CREATE=user1=namenode:8020=false=770;
> # file: /test_acl/unmasked__770
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx
> user:user2:rwx
> group::r-x
> group:users:rwx
> mask::rwx
> other::---
> {code}
> However, this parameter was never ported to HTTPFS.
> This Jira is to replicate the same changes to HTTPFS so this parameter is 
> available there too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13191) Internal buffer-sizing details are inadvertently baked into FileChecksum and BlockGroupChecksum

2018-03-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386395#comment-16386395
 ] 

Ajay Kumar edited comment on HDFS-13191 at 3/5/18 10:11 PM:


 Ya, it seems there is no good solution to this.
{quote}If getData() always allocates a new array, it likely eliminates any 
(perceived) advantage of using DataOutputBuffer in the first place{quote}
Modifying getData() to return only a trimmed copy of the array will add the 
overhead of a new byte-array allocation, although we can minimize this by 
wrapping it in an if condition (create a new copy only when count < length).
{quote}Even though its interface doesn't promise anything about the size of the 
returned byte[], the fact that the checksums are sensitive to it means at least 
some places are apparently (unintentionally) relying on the exact buffer-sizing 
behavior already, so changing it inside DataOutputBuffer directly has a higher 
risk of breaking unexpected system components.{quote}
Ya, but leaving it as it is may also involve the risk of breaking some future 
functionality.
I don't feel strongly about any specific approach, but leaving the bug as it is 
may result in more innocuous-looking but critical bugs somewhere else. At the 
minimum we should add more documentation to warn users about this bug.
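
A minimal sketch of the getData()/getLength() pitfall under discussion. It assumes only the public org.apache.hadoop.io.DataOutputBuffer API; the class name is a placeholder, and the buffer's growth policy is an implementation detail (this issue reports power-of-two rounding) rather than a documented contract:

{code:java}
import java.security.MessageDigest;
import org.apache.hadoop.io.DataOutputBuffer;

public class BufferSizingPitfall {
  public static void main(String[] args) throws Exception {
    DataOutputBuffer buf = new DataOutputBuffer();
    buf.write(new byte[48]); // e.g. three 16-byte per-block MD5s
    // Wrong: getData() returns the whole backing array, which has grown
    // past 48 bytes, so trailing zero padding is digested too.
    byte[] wrong = MessageDigest.getInstance("MD5").digest(buf.getData());
    // Right: digest only the first getLength() valid bytes.
    MessageDigest md = MessageDigest.getInstance("MD5");
    md.update(buf.getData(), 0, buf.getLength());
    byte[] right = md.digest();
    System.out.println(MessageDigest.isEqual(wrong, right)); // false
  }
}
{code}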


was (Author: ajayydv):
 Ya, it seems there is no good solution to this.
{quote}If getData() always allocates a new array, it likely eliminates any 
(perceived) advantage of using DataOutputBuffer in the first place{quote}
Modifying getData() to return only a trimmed copy of the array will add the 
overhead of a new byte-array allocation, although we can minimize this by 
wrapping it in an if condition (create a new copy only when count < length).
{quote}Even though its interface doesn't promise anything about the size of the 
returned byte[], the fact that the checksums are sensitive to it means at least 
some places are apparently (unintentionally) relying on the exact buffer-sizing 
behavior already, so changing it inside DataOutputBuffer directly has a higher 
risk of breaking unexpected system components.{quote}
Ya, but leaving it as it is may also involve the risk of breaking critical 
functionality.

I don't feel strongly about any specific approach, but leaving the bug as it is 
may result in more innocuous-looking but critical bugs somewhere else. At the 
minimum we should add more documentation to warn users about this bug.

> Internal buffer-sizing details are inadvertently baked into FileChecksum and 
> BlockGroupChecksum
> ---
>
> Key: HDFS-13191
> URL: https://issues.apache.org/jira/browse/HDFS-13191
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: Dennis Huo
>Priority: Minor
> Attachments: HDFS-13191.001.patch
>
>
> The org.apache.hadoop.io.DataOutputBuffer is 
> used as an "optimization" in many places to allow a reusable form of 
> ByteArrayOutputStream, but requires the caller to be careful to use 
> getLength() instead of getData().length to determine the number of actually 
> valid bytes to consume.
> At least three places in the path of constructing FileChecksums have 
> incorrect usage of DataOutputBuffer:
> [FileChecksumHelper digesting block 
> MD5s|https://github.com/apache/hadoop/blob/329a4fdd07ab007615f34c8e0e651360f988064d/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java#L239]
> [BlockChecksumHelper digesting striped block MD5s to construct block-group 
> checksum|https://github.com/apache/hadoop/blob/329a4fdd07ab007615f34c8e0e651360f988064d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockChecksumHelper.java#L412]
> [MD5MD5CRC32FileChecksum.getBytes()|https://github.com/apache/hadoop/blob/329a4fdd07ab007615f34c8e0e651360f988064d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32FileChecksum.java#L76]
> The net effect is that FileChecksum consumes exact BlockChecksums if there 
> are 1 or 2 blocks (at 16 and 32 bytes respectively), but at 3 blocks will 
> round up to 64 bytes, effectively returning the same FileChecksum as if there 
> were 4 blocks and the 4th block happened to have an MD5 exactly equal to 
> 0x00...00. Similarly, BlockGroupChecksum will behave as if there is a 
> power-of-2 number of bytes from BlockChecksums in the BlockGroup.
> This appears to have been a latent bug for at least 9 years for FileChecksum 
> (and since inception for the implementation of striped files), and works fine 
> as long as HDFS implementations strictly stick to the same internal buffering 
> semantics.
> However, this also makes the implementation 

[jira] [Commented] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386831#comment-16386831
 ] 

genericqa commented on HDFS-13224:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
25s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 1 
unchanged - 0 fixed = 4 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13224 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913105/HDFS-13224.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a77d9d3f5941 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 245751f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23305/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23305/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23305/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 

[jira] [Commented] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386821#comment-16386821
 ] 

genericqa commented on HDFS-13222:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 627 unchanged - 3 fixed = 630 total (was 630) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestExtendedAcls |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.TestMultiThreadedHflush |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
|   | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
|   | 

[jira] [Updated] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13224:
---
Attachment: HDFS-13224.001.patch

> RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-240) Should HDFS restrict the names used for files?

2018-03-05 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386758#comment-16386758
 ] 

John Zhuge commented on HDFS-240:
-

Unfortunately I don't have time to work on it.

> Should HDFS restrict the names used for files?
> --
>
> Key: HDFS-240
> URL: https://issues.apache.org/jira/browse/HDFS-240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.2
>Reporter: Robert Chansler
>Priority: Major
>
> When reviewing the consequences of HADOOP-6017 (the name system could not 
> start because a file name interpreted as a regex caused a fault), the 
> discussion turned to improving the test set for file system functions by 
> broadening the set of names used for testing. Presently, HDFS allows any name 
> without a slash. _Should the space of names be restricted?_ If most funny 
> names are unintended, maybe the user would benefit from an early error 
> indication. A contrary view is that restricting names is so 20th-century.
> Should we or shouldn't we?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-240) Should HDFS restrict the names used for files?

2018-03-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HDFS-240:
---

Assignee: (was: John Zhuge)

> Should HDFS restrict the names used for files?
> --
>
> Key: HDFS-240
> URL: https://issues.apache.org/jira/browse/HDFS-240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.2
>Reporter: Robert Chansler
>Priority: Major
>
> When reviewing the consequences of HADOOP-6017 (the name system could not 
> start because a file name interpreted as a regex caused a fault), the 
> discussion turned to improving the test set for file system functions by 
> broadening the set of names used for testing. Presently, HDFS allows any name 
> without a slash. _Should the space of names be restricted?_ If most funny 
> names are unintended, maybe the user would benefit from an early error 
> indication. A contrary view is that restricting names is so 20th-century.
> Should we or shouldn't we?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13164) File not closed if streamer fail with DSQuotaExceededException

2018-03-05 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13164:
-
   Resolution: Fixed
Fix Version/s: 2.8.4
   Status: Resolved  (was: Patch Available)

> File not closed if streamer fail with DSQuotaExceededException
> --
>
> Key: HDFS-13164
> URL: https://issues.apache.org/jira/browse/HDFS-13164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-13164.01.patch, HDFS-13164.02.patch, 
> HDFS-13164.branch-2.8.01.patch
>
>
>  This was found during YARN log aggregation but could theoretically happen to 
> any client.
> If the dir's space quota is exceeded, the following would happen when a file 
> is created:
>  - client {{startFile}} rpc to NN, gets a {{DFSOutputStream}}.
>  - writing to the stream would trigger the streamer to {{getAdditionalBlock}} 
> rpc to NN, which would get the DSQuotaExceededException
>  - client closes the stream
>   
>  The fact that this would leave a 0-sized (or whatever size was left in the 
> quota) file in HDFS is beyond the scope of this jira. However, the file would 
> at least be left in open-for-write status (shown in {{fsck -openforwrite}}), 
> and could potentially leak the leaseRenewer too.
> This is because in the close implementation,
>  # {{isClosed}} is first checked, and the close call will be a no-op if 
> {{isClosed == true}}.
>  # {{flushInternal}} checks {{isClosed}}, and throws the exception right away 
> if true.
> {{isClosed}} does this: {{return closed || getStreamer().streamerClosed;}}
> When the disk quota is reached, {{getAdditionalBlock}} will throw when the 
> streamer calls it. Because the streamer runs in a separate thread, at the time 
> the client calls close on the stream, the streamer may or may not have 
> reached the quota exception. If it has, then due to #1, the close call on the 
> stream will be a no-op. If it hasn't, then due to #2 the {{completeFile}} logic 
> will be skipped.
> {code:java}
> protected synchronized void closeImpl() throws IOException {
> if (isClosed()) {
>   IOException e = lastException.getAndSet(null);
>   if (e == null)
> return;
>   else
> throw e;
> }
>   try {
> flushBuffer(); // flush from all upper layers
> ...
> flushInternal(); // flush all data to Datanodes
> // get last block before destroying the streamer
> ExtendedBlock lastBlock = getStreamer().getBlock();
> try (TraceScope ignored =
>dfsClient.getTracer().newScope("completeFile")) {
>completeFile(lastBlock);
> }
>} catch (ClosedChannelException ignored) {
>} finally {
>  closeThreads(true);
>}
>  }
>  {code}
> Log snippets:
> {noformat}
> 2018-02-16 15:59:32,916 DEBUG org.apache.hadoop.hdfs.DFSClient: DataStreamer 
> Quota Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
> of /DIR is exceeded: quota = 200 B = 1.91 MB but diskspace consumed = 
> 404139552 B = 385.42 MB
> at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyDiskspaceQuota(DirectoryWithQuotaFeature.java:149)
> at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:159)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:2124)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1991)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1966)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:463)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3896)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3484)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:686)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:217)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at 

[jira] [Commented] (HDFS-13164) File not closed if streamer fail with DSQuotaExceededException

2018-03-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386742#comment-16386742
 ] 

Xiao Chen commented on HDFS-13164:
--

Committed to branch-2.8. Thanks all.
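
For readers new to the issue, a hedged repro sketch of the scenario in the quoted description below. The class name and path are placeholders, and it assumes a running HDFS with a small space quota already set on /DIR (e.g. via `hdfs dfsadmin -setSpaceQuota`):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class QuotaCloseRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/DIR/file"));
    // Write enough that the streamer's getAdditionalBlock() RPC trips the
    // DSQuotaExceededException on the NameNode.
    out.write(new byte[8 * 1024 * 1024]);
    out.hflush();
    // Before the fix, close() could find the streamer already closed and
    // return as a no-op, or throw from flushInternal() before completeFile();
    // either way the file stays visible in `hdfs fsck -openforwrite`.
    out.close();
  }
}
{code}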

> File not closed if streamer fail with DSQuotaExceededException
> --
>
> Key: HDFS-13164
> URL: https://issues.apache.org/jira/browse/HDFS-13164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-13164.01.patch, HDFS-13164.02.patch, 
> HDFS-13164.branch-2.8.01.patch
>
>
>  This was found during YARN log aggregation but could theoretically happen to 
> any client.
> If the dir's space quota is exceeded, the following would happen when a file 
> is created:
>  - client {{startFile}} rpc to NN, gets a {{DFSOutputStream}}.
>  - writing to the stream would trigger the streamer to {{getAdditionalBlock}} 
> rpc to NN, which would get the DSQuotaExceededException
>  - client closes the stream
>   
>  The fact that this would leave a 0-sized (or whatever size was left in the 
> quota) file in HDFS is beyond the scope of this jira. However, the file would 
> at least be left in open-for-write status (shown in {{fsck -openforwrite}}), 
> and could potentially leak the leaseRenewer too.
> This is because in the close implementation,
>  # {{isClosed}} is first checked, and the close call will be a no-op if 
> {{isClosed == true}}.
>  # {{flushInternal}} checks {{isClosed}}, and throws the exception right away 
> if true.
> {{isClosed}} does this: {{return closed || getStreamer().streamerClosed;}}
> When the disk quota is reached, {{getAdditionalBlock}} will throw when the 
> streamer calls it. Because the streamer runs in a separate thread, at the time 
> the client calls close on the stream, the streamer may or may not have 
> reached the quota exception. If it has, then due to #1, the close call on the 
> stream will be a no-op. If it hasn't, then due to #2 the {{completeFile}} logic 
> will be skipped.
> {code:java}
> protected synchronized void closeImpl() throws IOException {
> if (isClosed()) {
>   IOException e = lastException.getAndSet(null);
>   if (e == null)
> return;
>   else
> throw e;
> }
>   try {
> flushBuffer(); // flush from all upper layers
> ...
> flushInternal(); // flush all data to Datanodes
> // get last block before destroying the streamer
> ExtendedBlock lastBlock = getStreamer().getBlock();
> try (TraceScope ignored =
>dfsClient.getTracer().newScope("completeFile")) {
>completeFile(lastBlock);
> }
>} catch (ClosedChannelException ignored) {
>} finally {
>  closeThreads(true);
>}
>  }
>  {code}
> Log snippets:
> {noformat}
> 2018-02-16 15:59:32,916 DEBUG org.apache.hadoop.hdfs.DFSClient: DataStreamer 
> Quota Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
> of /DIR is exceeded: quota = 200 B = 1.91 MB but diskspace consumed = 
> 404139552 B = 385.42 MB
> at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyDiskspaceQuota(DirectoryWithQuotaFeature.java:149)
> at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:159)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:2124)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1991)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1966)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:463)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3896)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3484)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:686)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:217)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> 

[jira] [Commented] (HDFS-13164) File not closed if streamer fail with DSQuotaExceededException

2018-03-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386739#comment-16386739
 ] 

Xiao Chen commented on HDFS-13164:
--

Test failures look unrelated to the patch. Had a CDH test run that passed.
Given this is an import-only change, I'm committing it.

> File not closed if streamer fail with DSQuotaExceededException
> --
>
> Key: HDFS-13164
> URL: https://issues.apache.org/jira/browse/HDFS-13164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-13164.01.patch, HDFS-13164.02.patch, 
> HDFS-13164.branch-2.8.01.patch
>
>
>  This was found during YARN log aggregation but could theoretically happen to 
> any client.
> If the dir's space quota is exceeded, the following would happen when a file 
> is created:
>  - client {{startFile}} rpc to NN, gets a {{DFSOutputStream}}.
>  - writing to the stream would trigger the streamer to {{getAdditionalBlock}} 
> rpc to NN, which would get the DSQuotaExceededException
>  - client closes the stream
>   
>  The fact that this would leave a 0-sized (or whatever size was left in the 
> quota) file in HDFS is beyond the scope of this jira. However, the file would 
> at least be left in open-for-write status (shown in {{fsck -openforwrite}}), 
> and could potentially leak the leaseRenewer too.
> This is because in the close implementation,
>  # {{isClosed}} is first checked, and the close call will be a no-op if 
> {{isClosed == true}}.
>  # {{flushInternal}} checks {{isClosed}}, and throws the exception right away 
> if true.
> {{isClosed}} does this: {{return closed || getStreamer().streamerClosed;}}
> When the disk quota is reached, {{getAdditionalBlock}} will throw when the 
> streamer calls it. Because the streamer runs in a separate thread, at the time 
> the client calls close on the stream, the streamer may or may not have 
> reached the quota exception. If it has, then due to #1, the close call on the 
> stream will be a no-op. If it hasn't, then due to #2 the {{completeFile}} logic 
> will be skipped.
> {code:java}
> protected synchronized void closeImpl() throws IOException {
> if (isClosed()) {
>   IOException e = lastException.getAndSet(null);
>   if (e == null)
> return;
>   else
> throw e;
> }
>   try {
> flushBuffer(); // flush from all upper layers
> ...
> flushInternal(); // flush all data to Datanodes
> // get last block before destroying the streamer
> ExtendedBlock lastBlock = getStreamer().getBlock();
> try (TraceScope ignored =
>dfsClient.getTracer().newScope("completeFile")) {
>completeFile(lastBlock);
> }
>} catch (ClosedChannelException ignored) {
>} finally {
>  closeThreads(true);
>}
>  }
>  {code}
> Log snippets:
> {noformat}
> 2018-02-16 15:59:32,916 DEBUG org.apache.hadoop.hdfs.DFSClient: DataStreamer 
> Quota Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
> of /DIR is exceeded: quota = 200 B = 1.91 MB but diskspace consumed = 
> 404139552 B = 385.42 MB
> at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyDiskspaceQuota(DirectoryWithQuotaFeature.java:149)
> at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:159)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:2124)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1991)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1966)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:463)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3896)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3484)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:686)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:217)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> 

[jira] [Commented] (HDFS-13164) File not closed if streamer fail with DSQuotaExceededException

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386729#comment-16386729
 ] 

genericqa commented on HDFS-13164:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:33 |
| Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestFileLengthOnClusterRestart |
|   | hadoop.hdfs.TestBlockMissingException |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteRead |
|   | org.apache.hadoop.hdfs.TestHdfsAdmin |
|   | org.apache.hadoop.hdfs.TestFileCreationEmpty |
|   | org.apache.hadoop.hdfs.TestFileCreationClient |
|   | org.apache.hadoop.hdfs.TestSetrepDecreasing |
|   | org.apache.hadoop.hdfs.TestDataTransferKeepalive |
|   | org.apache.hadoop.hdfs.TestFileAppend |
|   | org.apache.hadoop.hdfs.TestFileAppend4 |
|   | org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade |
|   | org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade |
|   | org.apache.hadoop.hdfs.TestReadWhileWriting |
|   | org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter |
|   | org.apache.hadoop.hdfs.TestLease |
|   | org.apache.hadoop.hdfs.TestHDFSServerPorts |
|   | org.apache.hadoop.hdfs.TestDFSUpgrade |
|   | org.apache.hadoop.hdfs.web.TestWebHDFS |
|   | org.apache.hadoop.hdfs.TestAppendSnapshotTruncate |
|   | org.apache.hadoop.hdfs.TestRollingUpgradeRollback |
|   | org.apache.hadoop.hdfs.TestRenameWhileOpen |
|   | 

[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-05 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386710#comment-16386710
 ] 

Rushabh S Shah commented on HDFS-13109:
---

Please ignore my last comment.
bq. However, the path parameter of HdfsAdmin#provisionEncryptionZoneTrash has 
never been fully resolved by itself or the caller from CryptoAdmin.
I thought {{provisionTrash}} was also a {{superuser-only}} command, but it 
doesn't seem to be.
So I agree with the fix in v4 of the patch.
Other than test file changes, I am +1 for the patch.
Sorry for the noise.
Thanks [~hanishakoneru] for the contribution.

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved and it throws the above error.
>  A fully qualified path should be supported by {{crypto}}.
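
A minimal sketch of the qualification step at issue, assuming only the stock 
{{FileSystem}} API (the class name and paths below are illustrative, not the 
actual patch):
{code:java}
// Illustrative only: qualify a user-supplied path against its FileSystem so
// that hdfs://ns1/zone1 and /zone1 end up in the same canonical form.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class QualifyEzPath {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path userPath = new Path("hdfs://ns1/zone1");  // as typed by the user
    FileSystem fs = userPath.getFileSystem(conf);  // fs for that authority
    Path qualified = fs.makeQualified(userPath);   // scheme/authority filled in
    System.out.println(qualified);                 // hdfs://ns1/zone1
  }
}
{code}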



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-05 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386687#comment-16386687
 ] 

Rushabh S Shah commented on HDFS-13109:
---

bq. we will still miss the relative path and path resolution handling
The {{createEncryptionZone}} command is run as the superuser, so you have to 
specify the whole path rather than a relative path.
It doesn't make sense to create an EZ on a superuser-owned directory.

Regarding path resolution, we don't support symlinks on the client side.
If we are considering clusters where symlinks are enabled on the client side, 
then resolving the user-supplied path makes sense (see the sketch below).
[~xyao]: Let me know what you think.
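
For illustration, a hedged sketch of that client-side resolution using the 
stock {{FileSystem#resolvePath}}, which follows symlinks and mount points 
(the path below is made up):
{code:java}
// Sketch only: resolvePath returns a fully qualified path with any symlinks
// or mount points resolved, i.e. the resolution discussed above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ResolveUserPath {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path userPath = new Path("/zone1");        // possibly a symlink
    FileSystem fs = userPath.getFileSystem(conf);
    Path resolved = fs.resolvePath(userPath);  // qualified, links resolved
    System.out.println(resolved);              // e.g. hdfs://ns1/zone1
  }
}
{code}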

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved and it throws the above error.
>  A fully qualified path should be supported by {{crypto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13148) Unit test for EZ with KMS and Federation

2018-03-05 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386677#comment-16386677
 ] 

Hanisha Koneru commented on HDFS-13148:
---

Thanks for the review [~xyao]. Addressed all the comments in patch v02.

I did not fix all the checkstyle warnings, as the link has expired. I will fix 
them after the next Jenkins run.

> Unit test for EZ with KMS and Federation
> 
>
> Key: HDFS-13148
> URL: https://issues.apache.org/jira/browse/HDFS-13148
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13148.001.patch, HDFS-13148.002.patch
>
>
> It would be good to have some unit tests covering KMS and EZ on a federated 
> cluster. We can start with basic EZ operations. For example, create EZs on 
> two namespaces with different keys using one KMS.
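
A rough sketch of what such a test could exercise; the namespace URIs, key 
names, and KMS address are assumptions, and a real unit test would run 
against a MiniDFSCluster rather than live namespaces:
{code:java}
// Hedged sketch, not the HDFS-13148 patch: one EZ per federated namespace,
// each with its own key, against a single shared KMS. Assumes key0/key1
// already exist in the KMS.
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.CreateEncryptionZoneFlag;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class EzPerNamespace {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Both namespaces point at the same KMS instance.
    conf.set("hadoop.security.key.provider.path",
        "kms://http@kms-host:9600/kms");
    String[][] zonesAndKeys = {
        {"hdfs://ns0/ez0", "key0"},
        {"hdfs://ns1/ez1", "key1"}};
    for (String[] zk : zonesAndKeys) {
      Path zone = new Path(zk[0]);
      FileSystem fs = zone.getFileSystem(conf);  // fs for that namespace
      fs.mkdirs(zone);
      HdfsAdmin admin = new HdfsAdmin(fs.getUri(), conf);
      admin.createEncryptionZone(zone, zk[1],
          EnumSet.of(CreateEncryptionZoneFlag.PROVISION_TRASH));
    }
  }
}
{code}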



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13148) Unit test for EZ with KMS and Federation

2018-03-05 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13148:
--
Attachment: HDFS-13148.002.patch

> Unit test for EZ with KMS and Federation
> 
>
> Key: HDFS-13148
> URL: https://issues.apache.org/jira/browse/HDFS-13148
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13148.001.patch, HDFS-13148.002.patch
>
>
> It would be good to have some unit tests covering KMS and EZ on a federated 
> cluster. We can start with basic EZ operations. For example, create EZs on 
> two namespaces with different keys using one KMS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13227) Add a method to calculate cumulative diff over multiple snapshots in DirectoryDiffList

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386663#comment-16386663
 ] 

genericqa commented on HDFS-13227:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 19s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13227 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913029/HDFS-13227.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b1b7c3de8cf5 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8110d6a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23298/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23298/testReport/ |
| Max. 

[jira] [Updated] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-05 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13176:
-
Attachment: HDFS-13176.01.patch

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176.01.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Attached is a patch with a test case that tries to reproduce the problem.
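
As a hedged aside on the mechanics (an assumption about the failure mode, 
not taken from the patch): if an unescaped {{;}} is treated as a delimiter 
somewhere along the request path, percent-encoding it on the client side 
keeps the file name intact. A hypothetical helper:
{code:java}
// Hypothetical helper, not the HDFS-13176 patch: percent-encode ';' so a
// name like /tmp/file;name.txt survives the trip through WebHDFS intact.
public class EscapeSemicolon {
  static String escapeForWebHdfs(String path) {
    return path.replace(";", "%3B");  // ';' is 0x3B in ASCII
  }

  public static void main(String[] args) {
    System.out.println(escapeForWebHdfs("/tmp/file;name.txt"));
    // -> /tmp/file%3Bname.txt
  }
}
{code}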



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-05 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13176:
-
Status: Patch Available  (was: Open)

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Attached is a patch with a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13230) RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns

2018-03-05 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun reassigned HDFS-13230:
---

Assignee: Chao Sun

> RBF: ConnectionManager's cleanup task will compare each pool's own active 
> conns with its total conns
> 
>
> Key: HDFS-13230
> URL: https://issues.apache.org/jira/browse/HDFS-13230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Minor
>
> In the cleanup task:
> {code:java}
> long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
> int total = pool.getNumConnections();
> int active = getNumActiveConnections();
> if (timeSinceLastActive > connectionCleanupPeriodMs ||
> {code}
> the 3rd line should be {{pool.getNumActiveConnections()}}
>  
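
A self-contained sketch of the corrected check; only the switch to 
{{pool.getNumActiveConnections()}} comes from this report, while the 
surrounding condition, constants, and types are illustrative stand-ins:
{code:java}
// Sketch with stand-in types; the threshold and period below are made up.
public class CleanupCheckSketch {
  static final float MIN_ACTIVE_RATIO = 0.5f;    // illustrative threshold
  static final long CLEANUP_PERIOD_MS = 10_000;  // illustrative period

  interface Pool {
    long getLastActiveTime();
    int getNumConnections();
    int getNumActiveConnections();
  }

  static boolean shouldCleanup(Pool pool, long nowMs) {
    long timeSinceLastActive = nowMs - pool.getLastActiveTime();
    int total = pool.getNumConnections();
    int active = pool.getNumActiveConnections();  // the pool's own count
    return timeSinceLastActive > CLEANUP_PERIOD_MS
        || active < MIN_ACTIVE_RATIO * total;
  }
}
{code}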



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386638#comment-16386638
 ] 

genericqa commented on HDFS-13224:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
40s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 1 
unchanged - 0 fixed = 4 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13224 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913089/HDFS-13224.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 06057b053889 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2e1e049 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23301/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23301/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 

[jira] [Commented] (HDFS-13228) Ozone: Add support for rename key within a bucket for rpc client

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386609#comment-16386609
 ] 

genericqa commented on HDFS-13228:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
27s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 13s{color} | {color:orange} root: The patch generated 3 new + 2 unchanged - 
0 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}135m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}248m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:215a942 |
| JIRA Issue | HDFS-13228 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-13055) Aggregate usage statistics from datanodes

2018-03-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386595#comment-16386595
 ] 

Ajay Kumar commented on HDFS-13055:
---

The failed tests pass locally. Added a javadoc comment to the new 
package-info.java in patch v6 to address the checkstyle issue.

> Aggregate usage statistics from datanodes
> -
>
> Key: HDFS-13055
> URL: https://issues.apache.org/jira/browse/HDFS-13055
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13055.001.patch, HDFS-13055.002.patch, 
> HDFS-13055.003.patch, HDFS-13055.004.patch, HDFS-13055.005.patch, 
> HDFS-13055.006.patch
>
>
> We collect a variety of statistics in DataNodes and expose them via JMX. 
> Aggregating some of the high-level statistics we are already collecting in 
> {{DataNodeMetrics}} (like bytesRead, bytesWritten, etc.) over a configurable 
> time window will create a central repository accessible via JMX and the UI.
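
A minimal sketch of the windowed-aggregation idea (the class and its 
structure are assumptions for illustration, not the attached patches):
{code:java}
// Sketch: keep timestamped samples and sum those inside a sliding window.
import java.util.ArrayDeque;
import java.util.Deque;

public class WindowedSum {
  private static final class Sample {
    final long ts;
    final long value;
    Sample(long ts, long value) { this.ts = ts; this.value = value; }
  }

  private final Deque<Sample> samples = new ArrayDeque<>();
  private final long windowMs;

  public WindowedSum(long windowMs) { this.windowMs = windowMs; }

  // Record a counter delta observed at 'now' (ms) and evict expired samples.
  public synchronized void record(long now, long value) {
    samples.addLast(new Sample(now, value));
    while (!samples.isEmpty() && now - samples.peekFirst().ts > windowMs) {
      samples.removeFirst();
    }
  }

  // Sum of all samples still inside the window at 'now'.
  public synchronized long sum(long now) {
    long total = 0;
    for (Sample s : samples) {
      if (now - s.ts <= windowMs) {  // skip samples expired since record()
        total += s.value;
      }
    }
    return total;
  }
}
{code}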



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


