[jira] [Updated] (HDFS-9658) Erasure Coding: allow to use multiple EC policies in striping related tests
[ https://issues.apache.org/jira/browse/HDFS-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HDFS-9658: Summary: Erasure Coding: allow to use multiple EC policies in striping related tests (was: Erasure Coding: make tests compatible with multiple EC policies [Part 1]) > Erasure Coding: allow to use multiple EC policies in striping related tests > --- > > Key: HDFS-9658 > URL: https://issues.apache.org/jira/browse/HDFS-9658 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Rui Li > Attachments: HDFS-9658.1.patch > > > Currently many of the EC-related tests assume we're using the RS-6-3 > schema/policy. There are lots of hard-coded fields as well as computations > based on that. To support multiple EC policies, we need to remove this > hard-coded logic and make the tests more flexible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9646) ErasureCodingWorker may fail when recovering data blocks with length less than the first internal block
[ https://issues.apache.org/jira/browse/HDFS-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105377#comment-15105377 ] Kai Zheng commented on HDFS-9646: - The above logic (in the method comment) may not hold well in striping mode and may need to be refined. > ErasureCodingWorker may fail when recovering data blocks with length less > than the first internal block > --- > > Key: HDFS-9646 > URL: https://issues.apache.org/jira/browse/HDFS-9646 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0 >Reporter: Takuya Fukudome >Assignee: Jing Zhao >Priority: Critical > Attachments: HDFS-9646.000.patch, HDFS-9646.001.patch, > HDFS-9646.002.patch, test-reconstruct-stripe-file.patch > > > This is reported by [~tfukudom]: ErasureCodingWorker may fail with the > following exception when recovering a non-full internal block. > {code} > 2016-01-06 11:14:44,740 WARN datanode.DataNode > (ErasureCodingWorker.java:run(467)) - Failed to recover striped block: > BP-987302662-172.29.4.13-1450757377698:blk_-92233720368 > 54322288_29751 > java.io.IOException: Transfer failed for all targets. > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$ReconstructAndTransferBlock.run(ErasureCodingWorker.java:455) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
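The failing case above involves a block group whose last stripe is only partially filled, so some internal blocks are shorter than the first one. A minimal, self-contained sketch of that length arithmetic (illustrative only; this is not Hadoop's actual {{StripedBlockUtil}}, and the class and method names are made up):

```java
// Computes the length of each internal data block in a striped block group.
// RS-6-3 example: 6 data units, cells striped round-robin across them.
public class InternalBlockLength {
    public static long internalBlockLength(long groupSize, int cellSize,
                                           int dataUnits, int blockIndex) {
        long stripeSize = (long) cellSize * dataUnits;
        long fullStripes = groupSize / stripeSize;   // stripes every block shares
        long remainder = groupSize % stripeSize;     // the partial last stripe
        long base = fullStripes * cellSize;
        // The partial stripe fills cells block by block, so later blocks get less.
        long extra = Math.min(Math.max(remainder - (long) blockIndex * cellSize, 0), cellSize);
        return base + extra;
        // (Parity blocks, not modeled here, are as long as the longest data block.)
    }

    public static void main(String[] args) {
        int cell = 64 * 1024, d = 6;
        long groupSize = 100 * 1024;  // a 100KB group: only blocks 0 and 1 hold data
        for (int i = 0; i < d; i++) {
            System.out.println("block " + i + ": "
                    + internalBlockLength(groupSize, cell, d, i));
        }
    }
}
```

For the 100KB group, block 0 holds a full 64KB cell, block 1 holds the remaining 36KB, and blocks 2-5 are empty, which is exactly the "length less than the first internal block" shape the recovery path must handle.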
[jira] [Commented] (HDFS-9659) EditLogTailerThread to Active Namenode RPC should timeout
[ https://issues.apache.org/jira/browse/HDFS-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105324#comment-15105324 ] Kai Zheng commented on HDFS-9659: - By the way, I suggest using {{RPC#getProxy}} instead. The {{waitForProxy}} call with its ineffective {{Long.MAX_VALUE}} parameter doesn't look clean. > EditLogTailerThread to Active Namenode RPC should timeout > - > > Key: HDFS-9659 > URL: https://issues.apache.org/jira/browse/HDFS-9659 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha, namenode >Affects Versions: 3.0.0 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Critical > Attachments: HDFS-9659.patch > > > The {{EditLogTailerThread}} to Active {{Namenode}} RPC doesn't have a timeout; > it was removed in HDFS-6440. > When disk slowness is injected and system IO is consumed on the active name > node, the nameservice can't switch, because the SNN is not able to stop the > {{EditLogTailerThread}}. > *Thread dump from SNN* > {noformat} > "IPC Server handler 33 on 25000" #118 daemon prio=5 os_prio=0 > tid=0x7f2384409800 nid=0x26c89 in Object.wait() [0x7f2376ac7000] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > at java.lang.Thread.join(Thread.java:1245) > - locked <0x0006d517f538> (a > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread) > at java.lang.Thread.join(Thread.java:1319) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.stop(EditLogTailer.java:183) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopStandbyServices(FSNamesystem.java:1284) > at > org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopStandbyServices(NameNode.java:1852) > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.exitState(StandbyState.java:72) > at > org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:62) > at > 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1684) > {noformat} > *Thread dump for {{EditLogTailerThread}}*, it is stuck in > {{NamenodeProtocolTranslatorPB.rollEditLog()}} rpc call. > {noformat} > "Edit log tailer" #150 prio=5 os_prio=0 tid=0x7f2395569800 nid=0x26cac in > Object.wait() [0x7f2374aa7000] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > at java.lang.Object.wait(Object.java:502) > at org.apache.hadoop.ipc.Client.call(Client.java:1503) > - locked <0x0006d581bb90> (a org.apache.hadoop.ipc.Client$Call) > at org.apache.hadoop.ipc.Client.call(Client.java:1448) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:148) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:301) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:420) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
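The fix direction discussed above is to bound the tailer's RPC with a deadline instead of waiting indefinitely, so the standby can always be stopped. A self-contained illustration of that idea using plain {{java.util.concurrent}} (these are not Hadoop's RPC classes; the names and the "TIMED_OUT" convention are made up for the sketch):

```java
import java.util.concurrent.*;

// Bounds a potentially-hanging call with a timeout so the caller -- like the
// EditLogTailer thread here -- cannot block forever on a slow peer.
public class BoundedCall {
    public static String callWithTimeout(Callable<String> rpc, long timeoutMs)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> f = pool.submit(rpc);
            try {
                return f.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                f.cancel(true);       // interrupt the stuck call
                return "TIMED_OUT";
            }
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A "slow NameNode": sleeps well past the deadline.
        System.out.println(callWithTimeout(() -> { Thread.sleep(5000); return "rolled"; }, 200));
        // A responsive one returns normally.
        System.out.println(callWithTimeout(() -> "rolled", 200));
    }
}
```

With a bounded call, the thread dump above would never show the tailer parked forever inside {{rollEditLog()}} while {{transitionToActive}} waits in {{Thread.join}}.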
[jira] [Commented] (HDFS-9646) ErasureCodingWorker may fail when recovering data blocks with length less than the first internal block
[ https://issues.apache.org/jira/browse/HDFS-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105368#comment-15105368 ] Kai Zheng commented on HDFS-9646: - The change of {{reportCheckSumFailure}} obviously corrected a coding error: the corrupted map should be iterated over and processed in full. Good catch! {code} /** * DFSInputStream reports checksum failure. * Case I : client has tried multiple data nodes and at least one of the * attempts has succeeded. We report the other failures as corrupted block to * namenode. * Case II: client has tried out all data nodes, but all failed. We * only report if the total number of replica is 1. We do not * report otherwise since this maybe due to the client is a handicapped client * (who can not read). * @param corruptedBlockMap map of corrupted blocks * @param dataNodeCount number of data nodes who contains the block replicas */ void reportCheckSumFailure(Map<ExtendedBlock, Set<DatanodeInfo>> corruptedBlockMap, int dataNodeCount) {code} Looking at the comment and the method signature, it's somewhat confusing. The map may contain multiple blocks to report (or not), and by the logic above each block would seem to need its own {{dataNodeCount}} value to decide whether to report it. However, only a single dataNodeCount value is passed in. Looking at the call site, the location count of the current block is used. I'm not sure whether this is related to the reported issue, or whether it is better handled here, though. 
> ErasureCodingWorker may fail when recovering data blocks with length less > than the first internal block > --- > > Key: HDFS-9646 > URL: https://issues.apache.org/jira/browse/HDFS-9646 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0 >Reporter: Takuya Fukudome >Assignee: Jing Zhao >Priority: Critical > Attachments: HDFS-9646.000.patch, HDFS-9646.001.patch, > HDFS-9646.002.patch, test-reconstruct-stripe-file.patch > > > This is reported by [~tfukudom]: ErasureCodingWorker may fail with the > following exception when recovering a non-full internal block. > {code} > 2016-01-06 11:14:44,740 WARN datanode.DataNode > (ErasureCodingWorker.java:run(467)) - Failed to recover striped block: > BP-987302662-172.29.4.13-1450757377698:blk_-92233720368 > 54322288_29751 > java.io.IOException: Transfer failed for all targets. > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$ReconstructAndTransferBlock.run(ErasureCodingWorker.java:455) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
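The corrected behavior described above - iterate every entry of the corrupted map and apply the report-or-not rule per block - can be sketched with plain collections standing in for Hadoop's {{ExtendedBlock}}/{{DatanodeInfo}} types. The rule encoded here is my reading of the quoted javadoc (Case I / Case II), not the exact patch:

```java
import java.util.*;

public class ChecksumReport {
    // corrupted: block -> datanodes whose read failed; replicaCount: block -> total replicas.
    public static List<String> blocksToReport(Map<String, Set<String>> corrupted,
                                              Map<String, Integer> replicaCount) {
        List<String> report = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : corrupted.entrySet()) { // every entry, not just one
            int total = replicaCount.getOrDefault(e.getKey(), 0);
            int failed = e.getValue().size();
            // Case I: some replica was readable -> report the failed ones.
            // Case II: all replicas failed -> report only the single-replica case.
            if (failed < total || total == 1) {
                report.add(e.getKey());
            }
        }
        return report;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> corrupted = new LinkedHashMap<>();
        corrupted.put("blk_1", Set.of("dn1"));               // 1 of 3 failed -> report
        corrupted.put("blk_2", Set.of("dn1", "dn2", "dn3")); // all 3 failed -> skip
        System.out.println(blocksToReport(corrupted, Map.of("blk_1", 3, "blk_2", 3)));
    }
}
```

This also makes Kai's objection concrete: a correct per-block decision needs a per-block replica count, whereas the real method receives only one {{dataNodeCount}} for the whole map.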
[jira] [Commented] (HDFS-9658) Erasure Coding: make tests compatible with multiple EC policies [Part 1]
[ https://issues.apache.org/jira/browse/HDFS-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105366#comment-15105366 ] Hadoop QA commented on HDFS-9658: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 16s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 2s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 165m 38s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.TestErasureCodeBenchmarkThroughput | | | hadoop.hdfs.security.TestDelegationTokenForProxyUser | | | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot | | JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.server.namenode.TestINodeFile | | | hadoop.hdfs.server.namenode.TestFileTruncate | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12782862/HDFS-9658.1.patch | | JIRA Issue | HDFS-9658 | | Optional Tests | asflicense
[jira] [Commented] (HDFS-8430) Erasure coding: compute file checksum for stripe files
[ https://issues.apache.org/jira/browse/HDFS-8430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105005#comment-15105005 ] Kai Zheng commented on HDFS-8430: - Status update FileSystem: * Added a new API {{getFileChecksum(String algorithm)}}, similar to the existing old API {{getFileChecksum}}, covering either all of a file's data or a range. * Added a new API {{supportChecksumAlgorithm(String algorithm)}}. Data transfer protocol: * Added a new protocol method {{blockGroupChecksum(StripedBlockInfo blockGroupInfo, int mode, BlockToken token)}} to calculate the MD5 aggregation result for a striping block group on the DataNode side, for both the old and new APIs. * Mode 1, for the old API, simply sums all the block checksum data in the group one by one, as if they were replicated blocks. * Mode 2, for the new API, divides and sums all the block checksum data in a striping/cell-aware manner. * In both modes, if data blocks are missing, they are recovered on demand and their block checksum data recomputed; nothing is stored, and the recovered data is discarded after use. The recovery logic shares the existing code in {{ErasureCodingWorker}} as much as possible via refactoring. * Added a new protocol method {{rawBlockChecksum()}} to retrieve the whole raw block checksum or CRC32 data. For simplicity it fetches all the data in one pass; multiple passes are to be considered. This is for the new API, because a block group checksum computer needs to gather all the block checksum data in the group in one place, so it can reorganize it into data stripes and compute the block group checksum the way contiguous blocks do. On the client side: * Introduced {{ReplicatedFileChecksumComputer1}}, {{ReplicatedFileChecksumComputer2}}, {{StripedFileChecksumComputer1}} and {{StripedFileChecksumComputer2}}, sharing code where possible and refactoring the related client-side code. * ReplicatedFileChecksumComputer1 is for the old API and replicated files, refactoring and reusing existing logic. * ReplicatedFileChecksumComputer2 is for the new API and replicated files, similar to ReplicatedFileChecksumComputer1 but aware of cells. The blocks in question must divide exactly by the cell size; otherwise an algorithm-not-supported exception is raised (as for cell64k-like algorithms). * StripedFileChecksumComputer1 is for the old API, summing all the block group checksum data together; for each block group it calls blockGroupChecksum using mode 1. * StripedFileChecksumComputer2 is for the new API, summing all the block group checksum data together; for each block group it calls blockGroupChecksum using mode 2. On the DataNode side: * Introduced {{BlockChecksumComputer}}, {{BlockGroupChecksumComputer1}} and {{BlockGroupChecksumComputer2}}, sharing code where possible and refactoring the related DataNode-side code. * BlockChecksumComputer is for the old API and replicated blocks, refactoring and reusing existing logic. * BlockGroupChecksumComputer1 is for the old API, summing all the block checksum data together in the group; for each block it calls the existing {{blockChecksum()}} method in the data transfer protocol. * BlockGroupChecksumComputer2 is for the new API, summing all the stripe checksum data together in the group; for each block it calls the new method {{rawBlockChecksum()}} in the data transfer protocol. DistCp: * TODO; it will use the two new APIs to checksum and compare the source and target files. The code is still messy and leaves many blanks; I will attach a large patch for a first look once the two APIs work as expected. The work breaks down into many pieces: the feature looks small but grows big in implementation. I've very possibly missed some points; thanks for comments and suggestions, as always. 
> Erasure coding: compute file checksum for stripe files > -- > > Key: HDFS-8430 > URL: https://issues.apache.org/jira/browse/HDFS-8430 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7285 >Reporter: Walter Su >Assignee: Kai Zheng > Attachments: HDFS-8430-poc1.patch > > > HADOOP-3981 introduces a distributed file checksum algorithm. It's designed > for replicated block. > {{DFSClient.getFileChecksum()}} need some updates, so it can work for striped > block group. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
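The "summing the per-block checksum data" idea in mode 1 above amounts to digesting the per-block checksum bytes in order, which is also why the striped case needs a cell-aware mode 2: order and grouping change the result. A toy sketch of the aggregation step (illustrative names; this is not Hadoop's MD5-of-MD5 implementation, only the composition idea):

```java
import java.security.MessageDigest;

public class ComposedChecksum {
    // File-level checksum = MD5 over the concatenated per-block checksum bytes.
    public static byte[] fileChecksum(byte[][] blockChecksums) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        for (byte[] bc : blockChecksums) {
            md5.update(bc);   // aggregate block by block, in order
        }
        return md5.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[][] inOrder = { {1, 2}, {3, 4} };
        byte[][] reordered = { {3, 4}, {1, 2} };
        // Order matters: reordering the per-block inputs changes the digest,
        // so a striped group must present its data in logical (stripe) order
        // to match what a contiguous layout would produce.
        System.out.println(java.util.Arrays.equals(
                fileChecksum(inOrder), fileChecksum(reordered)));
    }
}
```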
[jira] [Commented] (HDFS-9038) DFS reserved space is erroneously counted towards non-DFS used.
[ https://issues.apache.org/jira/browse/HDFS-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105047#comment-15105047 ] Vinayakumar B commented on HDFS-9038: - Ping. Any update on this guys? > DFS reserved space is erroneously counted towards non-DFS used. > --- > > Key: HDFS-9038 > URL: https://issues.apache.org/jira/browse/HDFS-9038 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.1 >Reporter: Chris Nauroth >Assignee: Brahma Reddy Battula > Attachments: GetFree.java, HDFS-9038-002.patch, HDFS-9038-003.patch, > HDFS-9038-004.patch, HDFS-9038-005.patch, HDFS-9038-006.patch, > HDFS-9038-007.patch, HDFS-9038.patch > > > HDFS-5215 changed the DataNode volume available space calculation to consider > the reserved space held by the {{dfs.datanode.du.reserved}} configuration > property. As a side effect, reserved space is now counted towards non-DFS > used. I don't believe it was intentional to change the definition of non-DFS > used. This issue proposes restoring the prior behavior: do not count > reserved space towards non-DFS used. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9425) Expose number of blocks per volume as a metric
[ https://issues.apache.org/jira/browse/HDFS-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105072#comment-15105072 ] Vinayakumar B commented on HDFS-9425: - Patch looks good. I have some comments. 1. We need to consider blocks which are saved as part of lazy persist. You can do this in the following way: a. Add a method {{FSVolumeImpl#incDfsUsedAndNumBlocks(..)}}. b. Inside it, call both {{FSVolumeImpl#incDfsUsed(..)}} and {{BP#incrNumblocks()}}. c. Make sure {{FSVolumeImpl#incDfsUsedAndNumBlocks(..)}} is called instead of {{FSVolumeImpl#incDfsUsed(..)}} in {{FsDatasetImpl#onCompleteLazyPersist(..)}}. 2. Nit: {{testDataNodeMXBeaNBlockCount()}}: make the camel case correct. > Expose number of blocks per volume as a metric > -- > > Key: HDFS-9425 > URL: https://issues.apache.org/jira/browse/HDFS-9425 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-9425.patch > > > It will be helpful for users to know the usage in number of blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
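The review steps above can be sketched with a stand-in volume class (not the real {{FsVolumeImpl}}; the field and method names below are assumptions for illustration): one combined method updates both the used-bytes counter and the per-volume block count, so a caller like the lazy-persist path cannot update one and miss the other:

```java
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for a DataNode volume exposing a blocks-per-volume metric.
public class Volume {
    private final AtomicLong dfsUsed = new AtomicLong();
    private final AtomicLong numBlocks = new AtomicLong();

    public void incDfsUsed(long bytes) { dfsUsed.addAndGet(bytes); }

    // Comments 1a/1b above: combine both updates behind one entry point.
    public void incDfsUsedAndNumBlocks(long bytes) {
        incDfsUsed(bytes);
        numBlocks.incrementAndGet();
    }

    public long getDfsUsed() { return dfsUsed.get(); }
    public long getNumBlocks() { return numBlocks.get(); }

    public static void main(String[] args) {
        Volume v = new Volume();
        v.incDfsUsedAndNumBlocks(1024);  // e.g. a lazy-persisted replica (comment 1c)
        v.incDfsUsedAndNumBlocks(2048);
        System.out.println(v.getDfsUsed() + " bytes, " + v.getNumBlocks() + " blocks");
    }
}
```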
[jira] [Updated] (HDFS-9608) Disk IO imbalance in HDFS with heterogeneous storages
[ https://issues.apache.org/jira/browse/HDFS-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zhou updated HDFS-9608: --- Attachment: HDFS-9608.05.patch Fix some test failure issues. > Disk IO imbalance in HDFS with heterogeneous storages > - > > Key: HDFS-9608 > URL: https://issues.apache.org/jira/browse/HDFS-9608 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Wei Zhou >Assignee: Wei Zhou > Attachments: HDFS-9608.01.patch, HDFS-9608.02.patch, > HDFS-9608.03.patch, HDFS-9608.04.patch, HDFS-9608.05.patch > > > Currently RoundRobinVolumeChoosingPolicy uses a shared index to choose volumes > in HDFS with heterogeneous storages; this leads to a non-RR choosing mode for > certain types of storage. > Besides, it uses a shared lock for synchronization, which limits the > concurrency of the volume choosing process. Volume-choosing threads operating > on different storage types should be able to run concurrently. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
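The per-storage-type round-robin idea in the description above can be sketched as follows (illustrative only, not the actual {{RoundRobinVolumeChoosingPolicy}} patch): keeping one cursor per storage type removes both the shared index and the need for one lock across all types:

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// One independent round-robin cursor per storage type, so e.g. DISK and SSD
// choices never perturb each other and need no shared lock.
public class PerTypeRoundRobin {
    private final Map<String, AtomicInteger> cursors = new ConcurrentHashMap<>();

    public String choose(String storageType, List<String> volumes) {
        AtomicInteger c = cursors.computeIfAbsent(storageType, t -> new AtomicInteger());
        int i = Math.floorMod(c.getAndIncrement(), volumes.size());
        return volumes.get(i);
    }

    public static void main(String[] args) {
        PerTypeRoundRobin p = new PerTypeRoundRobin();
        List<String> disks = Arrays.asList("d0", "d1");
        List<String> ssds = Arrays.asList("s0");
        // DISK cycles d0, d1, d0 ... regardless of interleaved SSD picks.
        System.out.println(p.choose("DISK", disks) + " " + p.choose("SSD", ssds)
                + " " + p.choose("DISK", disks) + " " + p.choose("DISK", disks));
    }
}
```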
[jira] [Updated] (HDFS-9653) Expose the number of blocks pending deletion through dfsadmin report command
[ https://issues.apache.org/jira/browse/HDFS-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-9653: -- Status: Patch Available (was: In Progress) > Expose the number of blocks pending deletion through dfsadmin report command > > > Key: HDFS-9653 > URL: https://issues.apache.org/jira/browse/HDFS-9653 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Affects Versions: 2.7.1 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-9653.001.patch > > > HDFS-5986 adds *Number of Blocks Pending Deletion* on the namenode UI and JMX; > this proposes to expose it from hdfs dfsadmin -report as well. This is useful > when a hadoop admin is not able to access the UI (e.g. on cloud); he/she can > directly use the command to retrieve this information. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HDFS-9653) Expose the number of blocks pending deletion through dfsadmin report command
[ https://issues.apache.org/jira/browse/HDFS-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-9653 started by Weiwei Yang. - > Expose the number of blocks pending deletion through dfsadmin report command > > > Key: HDFS-9653 > URL: https://issues.apache.org/jira/browse/HDFS-9653 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Affects Versions: 2.7.1 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-9653.001.patch > > > HDFS-5986 adds *Number of Blocks Pending Deletion* on the namenode UI and JMX; > this proposes to expose it from hdfs dfsadmin -report as well. This is useful > when a hadoop admin is not able to access the UI (e.g. on cloud); he/she can > directly use the command to retrieve this information. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9656) Hadoop-tools jars should be included in the classpath of hadoop command
[ https://issues.apache.org/jira/browse/HDFS-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liu Shaohui updated HDFS-9656: -- Attachment: HDFS-9656-v1.patch Add Hadoop-tools jars to the classpath of hadoop commands. > Hadoop-tools jars should be included in the classpath of hadoop command > --- > > Key: HDFS-9656 > URL: https://issues.apache.org/jira/browse/HDFS-9656 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Fix For: 3.0.0 > > Attachments: HDFS-9656-v1.patch > > > Currently, jars under the Hadoop-tools dir are not included in the classpath > of the hadoop command, so commands involving the wasb or s3 file systems fail. > {quote} > $ ./hdfs dfs -ls wasb://d...@demo.blob.core.windows.net/ > ls: No FileSystem for scheme: wasb > {quote} > A simple solution is to add those jars into the classpath of the commands. > Suggestions are welcomed~ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9629) Update the footer of Web UI to show year 2016
[ https://issues.apache.org/jira/browse/HDFS-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105050#comment-15105050 ] Brahma Reddy Battula commented on HDFS-9629: [~xiaochen] thanks for working on this. IMHO, it's better to keep the hard-coded value. > Update the footer of Web UI to show year 2016 > - > > Key: HDFS-9629 > URL: https://issues.apache.org/jira/browse/HDFS-9629 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9629.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9656) Hadoop-tools jars should be included in the classpath of hadoop command
Liu Shaohui created HDFS-9656: - Summary: Hadoop-tools jars should be included in the classpath of hadoop command Key: HDFS-9656 URL: https://issues.apache.org/jira/browse/HDFS-9656 Project: Hadoop HDFS Issue Type: Bug Reporter: Liu Shaohui Assignee: Liu Shaohui Fix For: 3.0.0 Currently, jars under the Hadoop-tools dir are not included in the classpath of the hadoop command, so commands involving the wasb or s3 file systems fail. {quote} $ ./hdfs dfs -ls wasb://d...@demo.blob.core.windows.net/ ls: No FileSystem for scheme: wasb {quote} A simple solution is to add those jars into the classpath of the commands. Suggestions are welcomed~ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9656) Hadoop-tools jars should be included in the classpath of hadoop command
[ https://issues.apache.org/jira/browse/HDFS-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15104987#comment-15104987 ] Liu Shaohui commented on HDFS-9656: --- [~cnauroth] [~chuanliu] Could you help review this patch? Thanks~ > Hadoop-tools jars should be included in the classpath of hadoop command > --- > > Key: HDFS-9656 > URL: https://issues.apache.org/jira/browse/HDFS-9656 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Fix For: 3.0.0 > > Attachments: HDFS-9656-v1.patch > > > Currently, jars under the Hadoop-tools dir are not included in the classpath > of the hadoop command, so commands involving the wasb or s3 file systems fail. > {quote} > $ ./hdfs dfs -ls wasb://d...@demo.blob.core.windows.net/ > ls: No FileSystem for scheme: wasb > {quote} > A simple solution is to add those jars into the classpath of the commands. > Suggestions are welcomed~ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HDFS-9152) Get input/output error while copying 800 small files to NFS Gateway mount point
[ https://issues.apache.org/jira/browse/HDFS-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-9152 started by Weiwei Yang. - > Get input/output error while copying 800 small files to NFS Gateway mount > point > > > Key: HDFS-9152 > URL: https://issues.apache.org/jira/browse/HDFS-9152 > Project: Hadoop HDFS > Issue Type: Bug > Components: nfs >Affects Versions: 2.7.1 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: nfsgateway > Attachments: DNErrors.log, NNErrors.log > > > We have around *800 3-5K* files on the local file system and the nfs gateway > mounted on */hdfs/*. When we tried to copy these files to HDFS with > *cp ~/userdata/* /hdfs/user/cqdemo/demo3.data/*, > most of the files failed with: > cp: writing `/hdfs/user/cqdemo/demo3.data/TRAFF_201408011220.csv': > Input/output error > cp: writing `/hdfs/user/cqdemo/demo3.data/TRAFF_201408011221.csv': > Input/output error > cp: writing `/hdfs/user/cqdemo/demo3.data/TRAFF_201408011222.csv': > Input/output error > For the same set of files, copying with the hadoop dfs -put command works > fine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9630) DistCp minor refactoring and clean up
[ https://issues.apache.org/jira/browse/HDFS-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105030#comment-15105030 ] Kai Zheng commented on HDFS-9630: - Thanks [~zhz] for the help update, review and commit! > DistCp minor refactoring and clean up > - > > Key: HDFS-9630 > URL: https://issues.apache.org/jira/browse/HDFS-9630 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp >Affects Versions: 2.7.1 >Reporter: Kai Zheng >Assignee: Kai Zheng >Priority: Minor > Fix For: 2.8.0 > > Attachments: HDFS-9630-v1.patch, HDFS-9630-v2.patch > > > While working on HDFS-9613, it was found there are various checking style > issues and minor things to clean up in {{DistCp}}. Better to handle them > separately so the fix can be in earlier. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9653) Expose the number of blocks pending deletion through dfsadmin report command
[ https://issues.apache.org/jira/browse/HDFS-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-9653: -- Description: HDFS-5986 adds *Number of Blocks Pending Deletion* on the namenode UI and JMX; this proposes to expose it from hdfs dfsadmin -report as well. This is useful when a hadoop admin is not able to access the UI (e.g. on cloud); he/she can directly use the command to retrieve this information. (was: HDFS-5986 adds *Number of Blocks Pending Deletion* on namenode UI and JMX, propose to expose this from hdfs dfsadmin -report as well) > Expose the number of blocks pending deletion through dfsadmin report command > > > Key: HDFS-9653 > URL: https://issues.apache.org/jira/browse/HDFS-9653 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Affects Versions: 2.7.1 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > > HDFS-5986 adds *Number of Blocks Pending Deletion* on the namenode UI and JMX; > this proposes to expose it from hdfs dfsadmin -report as well. This is useful > when a hadoop admin is not able to access the UI (e.g. on cloud); he/she can > directly use the command to retrieve this information. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9653) Expose the number of blocks pending deletion through dfsadmin report command
[ https://issues.apache.org/jira/browse/HDFS-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-9653: -- Attachment: HDFS-9653.001.patch Submitted the patch; it is a pretty straightforward change. Please kindly help review. Thanks > Expose the number of blocks pending deletion through dfsadmin report command > > > Key: HDFS-9653 > URL: https://issues.apache.org/jira/browse/HDFS-9653 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Affects Versions: 2.7.1 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-9653.001.patch > > > HDFS-5986 adds *Number of Blocks Pending Deletion* on the namenode UI and JMX; > this proposes to expose it from hdfs dfsadmin -report as well. This is useful > when a hadoop admin is not able to access the UI (e.g. on cloud); he/she can > directly use the command to retrieve this information. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9657) Schedule EC tasks at proper time to reduce the impact of recovery traffic
Li Bo created HDFS-9657: --- Summary: Schedule EC tasks at proper time to reduce the impact of recovery traffic Key: HDFS-9657 URL: https://issues.apache.org/jira/browse/HDFS-9657 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Li Bo Assignee: Li Bo The EC recovery tasks consume a lot of network bandwidth and disk I/O. Recovering a corrupt block requires transferring 6 blocks, hence creating a 6X overhead in network bandwidth and disk I/O. When a datanode fails, recovering all the blocks on that datanode may use up the network bandwidth. We need to start a recovery task at a proper time in order to reduce the impact on the system. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
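The 6X overhead above is easy to quantify: reconstructing one lost RS(6,3) block reads 6 surviving blocks, so recovering everything stored on a failed DataNode multiplies its bytes by roughly the data-unit count. A back-of-envelope sketch (illustrative numbers, not a measurement):

```java
// Rough cluster-wide read traffic generated by EC reconstruction.
public class RecoveryTraffic {
    public static long recoveryReadBytes(long lostBytes, int dataUnits) {
        // Each lost byte requires reading dataUnits surviving bytes.
        return lostBytes * dataUnits;
    }

    public static void main(String[] args) {
        long tb = 1024L * 1024 * 1024 * 1024;
        long lost = 4 * tb;  // suppose a DataNode holding 4TB of EC data fails
        System.out.println(recoveryReadBytes(lost, 6) / tb + " TB read");
    }
}
```

With 4TB lost and RS(6,3), about 24TB must be read off the surviving nodes, which is why the issue argues the scheduler should pick the recovery start time carefully.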
[jira] [Updated] (HDFS-9629) Update the footer of Web UI to show year 2016
[ https://issues.apache.org/jira/browse/HDFS-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-9629: Attachment: HDFS-9629.02.patch > Update the footer of Web UI to show year 2016 > - > > Key: HDFS-9629 > URL: https://issues.apache.org/jira/browse/HDFS-9629 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9629.01.patch, HDFS-9629.02.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9645) DiskBalancer : Add Query RPC
[ https://issues.apache.org/jira/browse/HDFS-9645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9645: --- Attachment: HDFS-9645-HDFS-1312.002.patch bq. I have a question: we set the result as long in WorkStatus, while use optional uint32 in protobuf. Is it on purpose? [~liuml07] Thanks for the comment. I had reasoned that, to use all 32 bits of an unsigned int32, a java int would not be the right choice since only 31 of its bits are usable for non-negative values. So I chose the next logically higher data type that can hold all values correctly. But I see that the protoc compiler thinks quite differently and generates an int in java, with 31 usable bits, instead of using a larger type. I also found a thread in the protoc groups which asks exactly the same question: https://groups.google.com/forum/#!topic/protobuf/V4iPMgsoEtI . This patch changes long to int in the java code. > DiskBalancer : Add Query RPC > > > Key: HDFS-9645 > URL: https://issues.apache.org/jira/browse/HDFS-9645 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > Attachments: HDFS-9645-HDFS-1312.001.patch, > HDFS-9645-HDFS-1312.002.patch > > > Add query RPC, which reports the status of an executing plan. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
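To make the int/uint32 mismatch concrete, here is a small standalone sketch (not the generated DiskBalancer code; the class name is made up for illustration). protoc maps a proto {{uint32}} to a Java {{int}}, so values above Integer.MAX_VALUE arrive as negative ints, and {{Integer.toUnsignedLong}} recovers the intended unsigned value when the full range matters:

```java
// Hypothetical demo class; protoc maps proto "optional uint32" to a Java int.
public class UnsignedUint32Demo {
    public static void main(String[] args) {
        // uint32 max (4294967295) does not fit in a signed int and wraps to -1.
        int fromProto = (int) 4294967295L;
        // Reinterpret the same 32 bits as unsigned to recover the original value.
        long unsigned = Integer.toUnsignedLong(fromProto);
        System.out.println(fromProto);  // prints -1
        System.out.println(unsigned);   // prints 4294967295
    }
}
```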
[jira] [Commented] (HDFS-9534) Add CLI command to clear storage policy from a path.
[ https://issues.apache.org/jira/browse/HDFS-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105595#comment-15105595 ] Xiaobing Zhou commented on HDFS-9534: - Thanks [~walter.k.su]. What do you think the semantics of 'removeStoragePolicy' should be? My understanding is to set it to UNSPECIFIED_STORAGE_POLICY_ID, as if the dir/file were newly created without a storage policy specified. I was also thinking of setting it to the default storage policy, but the former is better IMO. > Add CLI command to clear storage policy from a path. > > > Key: HDFS-9534 > URL: https://issues.apache.org/jira/browse/HDFS-9534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Reporter: Chris Nauroth >Assignee: Xiaobing Zhou > Attachments: HDFS-9534.001.patch > > > The {{hdfs storagepolicies}} command has sub-commands for > {{-setStoragePolicy}} and {{-getStoragePolicy}} on a path. However, there is > no {{-removeStoragePolicy}} to remove a previously set storage policy on a > path. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9629) Update the footer of Web UI to show year 2016
[ https://issues.apache.org/jira/browse/HDFS-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105529#comment-15105529 ] Xiao Chen commented on HDFS-9629: - Thanks [~brahmareddy] for the input. Patch 2 hard codes 2016. > Update the footer of Web UI to show year 2016 > - > > Key: HDFS-9629 > URL: https://issues.apache.org/jira/browse/HDFS-9629 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9629.01.patch, HDFS-9629.02.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9629) Update the footer of Web UI to show year 2016
[ https://issues.apache.org/jira/browse/HDFS-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105530#comment-15105530 ] Hadoop QA commented on HDFS-9629: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 0m 35s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12782904/HDFS-9629.02.patch | | JIRA Issue | HDFS-9629 | | Optional Tests | asflicense | | uname | Linux b3cecc53768b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d40859f | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Max memory used | 30MB | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14148/console | This message was automatically generated. 
> Update the footer of Web UI to show year 2016 > - > > Key: HDFS-9629 > URL: https://issues.apache.org/jira/browse/HDFS-9629 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9629.01.patch, HDFS-9629.02.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9608) Disk IO imbalance in HDFS with heterogeneous storages
[ https://issues.apache.org/jira/browse/HDFS-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105183#comment-15105183 ] Hadoop QA commented on HDFS-9608: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 24s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 28s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 165m 16s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider | | | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot | | JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.shortcircuit.TestShortCircuitCache | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12782830/HDFS-9608.05.patch | | JIRA Issue | HDFS-9608 | | Optional Tests | asflicense compile javac
[jira] [Updated] (HDFS-9658) Erasure Coding: make tests compatible with multiple EC policies [Part 1]
[ https://issues.apache.org/jira/browse/HDFS-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HDFS-9658: - Attachment: HDFS-9658.1.patch > Erasure Coding: make tests compatible with multiple EC policies [Part 1] > > > Key: HDFS-9658 > URL: https://issues.apache.org/jira/browse/HDFS-9658 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Rui Li > Attachments: HDFS-9658.1.patch > > > Currently many of the EC-related tests assume we're using RS-6-3 > schema/policy. There're lots of hard coded fields as well as computations > based on that. To support multiple EC policies, we need to remove these hard > coded logic and make the tests more flexible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9659) EditLogTailerThread to Active Namenode RPC should timeout
[ https://issues.apache.org/jira/browse/HDFS-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105226#comment-15105226 ] Surendra Singh Lilhore commented on HDFS-9659: -- Attached the initial patch. Please review. > EditLogTailerThread to Active Namenode RPC should timeout > - > > Key: HDFS-9659 > URL: https://issues.apache.org/jira/browse/HDFS-9659 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha, namenode >Affects Versions: 3.0.0 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Critical > Attachments: HDFS-9659.patch > > > The {{EditLogTailerThread}} to Active {{Namenode}} RPC doesn't have a > timeout; it was removed in HDFS-6440. > When we inject slow disk I/O and heavy system I/O load on the active name > node, the nameservice can't switch, because the SNN is not able to stop the > {{EditLogTailerThread}}. > *Thread dump from SNN* > {noformat} > "IPC Server handler 33 on 25000" #118 daemon prio=5 os_prio=0 > tid=0x7f2384409800 nid=0x26c89 in Object.wait() [0x7f2376ac7000] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > at java.lang.Thread.join(Thread.java:1245) > - locked <0x0006d517f538> (a > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread) > at java.lang.Thread.join(Thread.java:1319) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.stop(EditLogTailer.java:183) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopStandbyServices(FSNamesystem.java:1284) > at > org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopStandbyServices(NameNode.java:1852) > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.exitState(StandbyState.java:72) > at > org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:62) > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49) > at > 
org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1684) > {noformat} > *Thread dump for {{EditLogTailerThread}}*, it is stuck in > {{NamenodeProtocolTranslatorPB.rollEditLog()}} rpc call. > {noformat} > "Edit log tailer" #150 prio=5 os_prio=0 tid=0x7f2395569800 nid=0x26cac in > Object.wait() [0x7f2374aa7000] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > at java.lang.Object.wait(Object.java:502) > at org.apache.hadoop.ipc.Client.call(Client.java:1503) > - locked <0x0006d581bb90> (a org.apache.hadoop.ipc.Client$Call) > at org.apache.hadoop.ipc.Client.call(Client.java:1448) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:148) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:301) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:420) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
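The failure mode above — a blocking RPC with no timeout pinning the tailer thread — can be bounded generically. Below is a minimal sketch (not the HDFS-9659 patch itself, which would instead put a timeout on the IPC client call; the names here are made up for illustration) of the usual executor-plus-{{Future.get}} pattern for capping a blocking call:

```java
import java.util.concurrent.*;

// Hypothetical sketch: bound a blocking call by running it on an executor and
// capping Future.get(); on timeout, interrupt the worker instead of letting
// the caller hang the way the stuck rollEditLog() RPC does in the dump above.
public class TimedCallDemo {
    static String slowRpc() throws InterruptedException {
        Thread.sleep(50); // stand-in for a blocking rollEditLog() RPC
        return "rolled";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> result = pool.submit(TimedCallDemo::slowRpc);
        try {
            // Wait at most one second for the "RPC" to finish.
            System.out.println(result.get(1, TimeUnit.SECONDS)); // prints "rolled"
        } catch (TimeoutException e) {
            result.cancel(true); // interrupts the worker thread
            System.out.println("timed out");
        } finally {
            pool.shutdownNow();
        }
    }
}
```

In HDFS itself the cleaner fix is a timeout on the IPC client, so the RPC call returns instead of waiting forever; the pattern above just shows why any blocking call on a state-transition path needs some bound.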
[jira] [Created] (HDFS-9658) Erasure Coding: make tests compatible with multiple EC policies [Part 1]
Rui Li created HDFS-9658: Summary: Erasure Coding: make tests compatible with multiple EC policies [Part 1] Key: HDFS-9658 URL: https://issues.apache.org/jira/browse/HDFS-9658 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Rui Li Assignee: Rui Li Currently many of the EC-related tests assume we're using RS-6-3 schema/policy. There're lots of hard coded fields as well as computations based on that. To support multiple EC policies, we need to remove these hard coded logic and make the tests more flexible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7866) Erasure coding: NameNode manages multiple erasure coding policies
[ https://issues.apache.org/jira/browse/HDFS-7866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105209#comment-15105209 ] Rui Li commented on HDFS-7866: -- Thanks Kai for the suggestions. I just filed HDFS-9658 to break this task into small pieces. Will create more as we progress. > Erasure coding: NameNode manages multiple erasure coding policies > - > > Key: HDFS-7866 > URL: https://issues.apache.org/jira/browse/HDFS-7866 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Rui Li > Attachments: HDFS-7866-v1.patch, HDFS-7866-v2.patch, > HDFS-7866-v3.patch, HDFS-7866.4.patch, HDFS-7866.5.patch, HDFS-7866.6.patch, > HDFS-7866.7.patch > > > This is to extend NameNode to load, list and sync predefine EC schemas in > authorized and controlled approach. The provided facilities will be used to > implement DFSAdmin commands so admin can list available EC schemas, then > could choose some of them for target EC zones. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9659) EditLogTailerThread to Active Namenode RPC should timeout
Surendra Singh Lilhore created HDFS-9659: Summary: EditLogTailerThread to Active Namenode RPC should timeout Key: HDFS-9659 URL: https://issues.apache.org/jira/browse/HDFS-9659 Project: Hadoop HDFS Issue Type: Bug Components: ha, namenode Affects Versions: 3.0.0 Reporter: Surendra Singh Lilhore Assignee: Surendra Singh Lilhore Priority: Critical The {{EditLogTailerThread}} to Active {{Namenode}} RPC doesn't have a timeout; it was removed in HDFS-6440. When we inject slow disk I/O and heavy system I/O load on the active name node, the nameservice can't switch, because the SNN is not able to stop the {{EditLogTailerThread}}. *Thread dump from SNN* {noformat} "IPC Server handler 33 on 25000" #118 daemon prio=5 os_prio=0 tid=0x7f2384409800 nid=0x26c89 in Object.wait() [0x7f2376ac7000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1245) - locked <0x0006d517f538> (a org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread) at java.lang.Thread.join(Thread.java:1319) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.stop(EditLogTailer.java:183) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopStandbyServices(FSNamesystem.java:1284) at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopStandbyServices(NameNode.java:1852) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.exitState(StandbyState.java:72) at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:62) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49) at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1684) {noformat} *Thread dump for {{EditLogTailerThread}}*, it is stuck in {{NamenodeProtocolTranslatorPB.rollEditLog()}} rpc call. 
{noformat} "Edit log tailer" #150 prio=5 os_prio=0 tid=0x7f2395569800 nid=0x26cac in Object.wait() [0x7f2374aa7000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.hadoop.ipc.Client.call(Client.java:1503) - locked <0x0006d581bb90> (a org.apache.hadoop.ipc.Client$Call) at org.apache.hadoop.ipc.Client.call(Client.java:1448) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:148) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:301) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:420) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9658) Erasure Coding: make tests compatible with multiple EC policies [Part 1]
[ https://issues.apache.org/jira/browse/HDFS-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HDFS-9658: - Status: Patch Available (was: Open) > Erasure Coding: make tests compatible with multiple EC policies [Part 1] > > > Key: HDFS-9658 > URL: https://issues.apache.org/jira/browse/HDFS-9658 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Rui Li > Attachments: HDFS-9658.1.patch > > > Currently many of the EC-related tests assume we're using RS-6-3 > schema/policy. There're lots of hard coded fields as well as computations > based on that. To support multiple EC policies, we need to remove these hard > coded logic and make the tests more flexible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9653) Expose the number of blocks pending deletion through dfsadmin report command
[ https://issues.apache.org/jira/browse/HDFS-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105172#comment-15105172 ] Hadoop QA commented on HDFS-9653: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 48s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | 
{color:green} javadoc {color} | {color:green} 2m 13s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 52s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 51m 41s {color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m
[jira] [Updated] (HDFS-9659) EditLogTailerThread to Active Namenode RPC should timeout
[ https://issues.apache.org/jira/browse/HDFS-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HDFS-9659: - Attachment: HDFS-9659.patch > EditLogTailerThread to Active Namenode RPC should timeout > - > > Key: HDFS-9659 > URL: https://issues.apache.org/jira/browse/HDFS-9659 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha, namenode >Affects Versions: 3.0.0 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Critical > Attachments: HDFS-9659.patch > > > The {{EditLogTailerThread}} to Active {{Namenode}} RPC doesn't have a > timeout; it was removed in HDFS-6440. > When we inject slow disk I/O and heavy system I/O load on the active name > node, the nameservice can't switch, because the SNN is not able to stop the > {{EditLogTailerThread}}. > *Thread dump from SNN* > {noformat} > "IPC Server handler 33 on 25000" #118 daemon prio=5 os_prio=0 > tid=0x7f2384409800 nid=0x26c89 in Object.wait() [0x7f2376ac7000] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > at java.lang.Thread.join(Thread.java:1245) > - locked <0x0006d517f538> (a > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread) > at java.lang.Thread.join(Thread.java:1319) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.stop(EditLogTailer.java:183) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopStandbyServices(FSNamesystem.java:1284) > at > org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopStandbyServices(NameNode.java:1852) > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.exitState(StandbyState.java:72) > at > org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:62) > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1684) > {noformat} > *Thread dump 
for {{EditLogTailerThread}}*, it is stuck in > {{NamenodeProtocolTranslatorPB.rollEditLog()}} rpc call. > {noformat} > "Edit log tailer" #150 prio=5 os_prio=0 tid=0x7f2395569800 nid=0x26cac in > Object.wait() [0x7f2374aa7000] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > at java.lang.Object.wait(Object.java:502) > at org.apache.hadoop.ipc.Client.call(Client.java:1503) > - locked <0x0006d581bb90> (a org.apache.hadoop.ipc.Client$Call) > at org.apache.hadoop.ipc.Client.call(Client.java:1448) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:148) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:301) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:420) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9653) Expose the number of blocks pending deletion through dfsadmin report command
[ https://issues.apache.org/jira/browse/HDFS-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105514#comment-15105514 ] Weiwei Yang commented on HDFS-9653: --- The UT failures seem to be unrelated to this patch. These 3 tests run successfully with JDK 1.8.0_66 in my local environment. The hadoop.hdfs.TestDFSUpgradeFromImage failure seems to be HDFS-9476. > Expose the number of blocks pending deletion through dfsadmin report command > > > Key: HDFS-9653 > URL: https://issues.apache.org/jira/browse/HDFS-9653 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Affects Versions: 2.7.1 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-9653.001.patch > > > HDFS-5986 adds *Number of Blocks Pending Deletion* on namenode UI and JMX, > propose to expose this from hdfs dfsadmin -report as well. This is useful > when a hadoop admin is not able to access the UI (e.g. on cloud); he/she can > directly use the command to retrieve this information. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9656) Hadoop-tools jars should be included in the classpath of hadoop command
[ https://issues.apache.org/jira/browse/HDFS-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-9656: --- Target Version/s: (was: 2.7.1) > Hadoop-tools jars should be included in the classpath of hadoop command > --- > > Key: HDFS-9656 > URL: https://issues.apache.org/jira/browse/HDFS-9656 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Attachments: HDFS-9656-v1.patch > > > Currently, jars under Hadoop-tools dir are not be included in the classpath > of hadoop command. So we will fail to execute cmds about wasb or s3 file > systems. > {quote} > $ ./hdfs dfs -ls wasb://d...@demo.blob.core.windows.net/ > ls: No FileSystem for scheme: wasb > {quote} > A simple solution is to add those jars into the classpath of the cmds. > Suggestions are welcomed~ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9656) Hadoop-tools jars should be included in the classpath of hadoop command
[ https://issues.apache.org/jira/browse/HDFS-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-9656: --- Fix Version/s: (was: 3.0.0) > Hadoop-tools jars should be included in the classpath of hadoop command > --- > > Key: HDFS-9656 > URL: https://issues.apache.org/jira/browse/HDFS-9656 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Attachments: HDFS-9656-v1.patch > > > Currently, jars under Hadoop-tools dir are not be included in the classpath > of hadoop command. So we will fail to execute cmds about wasb or s3 file > systems. > {quote} > $ ./hdfs dfs -ls wasb://d...@demo.blob.core.windows.net/ > ls: No FileSystem for scheme: wasb > {quote} > A simple solution is to add those jars into the classpath of the cmds. > Suggestions are welcomed~ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9656) Hadoop-tools jars should be included in the classpath of hadoop command
[ https://issues.apache.org/jira/browse/HDFS-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105706#comment-15105706 ] Allen Wittenauer commented on HDFS-9656: -1 1) Adding hadoop-tools to the default classpath is going to break all sorts of user-level things due to the number of transitive dependencies. 2) This will *greatly* impact the startup time of commands by including a bunch of class files that will never get used. bq. A simple solution is to add those jars into the classpath of the cmds. Suggestions are welcomed~ Instead of slamming in everything, users should be using shell profiles to include the jars they actually need. Another thing that would be good to do is to break up the hadoop-tools dir to be per component. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
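[Editor's sketch of the shell-profile approach suggested in the comment above. The file name, jar glob, and paths are assumptions, not from this thread; `hadoop_add_profile` and `hadoop_add_classpath` are helpers from the Hadoop 3 shell rewrite, and stub definitions are included here only so the fragment is self-contained.]

```shell
# Hypothetical per-component shell profile, e.g.
# ${HADOOP_CONF_DIR}/shellprofile.d/azure.sh, pulling in only the
# hadoop-azure (wasb) jars instead of everything under hadoop-tools.
#
# Stubs standing in for the real helpers from hadoop-functions.sh,
# so this sketch can run outside a Hadoop installation:
CLASSPATH=""
hadoop_add_profile() { HADOOP_SHELL_PROFILES="${HADOOP_SHELL_PROFILES} $1"; }
hadoop_add_classpath() { CLASSPATH="${CLASSPATH}${CLASSPATH:+:}$1"; }

hadoop_add_profile azure

_azure_hadoop_classpath() {
  # Jar location is illustrative; adjust to your layout.
  hadoop_add_classpath "${HADOOP_HOME:-/opt/hadoop}/share/hadoop/tools/lib/hadoop-azure-*.jar"
}

_azure_hadoop_classpath
echo "${CLASSPATH}"
```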
[jira] [Commented] (HDFS-9525) hadoop utilities need to support provided delegation tokens
[ https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105746#comment-15105746 ] Allen Wittenauer commented on HDFS-9525: I believe the issues have been dealt with. If there are no further comments, I'll commit this tomorrow. Thanks. > hadoop utilities need to support provided delegation tokens > --- > > Key: HDFS-9525 > URL: https://issues.apache.org/jira/browse/HDFS-9525 > Project: Hadoop HDFS > Issue Type: New Feature > Components: security >Affects Versions: 3.0.0 >Reporter: Allen Wittenauer >Assignee: HeeSoo Kim >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, > HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, > HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch, > HDFS-9525.008.patch, HDFS-9525.branch-2.008.patch > > > When using the webhdfs:// filesystem (especially from distcp), we need the > ability to inject a delegation token rather than have webhdfs initialize its own. > This would allow for cross-authentication-zone file system accesses. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases
[ https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105992#comment-15105992 ] Ruslan Dautkhanov commented on HDFS-6255: - Chris, Thank you for the prompt response. Yep, we have a hadoop-fuse-dfs mount: $ grep hadoop /etc/fstab hadoop-fuse-dfs#dfs://epsdatalake /hdfs_mount fuse usetrash,rw 0 0 It is not picking up ACLs at all. Test 1 - doesn't work through the fuse mount: $ ls -l /hdfs_mount/agility ls: cannot open directory /hdfs_mount/agility: Permission denied Test 2 - works through hadoop fs commands: $ hadoop fs -ls /agility/ Found 6 items . . . /skip 6 lines/ $ hadoop fs -ls / | grep agility dr-xr-x---+ - user1 group1 0 2016-01-14 13:25 /agility Hadoop/HDFS 2.6 (CDH 5.5.1), but it was always a problem for us for all older versions we have used. > fuse_dfs will not adhere to ACL permissions in some cases > - > > Key: HDFS-6255 > URL: https://issues.apache.org/jira/browse/HDFS-6255 > Project: Hadoop HDFS > Issue Type: Bug > Components: fuse-dfs >Affects Versions: 3.0.0, 2.4.0 >Reporter: Stephen Chu >Assignee: Chris Nauroth > > As hdfs user, I created a directory /tmp/acl_dir/ and set permissions to 700. > Then I set a new acl group:jenkins:rwx on /tmp/acl_dir. > {code} > jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -getfacl /tmp/acl_dir > # file: /tmp/acl_dir > # owner: hdfs > # group: supergroup > user::rwx > group::--- > group:jenkins:rwx > mask::rwx > other::--- > {code} > Through the FsShell, the jenkins user can list /tmp/acl_dir as well as create > a file and directory inside. 
> {code} > [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -touchz /tmp/acl_dir/testfile1 > [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -mkdir /tmp/acl_dir/testdir1 > hdfs dfs -ls /tmp/acl[jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -ls /tmp/acl_dir/ > Found 2 items > drwxr-xr-x - jenkins supergroup 0 2014-04-17 19:11 > /tmp/acl_dir/testdir1 > -rw-r--r-- 1 jenkins supergroup 0 2014-04-17 19:11 > /tmp/acl_dir/testfile1 > [jenkins@hdfs-vanilla-1 ~]$ > {code} > However, as the same jenkins user, when I try to cd into /tmp/acl_dir using a > fuse_dfs mount, I get permission denied. Same permission denied when I try to > create or list files. > {code} > [jenkins@hdfs-vanilla-1 tmp]$ ls -l > total 16 > drwxrwx--- 4 hdfs nobody 4096 Apr 17 19:11 acl_dir > drwx------ 2 hdfs nobody 4096 Apr 17 18:30 acl_dir_2 > drwxr-xr-x 3 mapred nobody 4096 Mar 11 03:53 mapred > drwxr-xr-x 4 jenkins nobody 4096 Apr 17 07:25 testcli > -rwx------ 1 hdfs nobody 0 Apr 7 17:18 tf1 > [jenkins@hdfs-vanilla-1 tmp]$ cd acl_dir > bash: cd: acl_dir: Permission denied > [jenkins@hdfs-vanilla-1 tmp]$ touch acl_dir/testfile2 > touch: cannot touch `acl_dir/testfile2': Permission denied > [jenkins@hdfs-vanilla-1 tmp]$ mkdir acl_dir/testdir2 > mkdir: cannot create directory `acl_dir/testdir2': Permission denied > [jenkins@hdfs-vanilla-1 tmp]$ > {code} > The fuse_dfs debug output doesn't show any error for the above operations: > {code} > unique: 18, opcode: OPENDIR (27), nodeid: 2, insize: 48 >unique: 18, success, outsize: 32 > unique: 19, opcode: READDIR (28), nodeid: 2, insize: 80 > readdir[0] from 0 >unique: 19, success, outsize: 312 > unique: 20, opcode: GETATTR (3), nodeid: 2, insize: 56 > getattr /tmp >unique: 20, success, outsize: 120 > unique: 21, opcode: READDIR (28), nodeid: 2, insize: 80 >unique: 21, success, outsize: 16 > unique: 22, opcode: RELEASEDIR (29), nodeid: 2, insize: 64 >unique: 22, success, outsize: 16 > unique: 23, opcode: GETATTR (3), nodeid: 2, insize: 56 > getattr /tmp >unique: 23, success, outsize: 120 > unique: 24, 
opcode: GETATTR (3), nodeid: 3, insize: 56 > getattr /tmp/acl_dir >unique: 24, success, outsize: 120 > unique: 25, opcode: GETATTR (3), nodeid: 3, insize: 56 > getattr /tmp/acl_dir >unique: 25, success, outsize: 120 > unique: 26, opcode: GETATTR (3), nodeid: 3, insize: 56 > getattr /tmp/acl_dir >unique: 26, success, outsize: 120 > unique: 27, opcode: GETATTR (3), nodeid: 3, insize: 56 > getattr /tmp/acl_dir >unique: 27, success, outsize: 120 > unique: 28, opcode: GETATTR (3), nodeid: 3, insize: 56 > getattr /tmp/acl_dir >unique: 28, success, outsize: 120 > {code} > In other scenarios, ACL permissions are enforced successfully. For example, > as hdfs user I create /tmp/acl_dir_2 and set permissions to 777. I then set > the acl user:jenkins:--- on the directory. On the fuse mount, I am not able > to ls, mkdir, or touch to that directory as jenkins user. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9456) BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement
[ https://issues.apache.org/jira/browse/HDFS-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9456: Summary: BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement (was: BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement()) > BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement > -- > > Key: HDFS-9456 > URL: https://issues.apache.org/jira/browse/HDFS-9456 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Junping Du >Assignee: Xiaobing Zhou > > Per discussions in HDFS-9314, we need to override verifyBlockPlacement() in > BlockPlacementPolicyWithNodeGroup to reflect right block status. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9658) Erasure Coding: allow to use multiple EC policies in striping related tests
[ https://issues.apache.org/jira/browse/HDFS-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106105#comment-15106105 ] Rui Li commented on HDFS-9658: -- Test failures don't seem related and cannot be reproduced locally. > Erasure Coding: allow to use multiple EC policies in striping related tests > --- > > Key: HDFS-9658 > URL: https://issues.apache.org/jira/browse/HDFS-9658 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Rui Li > Attachments: HDFS-9658.1.patch > > > Currently many of the EC-related tests assume we're using the RS-6-3 > schema/policy. There are lots of hard-coded fields as well as computations > based on that. To support multiple EC policies, we need to remove this > hard-coded logic and make the tests more flexible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9456) BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement()
[ https://issues.apache.org/jira/browse/HDFS-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9456: Summary: BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement() (was: BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement() from BlockPlacementPolicyDefault.) > BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement() > > > Key: HDFS-9456 > URL: https://issues.apache.org/jira/browse/HDFS-9456 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Junping Du >Assignee: Xiaobing Zhou > > Per discussions in HDFS-9314, we need to override verifyBlockPlacement() in > BlockPlacementPolicyWithNodeGroup to reflect right block status. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases
[ https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106052#comment-15106052 ] Ruslan Dautkhanov commented on HDFS-6255: - I just used Navigator to see if there are hdfs denials... nope, it does not have anything. So you're right, it looks like it is rejected directly at the FUSE layer. Do you know any possible workarounds for fuse to respect HDFS ACLs? Thank you for the quick turnaround. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9244) Support nested encryption zones
[ https://issues.apache.org/jira/browse/HDFS-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106091#comment-15106091 ] Arpit Agarwal commented on HDFS-9244: - Hi [~zhz], will this fix break the EZ trash support introduced by HDFS-8831? > Support nested encryption zones > --- > > Key: HDFS-9244 > URL: https://issues.apache.org/jira/browse/HDFS-9244 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption >Reporter: Xiaoyu Yao >Assignee: Zhe Zhang > Attachments: HDFS-9244.00.patch, HDFS-9244.01.patch > > > This JIRA is opened to track adding support of nested encryption zone based > on [~andrew.wang]'s [comment > |https://issues.apache.org/jira/browse/HDFS-8747?focusedCommentId=14654141=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14654141] > for certain use cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases
[ https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105977#comment-15105977 ] Chris Nauroth commented on HDFS-6255: - [~Tagar], can you please describe in detail the problem that you're seeing? Is it that ACLs are not being enforced the way you expect while accessing HDFS files through a fuse_dfs mount? If there are specific repro instructions, that would be perfect. Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases
[ https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106040#comment-15106040 ] Chris Nauroth commented on HDFS-6255: - Thank you, [~Tagar]. For the {{ls}} command that failed with "Permission denied", do you have an entry in the HDFS audit log corresponding to that? If so, then could you please share it? If there is no line generated in the HDFS audit log from running that {{ls}}, then that means the request was rejected immediately at the FUSE layer, and it never actually attempted to communicate with HDFS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
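[Editor's note: a way to check the audit log as suggested in the comment above. The log line below is a fabricated sample in the FSNamesystem.audit key=value format, written to a temp file so the pipeline is self-contained; on a real cluster you would point the grep at the actual hdfs-audit.log instead.]

```shell
# Fabricated sample hdfs-audit.log entry (not real cluster data); the
# field layout (allowed=, ugi=, cmd=, src=) follows the audit log format.
cat > /tmp/sample-audit.log <<'EOF'
2016-01-18 13:25:01,123 INFO FSNamesystem.audit: allowed=false  ugi=jenkins (auth:SIMPLE)  ip=/10.0.0.5  cmd=listStatus  src=/tmp/acl_dir  dst=null  perm=null
EOF

# Isolate denied listStatus calls against the directory in question.
# On a real cluster: grep against ${HADOOP_LOG_DIR}/hdfs-audit.log.
grep 'cmd=listStatus' /tmp/sample-audit.log | grep 'src=/tmp/acl_dir'
```

If no matching line exists at all, the request never reached HDFS, which is consistent with a rejection at the FUSE layer.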
[jira] [Comment Edited] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases
[ https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=13988286#comment-13988286 ] Colin Patrick McCabe edited comment on HDFS-6255 at 1/18/16 11:23 PM: -- Thanks for looking at this, Chris. Stephen, can you try again with {{\-oallow_other}} and confirm that it works? was (Author: cmccabe): Thanks for looking at this, Chris. Stephen, can you try again with {{\- oallow_other}} and confirm that it works? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
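[Editor's sketch of the {{-oallow_other}} suggestion above. The namenode address, mount point, and option set are placeholders, not the reporter's actual configuration.]

```shell
# /etc/fstab entry with allow_other, so users other than the mounting user
# are passed through to HDFS permission/ACL checks instead of being
# rejected at the FUSE layer (namenode:8020 and /hdfs_mount are examples):
#
#   hadoop-fuse-dfs#dfs://namenode:8020 /hdfs_mount fuse allow_other,usetrash,rw 0 0
#
# Equivalent ad-hoc mount:
#
#   hadoop-fuse-dfs dfs://namenode:8020 /hdfs_mount -oallow_other,usetrash,rw
```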
[jira] [Updated] (HDFS-9456) BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement
[ https://issues.apache.org/jira/browse/HDFS-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9456: Status: Patch Available (was: Open) > BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement > -- > > Key: HDFS-9456 > URL: https://issues.apache.org/jira/browse/HDFS-9456 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Junping Du >Assignee: Xiaobing Zhou > Attachments: HDFS-9456.001.patch > > > Per discussions in HDFS-9314, we need to override verifyBlockPlacement() in > BlockPlacementPolicyWithNodeGroup to reflect right block status. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9456) BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement
[ https://issues.apache.org/jira/browse/HDFS-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9456: Attachment: HDFS-9456.001.patch Posted patch V001, kindly review. > BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement > -- > > Key: HDFS-9456 > URL: https://issues.apache.org/jira/browse/HDFS-9456 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Junping Du >Assignee: Xiaobing Zhou > Attachments: HDFS-9456.001.patch > > > Per discussions in HDFS-9314, we need to override verifyBlockPlacement() in > BlockPlacementPolicyWithNodeGroup to reflect right block status. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9244) Support nested encryption zones
[ https://issues.apache.org/jira/browse/HDFS-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106045#comment-15106045 ] Tsz Wo Nicholas Sze commented on HDFS-9244: --- > ... Rollback won't be allowed ... Rollback must always be allowed for any feature. It is for protecting user data against upgrade failure due to, most likely, user errors and, less likely, a software bug in the new version. > Support nested encryption zones > --- > > Key: HDFS-9244 > URL: https://issues.apache.org/jira/browse/HDFS-9244 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption >Reporter: Xiaoyu Yao >Assignee: Zhe Zhang > Attachments: HDFS-9244.00.patch, HDFS-9244.01.patch > > > This JIRA is opened to track adding support of nested encryption zones based > on [~andrew.wang]'s [comment > |https://issues.apache.org/jira/browse/HDFS-8747?focusedCommentId=14654141=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14654141] > for certain use cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9094) Add command line option to ask NameNode reload configuration.
[ https://issues.apache.org/jira/browse/HDFS-9094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9094: Attachment: HDFS-9094-HDFS-9000.006.patch Patch V006 fixed the issues. Thanks [~arpitagarwal]. > Add command line option to ask NameNode reload configuration. > - > > Key: HDFS-9094 > URL: https://issues.apache.org/jira/browse/HDFS-9094 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-9094-HDFS-9000.002.patch, > HDFS-9094-HDFS-9000.003.patch, HDFS-9094-HDFS-9000.004.patch, > HDFS-9094-HDFS-9000.005.patch, HDFS-9094-HDFS-9000.006.patch, > HDFS-9094.001.patch > > > This work is going to add DFS admin command that allows reloading NameNode > configuration. This is sibling work related to HDFS-6808. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9653) Expose the number of blocks pending deletion through dfsadmin report command
[ https://issues.apache.org/jira/browse/HDFS-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106145#comment-15106145 ] Eric Yang commented on HDFS-9653: - Can you generate one more patch for branch-2? Assuming that you want this in 2.x as well. > Expose the number of blocks pending deletion through dfsadmin report command > > > Key: HDFS-9653 > URL: https://issues.apache.org/jira/browse/HDFS-9653 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Affects Versions: 2.7.1 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-9653.001.patch > > > HDFS-5986 adds *Number of Blocks Pending Deletion* on the namenode UI and JMX; > propose to expose this from hdfs dfsadmin -report as well. This is useful > when a hadoop admin is not able to access the UI (e.g. on cloud); he/she can > directly use the command to retrieve this information. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
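[Editor's illustration of how the proposed field might be consumed from the command line. The report excerpt below is fabricated, and the exact label "Pending deletion blocks" is an assumption about the patch's output, not confirmed wording.]

```shell
# Fabricated excerpt of `hdfs dfsadmin -report` output, written to a temp
# file so the filter below is runnable without a cluster; the field label
# is an assumed example.
cat > /tmp/dfsadmin-report.txt <<'EOF'
Configured Capacity: 1099511627776 (1 TB)
Present Capacity: 989560464998 (921.55 GB)
Under replicated blocks: 0
Pending deletion blocks: 42
EOF

# On a real cluster: hdfs dfsadmin -report | grep -i 'pending deletion'
grep -i 'pending deletion' /tmp/dfsadmin-report.txt
```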
[jira] [Updated] (HDFS-9623) Update example configuration of block state change log in log4j.properties
[ https://issues.apache.org/jira/browse/HDFS-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA updated HDFS-9623: Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) Committed this to trunk, branch-2, and branch-2.8. Thanks [~iwasakims] for creating the patch, and thanks [~arpitagarwal] for the review. > Update example configuration of block state change log in log4j.properties > -- > > Key: HDFS-9623 > URL: https://issues.apache.org/jira/browse/HDFS-9623 > Project: Hadoop HDFS > Issue Type: Bug > Components: logging >Affects Versions: 2.8.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Minor > Fix For: 2.8.0 > > Attachments: HDFS-9623.001.patch > > > The log level of block state change log was changed from INFO to DEBUG by > HDFS-6860. The example configuration in log4j.properties should be updated > along with the change. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9094) Add command line option to ask NameNode reload configuration.
[ https://issues.apache.org/jira/browse/HDFS-9094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106205#comment-15106205 ] Hadoop QA commented on HDFS-9094: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | 
{color:green} javadoc {color} | {color:green} 2m 22s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s {color} | {color:red} hadoop-hdfs-project: patch generated 2 new + 301 unchanged - 4 fixed = 303 total (was 305) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 34s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 4s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 26s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 158m 32s {color} | {color:black} {color} | \\ \\
[jira] [Updated] (HDFS-9623) Update example configuration of block state change log in log4j.properties
[ https://issues.apache.org/jira/browse/HDFS-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA updated HDFS-9623: Hadoop Flags: Reviewed Status: Patch Available (was: Open) > Update example configuration of block state change log in log4j.properties > -- > > Key: HDFS-9623 > URL: https://issues.apache.org/jira/browse/HDFS-9623 > Project: Hadoop HDFS > Issue Type: Bug > Components: logging >Affects Versions: 2.8.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Minor > Attachments: HDFS-9623.001.patch > > > The log level of block state change log was changed from INFO to DEBUG by > HDFS-6860. The example configuration in log4j.properties should be updated > along with the change. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9623) Update example configuration of block state change log in log4j.properties
[ https://issues.apache.org/jira/browse/HDFS-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106243#comment-15106243 ] Hudson commented on HDFS-9623: -- SUCCESS: Integrated in Hadoop-trunk-Commit #9135 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9135/]) HDFS-9623. Update example configuration of block state change log in (aajisaka: rev 92c5f565fd5466eab4496c2413de2e8b2897a91f) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-common-project/hadoop-common/src/main/conf/log4j.properties > Update example configuration of block state change log in log4j.properties > -- > > Key: HDFS-9623 > URL: https://issues.apache.org/jira/browse/HDFS-9623 > Project: Hadoop HDFS > Issue Type: Bug > Components: logging >Affects Versions: 2.8.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Minor > Fix For: 2.8.0 > > Attachments: HDFS-9623.001.patch > > > The log level of block state change log was changed from INFO to DEBUG by > HDFS-6860. The example configuration in log4j.properties should be updated > along with the change. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9456) BlockPlacementPolicyWithNodeGroup should override verifyBlockPlacement
[ https://issues.apache.org/jira/browse/HDFS-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106178#comment-15106178 ] Hadoop QA commented on HDFS-9456: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 7 new + 9 unchanged - 0 fixed = 16 total (was 9) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 20s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 1s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 170m 44s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark | | | hadoop.hdfs.server.datanode.TestBlockScanner | | JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark | | | hadoop.hdfs.TestMissingBlocksAlert | | | hadoop.hdfs.server.datanode.TestBlockScanner | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL |
[jira] [Commented] (HDFS-6054) MiniQJMHACluster should not use static port to avoid binding failure in unit test
[ https://issues.apache.org/jira/browse/HDFS-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106200#comment-15106200 ] Hadoop QA commented on HDFS-6054: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 47s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 0m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 15s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 34s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 42s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 173m 4s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark | | | hadoop.hdfs.security.TestDelegationTokenForProxyUser | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.server.datanode.TestBlockScanner | | JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark | | | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits | | | hadoop.hdfs.server.datanode.TestBlockScanner | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL |
[jira] [Commented] (HDFS-9623) Update example configuration of block state change log in log4j.properties
[ https://issues.apache.org/jira/browse/HDFS-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106225#comment-15106225 ] Akira AJISAKA commented on HDFS-9623: - +1, committing this. > Update example configuration of block state change log in log4j.properties > -- > > Key: HDFS-9623 > URL: https://issues.apache.org/jira/browse/HDFS-9623 > Project: Hadoop HDFS > Issue Type: Bug > Components: logging >Affects Versions: 2.8.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Minor > Attachments: HDFS-9623.001.patch > > > The log level of block state change log was changed from INFO to DEBUG by > HDFS-6860. The example configuration in log4j.properties should be updated > along with the change.
[jira] [Commented] (HDFS-9623) Update example configuration of block state change log in log4j.properties
[ https://issues.apache.org/jira/browse/HDFS-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106235#comment-15106235 ] Hadoop QA commented on HDFS-9623: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 44s {color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 4m 26s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12780900/HDFS-9623.001.patch | | JIRA Issue | HDFS-9623 | | Optional Tests | asflicense mvnsite unit | | uname | Linux 2d7d9fd15981 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a44ce3f | | JDK v1.7.0_91 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/14155/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Max memory used | 34MB | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14155/console | This message was automatically generated. > Update example configuration of block state change log in log4j.properties > -- > > Key: HDFS-9623 > URL: https://issues.apache.org/jira/browse/HDFS-9623 > Project: Hadoop HDFS > Issue Type: Bug > Components: logging >Affects Versions: 2.8.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Minor > Attachments: HDFS-9623.001.patch > > > The log level of block state change log was changed from INFO to DEBUG by > HDFS-6860. The example configuration in log4j.properties should be updated > along with the change. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9653) Expose the number of blocks pending deletion through dfsadmin report command
[ https://issues.apache.org/jira/browse/HDFS-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-9653: -- Attachment: HDFS-9653-branch-2.001.patch Submitted a patch for branch-2. > Expose the number of blocks pending deletion through dfsadmin report command > > > Key: HDFS-9653 > URL: https://issues.apache.org/jira/browse/HDFS-9653 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Affects Versions: 2.7.1 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-9653-branch-2.001.patch, HDFS-9653.001.patch > > > HDFS-5986 adds *Number of Blocks Pending Deletion* on the namenode UI and JMX; > propose to expose this from hdfs dfsadmin -report as well. This is useful > when a hadoop admin is not able to access the UI (e.g. on cloud); he/she can > directly use the command to retrieve this information.
[jira] [Commented] (HDFS-9629) Update the footer of Web UI to show year 2016
[ https://issues.apache.org/jira/browse/HDFS-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106353#comment-15106353 ] Akira AJISAKA commented on HDFS-9629: - I'm +1 for hard-coding the year because Xiao's concern is significant. I thought we could set the year dynamically by showing the year the source code was built, but that is not a good idea: if you take source code released in 2016 and build it in 2017, the UI shows 2017, which is confusing. By the way, can we add a parameter and use its value instead of changing 5 files every year? > Update the footer of Web UI to show year 2016 > - > > Key: HDFS-9629 > URL: https://issues.apache.org/jira/browse/HDFS-9629 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9629.01.patch, HDFS-9629.02.patch > >
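Akira's suggestion above — keep the year in one parameter rather than hard-coding it in five files — can be sketched as follows. This is a hypothetical illustration only: the class name, property key, and footer string are invented for this sketch and are not the actual Hadoop web UI code.

```java
import java.util.Properties;

public class FooterYearDemo {
    public static void main(String[] args) {
        // A single source of truth for the year (standing in for a
        // shared build/config resource); every footer template would
        // read this value instead of embedding the literal "2016".
        Properties buildInfo = new Properties();
        buildInfo.setProperty("footer.copyright.year", "2016"); // assumed key name

        String footer = "Hadoop, " + buildInfo.getProperty("footer.copyright.year");
        System.out.println(footer);
    }
}
```

With this approach, a yearly update touches one property rather than each UI file.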
[jira] [Updated] (HDFS-9660) TestBlockScanner.testScanRateLimit fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HDFS-9660: Description: Regression org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanRateLimit fails with the following error. Seeing the same error in several builds: https://builds.apache.org/job/PreCommit-HDFS-Build/14153/testReport/org.apache.hadoop.hdfs.server.datanode/TestBlockScanner/testScanRateLimit/ http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201512.mbox/%3C2106732189.1429.1450359106291.JavaMail.jenkins@crius%3E http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201512.mbox/%3C1561861624.1283.1451521640509.JavaMail.jenkins@crius%3E {code} Failing for the past 1 build (Since Unstable#14153 ) Took 0.37 sec. Error Message Cannot remove data directory: /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/datapath '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2 permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs': 
absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project': absolute:/testptch/hadoop/hadoop-hdfs-project permissions: drwx path '/testptch/hadoop': absolute:/testptch/hadoop permissions: drwx path '/testptch': absolute:/testptch permissions: dr-x path '/': absolute:/ permissions: dr-x {code} The stack trace: {code} at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:834) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:482) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441) at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner$TestContext.(TestBlockScanner.java:96) at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanRateLimit(TestBlockScanner.java:439) {code} was: Regression org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanRateLimit fails with the following error. Seeing the same error in several builds: https://builds.apache.org/job/PreCommit-HDFS-Build/14153/testReport/org.apache.hadoop.hdfs.server.datanode/TestBlockScanner/testScanRateLimit/ http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201512.mbox/%3C2106732189.1429.1450359106291.JavaMail.jenkins@crius%3E http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201512.mbox/%3C1561861624.1283.1451521640509.JavaMail.jenkins@crius%3E {code} Failing for the past 1 build (Since Unstable#14153 ) Took 0.37 sec. 
Error Message Cannot remove data directory: /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/datapath '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2 permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project': absolute:/testptch/hadoop/hadoop-hdfs-project permissions: drwx path
[jira] [Commented] (HDFS-6054) MiniQJMHACluster should not use static port to avoid binding failure in unit test
[ https://issues.apache.org/jira/browse/HDFS-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106360#comment-15106360 ] Yongjun Zhang commented on HDFS-6054: - Filed HDFS-9660 for TestBlockScanner.testScanRateLimit failure. The other two are HDFS-9591 and HDFS-9601 respectively. > MiniQJMHACluster should not use static port to avoid binding failure in unit > test > - > > Key: HDFS-6054 > URL: https://issues.apache.org/jira/browse/HDFS-6054 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Reporter: Brandon Li >Assignee: Yongjun Zhang > Labels: BB2015-05-TBR > Attachments: HDFS-6054.001.patch, HDFS-6054.002.patch, > HDFS-6054.003.patch > > > One example of the test failues: TestFailureToReadEdits > {noformat} > Error Message > Port in use: localhost:10003 > Stacktrace > java.net.BindException: Port in use: localhost:10003 > at sun.nio.ch.Net.bind(Native Method) > at > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59) > at > org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) > at > org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:845) > at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:786) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:132) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:593) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:492) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:650) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:635) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1283) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:966) > at > 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:851) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:697) > at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:374) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355) > at > org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:108) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
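The `Port in use: localhost:10003` failure above is the classic symptom of a test cluster binding a fixed port. A minimal sketch of the usual remedy — asking the OS for a free ephemeral port by binding to port 0 — is shown below; the class and method names are illustrative, not the actual MiniQJMHACluster code.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    // Binding to port 0 lets the kernel pick a currently free port,
    // so concurrent test runs cannot collide on a hard-coded one.
    static int pickFreePort() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
            return s.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        int port = pickFreePort();
        // A real ephemeral port is always a positive number.
        System.out.println(port > 0 ? "free port acquired" : "bind failed");
    }
}
```

Note the small race this leaves open: the port is released before the server rebinds it, so another process could grab it in between; retrying on `BindException` is the common mitigation.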
[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
[ https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ade updated HDFS-9661: -- Description: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is !image-hdfs-9661-jstack.png|align=right, vspace=4! was: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is > Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw > - > > Key: HDFS-9661 > URL: https://issues.apache.org/jira/browse/HDFS-9661 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0, 2.8.0, 2.7.1, 2.7.2 >Reporter: ade >Assignee: ade > Fix For: 2.7.2 > > > We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is > !image-hdfs-9661-jstack.png|align=right, vspace=4! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
[ https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ade updated HDFS-9661: -- Assignee: Vinayakumar B (was: ade) > Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw > - > > Key: HDFS-9661 > URL: https://issues.apache.org/jira/browse/HDFS-9661 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0, 2.8.0, 2.7.1, 2.7.2 >Reporter: ade >Assignee: Vinayakumar B > Fix For: 2.7.2 > > Attachments: HDFS-9661.0.patch, hdfs-9661-jstack.gif.png > > > We found a deadlock in dn.FsDatasetImpl between moveBlockAcrossStorage and > createRbw from rpc call: replaceBlock/writeBlock. The dn's jstack result is > !hdfs-9661-jstack.gif.png! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9660) TestBlockScanner.testScanRateLimit fails intermittently
Yongjun Zhang created HDFS-9660: --- Summary: TestBlockScanner.testScanRateLimit fails intermittently Key: HDFS-9660 URL: https://issues.apache.org/jira/browse/HDFS-9660 Project: Hadoop HDFS Issue Type: Bug Components: datanode Reporter: Yongjun Zhang Regression org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanRateLimit fails with the following error. Seeing the same error in several builds: https://builds.apache.org/job/PreCommit-HDFS-Build/14153/testReport/org.apache.hadoop.hdfs.server.datanode/TestBlockScanner/testScanRateLimit/ http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201512.mbox/%3C2106732189.1429.1450359106291.JavaMail.jenkins@crius%3E http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201512.mbox/%3C1561861624.1283.1451521640509.JavaMail.jenkins@crius%3E {code} Failing for the past 1 build (Since Unstable#14153 ) Took 0.37 sec. Error Message Cannot remove data directory: /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/datapath '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2 permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target 
permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs': absolute:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs permissions: drwx path '/testptch/hadoop/hadoop-hdfs-project': absolute:/testptch/hadoop/hadoop-hdfs-project permissions: drwx path '/testptch/hadoop': absolute:/testptch/hadoop permissions: drwx path '/testptch': absolute:/testptch permissions: dr-x path '/': absolute:/ permissions: dr-x {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
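The failure message above is produced by walking each ancestor of the data directory and reporting its permissions, since a directory can only be removed when its parent is writable. A rough, hypothetical sketch of that kind of diagnostic walk (not the actual test-helper code):

```java
import java.io.File;

public class PathDiagnostics {
    // Walks from a path up to the filesystem root, reporting each level's
    // absolute path and coarse permissions, similar in spirit to the
    // "Cannot remove data directory" message above.
    public static String describe(File f) {
        StringBuilder sb = new StringBuilder();
        for (File p = f.getAbsoluteFile(); p != null; p = p.getParentFile()) {
            sb.append("path '").append(p.getPath()).append("' permissions: ")
              .append(p.isDirectory() ? "d" : "-")
              .append(p.canRead() ? "r" : "-")
              .append(p.canWrite() ? "w" : "-")
              .append(p.canExecute() ? "x" : "-")
              .append('\n');
        }
        return sb.toString();
    }
}
```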
[jira] [Created] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
ade created HDFS-9661: - Summary: Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw Key: HDFS-9661 URL: https://issues.apache.org/jira/browse/HDFS-9661 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.7.1, 2.7.0, 2.8.0, 2.7.2 Reporter: ade Assignee: ade Fix For: 2.7.2 We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
[ https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ade updated HDFS-9661: -- Description: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is !image-hdfs-9661-jstack.gif! was: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is > Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw > - > > Key: HDFS-9661 > URL: https://issues.apache.org/jira/browse/HDFS-9661 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0, 2.8.0, 2.7.1, 2.7.2 >Reporter: ade >Assignee: ade > Fix For: 2.7.2 > > > We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is > !image-hdfs-9661-jstack.gif! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
[ https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ade updated HDFS-9661: -- Attachment: hdfs-9661-jstack.gif.png > Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw > - > > Key: HDFS-9661 > URL: https://issues.apache.org/jira/browse/HDFS-9661 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0, 2.8.0, 2.7.1, 2.7.2 >Reporter: ade >Assignee: ade > Fix For: 2.7.2 > > Attachments: hdfs-9661-jstack.gif.png > > > We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is > !image-hdfs-9661-jstack.gif! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
[ https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ade updated HDFS-9661: -- Description: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is !hdfs-9661-jstack.gif! was: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is !image-hdfs-9661-jstack.gif! > Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw > - > > Key: HDFS-9661 > URL: https://issues.apache.org/jira/browse/HDFS-9661 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0, 2.8.0, 2.7.1, 2.7.2 >Reporter: ade >Assignee: ade > Fix For: 2.7.2 > > Attachments: hdfs-9661-jstack.gif.png > > > We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is > !hdfs-9661-jstack.gif! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
[ https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ade updated HDFS-9661: -- Description: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is !hdfs-9661-jstack.gif.png! was: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is !hdfs-9661-jstack.gif! > Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw > - > > Key: HDFS-9661 > URL: https://issues.apache.org/jira/browse/HDFS-9661 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0, 2.8.0, 2.7.1, 2.7.2 >Reporter: ade >Assignee: ade > Fix For: 2.7.2 > > Attachments: hdfs-9661-jstack.gif.png > > > We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is > !hdfs-9661-jstack.gif.png! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
[ https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ade updated HDFS-9661: -- Description: We found a deadlock in dn.FsDatasetImpl between moveBlockAcrossStorage and createRbw from rpc call: replaceBlock/writeBlock. The dn's jstack result is !hdfs-9661-jstack.gif.png! was: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is !hdfs-9661-jstack.gif.png! > Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw > - > > Key: HDFS-9661 > URL: https://issues.apache.org/jira/browse/HDFS-9661 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0, 2.8.0, 2.7.1, 2.7.2 >Reporter: ade >Assignee: ade > Fix For: 2.7.2 > > Attachments: hdfs-9661-jstack.gif.png > > > We found a deadlock in dn.FsDatasetImpl between moveBlockAcrossStorage and > createRbw from rpc call: replaceBlock/writeBlock. The dn's jstack result is > !hdfs-9661-jstack.gif.png! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
[ https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ade updated HDFS-9661: -- Status: Patch Available (was: Open) > Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw > - > > Key: HDFS-9661 > URL: https://issues.apache.org/jira/browse/HDFS-9661 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.1, 2.7.0, 2.8.0, 2.7.2 >Reporter: ade >Assignee: ade > Fix For: 2.7.2 > > Attachments: hdfs-9661-jstack.gif.png > > > We found a deadlock in dn.FsDatasetImpl between moveBlockAcrossStorage and > createRbw from rpc call: replaceBlock/writeBlock. The dn's jstack result is > !hdfs-9661-jstack.gif.png! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
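The deadlock described in this issue is the classic lock-ordering kind: two code paths each take the dataset-wide lock and a second lock, but in opposite orders, so a concurrent replaceBlock and writeBlock can each hold one lock while waiting on the other. A toy sketch of the safe pattern, with illustrative names rather than the real FsDatasetImpl fields:

```java
public class LockOrderSketch {
    private final Object datasetLock = new Object();
    private final Object volumeLock = new Object();
    private int completed = 0;

    // Both entry points take the two locks in the SAME global order
    // (dataset first, then volume). The reported deadlock arises when one
    // path reverses the order: each thread then holds one lock and waits
    // forever on the other.
    public void moveBlockAcrossStorage() {
        synchronized (datasetLock) {
            synchronized (volumeLock) {
                completed++;   // stand-in for the real block-move work
            }
        }
    }

    public void createRbw() {
        synchronized (datasetLock) {   // same order; never volume-first
            synchronized (volumeLock) {
                completed++;   // stand-in for creating the rbw replica
            }
        }
    }

    public int completed() { return completed; }
}
```

Patches for this class of bug typically either impose one global acquisition order, as above, or shrink one critical section so the second lock is never held inside the first.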
[jira] [Commented] (HDFS-9653) Expose the number of blocks pending deletion through dfsadmin report command
[ https://issues.apache.org/jira/browse/HDFS-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106311#comment-15106311 ] Weiwei Yang commented on HDFS-9653: --- Thanks Eric. I just submitted a patch for branch-2. > Expose the number of blocks pending deletion through dfsadmin report command > > > Key: HDFS-9653 > URL: https://issues.apache.org/jira/browse/HDFS-9653 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Affects Versions: 2.7.1 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-9653-branch-2.001.patch, HDFS-9653.001.patch > > > HDFS-5986 adds *Number of Blocks Pending Deletion* on namenode UI and JMX, > propose to expose this from hdfs dfsadmin -report as well. This is useful > when hadoop admin was not able to access UI (e.g on cloud), he/she can > directly use command to retrieve this information. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
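Since HDFS-5986 already surfaces the counter through NameNode JMX, it can be read over the NameNode's /jmx servlet while the dfsadmin change is pending. A hedged sketch of building that query URL (bean and attribute names should be verified against your Hadoop version; PendingDeletionBlocks lives on the FSNamesystem bean):

```java
import java.net.URI;

public class PendingDeletionQuery {
    // The NameNode web server serves metrics as JSON at /jmx; the ?qry=
    // parameter filters the output to a single MBean. The returned JSON
    // includes the PendingDeletionBlocks attribute added by HDFS-5986.
    public static URI jmxQuery(String host, int httpPort) {
        return URI.create("http://" + host + ":" + httpPort
            + "/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem");
    }
}
```

For example, `jmxQuery("nn1.example.com", 50070)` (50070 being the default 2.x NameNode HTTP port) yields a URL that curl or any HTTP client can fetch.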
[jira] [Commented] (HDFS-9534) Add CLI command to clear storage policy from a path.
[ https://issues.apache.org/jira/browse/HDFS-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106366#comment-15106366 ] Jing Zhao commented on HDFS-9534: - Currently an unspecified policy means using the default policy. Thus I do not think we need an explicit policy named "UNSPECIFIED". In the meanwhile, what should the semantics of the remove op be? If we have set a storage policy on "/foo", then should we allow the user to apply the command on "/foo/bar" if bar is not associated with any explicit policy? Or what if we have nested policy settings? I think we may need to list all the scenarios and clearly define their semantics first. > Add CLI command to clear storage policy from a path. > > > Key: HDFS-9534 > URL: https://issues.apache.org/jira/browse/HDFS-9534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Reporter: Chris Nauroth >Assignee: Xiaobing Zhou > Attachments: HDFS-9534.001.patch > > > The {{hdfs storagepolicies}} command has sub-commands for > {{-setStoragePolicy}} and {{-getStoragePolicy}} on a path. However, there is > no {{-removeStoragePolicy}} to remove a previously set storage policy on a > path. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
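The inheritance question raised in this comment can be made concrete with a toy resolver: an unspecified policy falls back to the nearest ancestor with an explicit policy, else the cluster default, and a remove op simply deletes the explicit entry so resolution flows through the ancestors again. This is an illustrative model of the semantics under discussion, not HDFS code:

```java
import java.util.HashMap;
import java.util.Map;

public class StoragePolicyResolver {
    static final String DEFAULT_POLICY = "HOT";   // assumed cluster default
    private final Map<String, String> explicit = new HashMap<>();

    void setPolicy(String path, String policy) { explicit.put(path, policy); }

    // Removing just drops the explicit entry; the path then inherits again.
    void removePolicy(String path) { explicit.remove(path); }

    // Nearest explicitly-set ancestor wins; otherwise the default applies.
    String effectivePolicy(String path) {
        for (String p = path; p != null; p = parent(p)) {
            String pol = explicit.get(p);
            if (pol != null) return pol;
        }
        return DEFAULT_POLICY;
    }

    private static String parent(String path) {
        if (path.equals("/")) return null;
        int i = path.lastIndexOf('/');
        return i == 0 ? "/" : path.substring(0, i);
    }
}
```

Under this model, removing the policy on "/foo/bar" when only "/foo" has one set is a no-op, and with nested settings the deeper entry shadows the shallower one until it is removed.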
[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
[ https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ade updated HDFS-9661: -- Description: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is was: We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is !image-hdfs-9661-jstack.png|align=right, vspace=4! > Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw > - > > Key: HDFS-9661 > URL: https://issues.apache.org/jira/browse/HDFS-9661 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0, 2.8.0, 2.7.1, 2.7.2 >Reporter: ade >Assignee: ade > Fix For: 2.7.2 > > > We found a deadlock in dn.FsDatasetImpl. The dn's jstack result is -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
[ https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ade updated HDFS-9661: -- Attachment: HDFS-9661.0.patch > Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw > - > > Key: HDFS-9661 > URL: https://issues.apache.org/jira/browse/HDFS-9661 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0, 2.8.0, 2.7.1, 2.7.2 >Reporter: ade >Assignee: ade > Fix For: 2.7.2 > > Attachments: HDFS-9661.0.patch, hdfs-9661-jstack.gif.png > > > We found a deadlock in dn.FsDatasetImpl between moveBlockAcrossStorage and > createRbw from rpc call: replaceBlock/writeBlock. The dn's jstack result is > !hdfs-9661-jstack.gif.png! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9645) DiskBalancer : Add Query RPC
[ https://issues.apache.org/jira/browse/HDFS-9645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105783#comment-15105783 ] Hadoop QA commented on HDFS-9645: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 29s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s {color} | {color:green} HDFS-1312 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s {color} | {color:green} HDFS-1312 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s {color} | {color:green} HDFS-1312 passed with JDK v1.8.0_66 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s {color} | {color:green} HDFS-1312 passed with JDK v1.7.0_91 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 13s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 55s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} asflicense {color} |
[jira] [Commented] (HDFS-9534) Add CLI command to clear storage policy from a path.
[ https://issues.apache.org/jira/browse/HDFS-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105854#comment-15105854 ] Arpit Agarwal commented on HDFS-9534: - Hi Walter, Xiaobing, this approach looks fine to me. I see no harm in exposing UNSPECIFIED_STORAGE_POLICY_NAME. It does simplify the implementation a bit. [~szetszwo]/[~jingzhao], do you have any opinion on this approach? > Add CLI command to clear storage policy from a path. > > > Key: HDFS-9534 > URL: https://issues.apache.org/jira/browse/HDFS-9534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Reporter: Chris Nauroth >Assignee: Xiaobing Zhou > Attachments: HDFS-9534.001.patch > > > The {{hdfs storagepolicies}} command has sub-commands for > {{-setStoragePolicy}} and {{-getStoragePolicy}} on a path. However, there is > no {{-removeStoragePolicy}} to remove a previously set storage policy on a > path. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases
[ https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105940#comment-15105940 ] Ruslan Dautkhanov commented on HDFS-6255: - Hi Chris Nauroth, > In its default configuration, fuse mounts are only accessible by one user: > the user who performed the mount We have a kerberized cluster, and hdfs fuse mounts act as whichever user accesses the mount; Kerberos authentication works properly. However, we still have the problem that hdfs fuse mounts don't honor ACLs: only the basic access permissions (the normal UNIX owner/group/other permissions) count. We still think there is a problem, and it would be great if somebody could have a look at this bug. Thank you. > fuse_dfs will not adhere to ACL permissions in some cases > - > > Key: HDFS-6255 > URL: https://issues.apache.org/jira/browse/HDFS-6255 > Project: Hadoop HDFS > Issue Type: Bug > Components: fuse-dfs >Affects Versions: 3.0.0, 2.4.0 >Reporter: Stephen Chu >Assignee: Chris Nauroth > > As hdfs user, I created a directory /tmp/acl_dir/ and set permissions to 700. > Then I set a new acl group:jenkins:rwx on /tmp/acl_dir. > {code} > [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -getfacl /tmp/acl_dir > # file: /tmp/acl_dir > # owner: hdfs > # group: supergroup > user::rwx > group::--- > group:jenkins:rwx > mask::rwx > other::--- > {code} > Through the FsShell, the jenkins user can list /tmp/acl_dir as well as create > a file and directory inside. 
> {code} > [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -touchz /tmp/acl_dir/testfile1 > [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -mkdir /tmp/acl_dir/testdir1 > hdfs dfs -ls /tmp/acl[jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -ls /tmp/acl_dir/ > Found 2 items > drwxr-xr-x - jenkins supergroup 0 2014-04-17 19:11 > /tmp/acl_dir/testdir1 > -rw-r--r-- 1 jenkins supergroup 0 2014-04-17 19:11 > /tmp/acl_dir/testfile1 > [jenkins@hdfs-vanilla-1 ~]$ > {code} > However, as the same jenkins user, when I try to cd into /tmp/acl_dir using a > fuse_dfs mount, I get permission denied. Same permission denied when I try to > create or list files. > {code} > [jenkins@hdfs-vanilla-1 tmp]$ ls -l > total 16 > drwxrwx--- 4 hdfs nobody 4096 Apr 17 19:11 acl_dir > drwx-- 2 hdfs nobody 4096 Apr 17 18:30 acl_dir_2 > drwxr-xr-x 3 mapred nobody 4096 Mar 11 03:53 mapred > drwxr-xr-x 4 jenkins nobody 4096 Apr 17 07:25 testcli > -rwx-- 1 hdfs nobody 0 Apr 7 17:18 tf1 > [jenkins@hdfs-vanilla-1 tmp]$ cd acl_dir > bash: cd: acl_dir: Permission denied > [jenkins@hdfs-vanilla-1 tmp]$ touch acl_dir/testfile2 > touch: cannot touch `acl_dir/testfile2': Permission denied > [jenkins@hdfs-vanilla-1 tmp]$ mkdir acl_dir/testdir2 > mkdir: cannot create directory `acl_dir/testdir2': Permission denied > [jenkins@hdfs-vanilla-1 tmp]$ > {code} > The fuse_dfs debug output doesn't show any error for the above operations: > {code} > unique: 18, opcode: OPENDIR (27), nodeid: 2, insize: 48 >unique: 18, success, outsize: 32 > unique: 19, opcode: READDIR (28), nodeid: 2, insize: 80 > readdir[0] from 0 >unique: 19, success, outsize: 312 > unique: 20, opcode: GETATTR (3), nodeid: 2, insize: 56 > getattr /tmp >unique: 20, success, outsize: 120 > unique: 21, opcode: READDIR (28), nodeid: 2, insize: 80 >unique: 21, success, outsize: 16 > unique: 22, opcode: RELEASEDIR (29), nodeid: 2, insize: 64 >unique: 22, success, outsize: 16 > unique: 23, opcode: GETATTR (3), nodeid: 2, insize: 56 > getattr /tmp >unique: 23, success, outsize: 120 > unique: 24, 
opcode: GETATTR (3), nodeid: 3, insize: 56 > getattr /tmp/acl_dir >unique: 24, success, outsize: 120 > unique: 25, opcode: GETATTR (3), nodeid: 3, insize: 56 > getattr /tmp/acl_dir >unique: 25, success, outsize: 120 > unique: 26, opcode: GETATTR (3), nodeid: 3, insize: 56 > getattr /tmp/acl_dir >unique: 26, success, outsize: 120 > unique: 27, opcode: GETATTR (3), nodeid: 3, insize: 56 > getattr /tmp/acl_dir >unique: 27, success, outsize: 120 > unique: 28, opcode: GETATTR (3), nodeid: 3, insize: 56 > getattr /tmp/acl_dir >unique: 28, success, outsize: 120 > {code} > In other scenarios, ACL permissions are enforced successfully. For example, > as hdfs user I create /tmp/acl_dir_2 and set permissions to 777. I then set > the acl user:jenkins:--- on the directory. On the fuse mount, I am not able > to ls, mkdir, or touch to that directory as jenkins user. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
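The behavior gap in this report can be stated as the ACL evaluation order that FsShell applies but fuse_dfs apparently skipped: owner entry first, then a named-user entry, then the owning and named group entries, then other, with every entry except owner and other filtered through the mask. A simplified, illustrative checker (permission bits r=4, w=2, x=1; not the actual HDFS implementation):

```java
import java.util.Map;
import java.util.Set;

public class AclCheckSketch {
    // Returns whether "user" (with the given group memberships) holds all
    // of the requested bits. Group entries that match but deny access
    // short-circuit to a denial, as in POSIX ACL semantics.
    public static boolean hasAccess(String user, Set<String> groups, int want,
            String owner, int ownerPerm,
            Map<String, Integer> namedUsers,
            String owningGroup, int groupPerm,
            Map<String, Integer> namedGroups,
            int mask, int otherPerm) {
        if (user.equals(owner)) return (ownerPerm & want) == want;
        Integer u = namedUsers.get(user);
        if (u != null) return (u & mask & want) == want;   // mask applies
        boolean inSomeGroup = false;
        if (groups.contains(owningGroup)) {
            inSomeGroup = true;
            if ((groupPerm & mask & want) == want) return true;
        }
        for (Map.Entry<String, Integer> e : namedGroups.entrySet()) {
            if (groups.contains(e.getKey())) {
                inSomeGroup = true;
                if ((e.getValue() & mask & want) == want) return true;
            }
        }
        if (inSomeGroup) return false;   // a group entry matched but denied
        return (otherPerm & want) == want;
    }
}
```

Plugging in the /tmp/acl_dir ACL from the report (owner hdfs rwx, group::---, group:jenkins:rwx, mask rwx, other ---), the jenkins user is granted rwx through the named-group entry, which matches the FsShell behavior and contradicts the fuse_dfs denial.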