[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error
[ https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656363#comment-15656363 ] Wei-Chiu Chuang commented on HDFS-11056: The warnings and test errors look unrelated. > Concurrent append and read operations lead to checksum error > > > Key: HDFS-11056 > URL: https://issues.apache.org/jira/browse/HDFS-11056 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, httpfs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, > HDFS-11056.branch-2.7.patch, HDFS-11056.branch-2.patch, > HDFS-11056.reproduce.patch > > > If there are two clients, one of them open-append-closes a file continuously, > while the other open-read-closes the same file continuously, the reader > eventually gets a checksum error in the data read. > On my local Mac, it takes a few minutes to produce the error. This happens to > httpfs clients, but there's no reason not to believe this happens to any append > clients. > I have a unit test that demonstrates the checksum error. Will attach later. 
> Relevant log: > {quote} > 2016-10-25 15:34:45,153 INFO audit - allowed=true ugi=weichiu > (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/tmp/bar.txt > dst=null perm=null proto=rpc > 2016-10-25 15:34:45,155 INFO DataNode - Receiving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: > /127.0.0.1:51130 dest: /127.0.0.1:50131 > 2016-10-25 15:34:45,155 INFO FsDatasetImpl - Appending to FinalizedReplica, > blk_1073741825_1182, FINALIZED > getNumBytes() = 182 > getBytesOnDisk() = 182 > getVisibleLength()= 182 > getVolume() = > /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1 > getBlockURI() = > file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825 > 2016-10-25 15:34:45,167 INFO DataNode - opReadBlock > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > 2016-10-25 15:34:45,167 WARN DataNode - > DatanodeRegistration(127.0.0.1:50131, > datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, > infoSecurePort=0, ipcPort=50134, > storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got > exception while serving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, > newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197) > 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error > processing READ_BLOCK operation src: /127.0.0.1:51121 dst: /127.0.0.1:50131 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success > 2016-10-25 15:34:45,170 WARN DFSClient - Found Checksum error for >
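The failure mode in this report can be boiled down to a JDK-only sketch (illustrative, not Hadoop code): a block file and its separately stored checksum are updated non-atomically by an append, so a concurrent reader can end up verifying the new data bytes against a stale checksum.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Illustrative sketch of the append/read race: the reader holds the checksum
// computed when the replica was finalized, while the appender has already
// lengthened the data, so verification of the newly read bytes fails.
public class ChecksumRaceSketch {
    // CRC32 stands in here for HDFS's per-chunk block checksums.
    static long crc(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data);
        return c.getValue();
    }

    public static void main(String[] args) {
        byte[] finalized = "hello".getBytes(StandardCharsets.UTF_8);
        long storedCrc = crc(finalized); // checksum persisted at finalize time

        // A concurrent appender extends the block's data...
        byte[] appended = "hello world".getBytes(StandardCharsets.UTF_8);

        // ...and the reader verifies the longer data against the stale checksum.
        System.out.println(crc(appended) == storedCrc ? "checksum ok" : "checksum error");
    }
}
```

Running the sketch prints "checksum error", mirroring the DFSClient warning in the log above; the real race additionally involves the replica's on-disk meta file and genstamp bumps (blk_1073741825_1182 => _1183).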
[jira] [Commented] (HDFS-10930) Refactor: Wrap Datanode IO related operations
[ https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656266#comment-15656266 ] Hadoop QA commented on HDFS-10930: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 18 new + 482 unchanged - 12 fixed = 500 total (was 494) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 1 line(s) with tabs. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 39s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 40s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 78m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Non-virtual method call in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File, long) passes null for non-null parameter of new org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaInputStreams(InputStream, InputStream, FsVolumeReference) At BlockPoolSlice.java:long) passes null for non-null parameter of new org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaInputStreams(InputStream, InputStream, FsVolumeReference) At BlockPoolSlice.java:[line 720] | | Failed junit tests | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.TestReplaceDatanodeOnFailure | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement | | | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | | hadoop.hdfs.TestDataTransferProtocol | | | hadoop.hdfs.TestReadWhileWriting | | | hadoop.fs.TestSWebHdfsFileContextMainOperations | | | hadoop.fs.TestHDFSFileContextMainOperations | | | hadoop.hdfs.TestFileAppend2 | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory | | | hadoop.hdfs.TestGetFileChecksum | | | hadoop.hdfs.server.namenode.ha.TestHAAppend | | | hadoop.hdfs.server.namenode.TestAddBlock | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles | | | hadoop.hdfs.TestFileAppendRestart | |
[jira] [Comment Edited] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656197#comment-15656197 ] Jingcheng Du edited comment on HDFS-9668 at 11/11/16 5:29 AM: -- Thanks a lot for the comments [~arpitagarwal]! bq. Can we separate the block-lock addition from the read-write lock? In the first patch we should focus on converting the exclusive lock to a read-write lock only. I can help split up the patch if you'd like. Yes, we can, but I am afraid there might be lots of write locks and only a few read locks in the first patch, and we might have to do similar things in the second patch, like what we do now -change part of the write lock to read lock- in this JIRA. The purpose of this existing patch is to isolate the locks between blocks in write-heavy cases, so we need a block-related lock to do that. But the operations on a block pool cannot use this block-related lock, so we need a read-write lock to synchronize the operations on block pools and blocks - write locks for operations on block pools and read locks for operations on blocks. I guess splitting this into two patches can reduce the work in the first patch, but we probably still have to do the same thing as this patch in the second patch. bq. DirectoryScanner.java:391 - We can just get the read lock here. This phase of the directory scanner makes no changes to the DataNode state. Right, this doesn't change the DN state, but what if other places change blocks? Should we protect the read code from that? bq. FsDatasetImpl.java:1117 - This should get the write lock as it calls appendImpl which modifies the volumeMap. bq. FsDatasetImpl.java:1268 - This too calls appendImpl, so it should get the write lock. The volume map has a mutex to synchronize the operations on the map, I guess an outside read lock and a block-related lock are enough? Thanks! was (Author: jingcheng...@intel.com): Thanks a lot for the comments [~arpitagarwal]! bq. 
Can we separate the block-lock addition from the read-write lock? In the first patch we should focus on converting the exclusive lock to a read-write lock only. I can help split up the patch if you'd like. Yes, we can, but I am afraid there might be lots of write locks and only a few read locks in the first patch, and we might have to do similar things in the second patch, like what we do now -change part of the write lock to read lock- in this JIRA. The purpose of this existing patch is to isolate the locks between blocks in write-heavy cases, so we need a block-related lock to do that. But the operations on a block pool cannot use this block-related lock, so we need a read-write lock to synchronize the operations on block pools and blocks - write locks for operations on block pools and read locks for operations on blocks. I guess splitting this into two patches can reduce the work in the first patch, but we probably still have to do the same thing as this patch in the second patch. bq. DirectoryScanner.java:391 - We can just get the read lock here. This phase of the directory scanner makes no changes to the DataNode state. Right, this doesn't change the DN state, but what if other places change blocks? Should we protect the read code from that? bq. FsDatasetImpl.java:1117 - This should get the write lock as it calls appendImpl which modifies the volumeMap. bq. FsDatasetImpl.java:1268 - This too calls appendImpl, so it should get the write lock. The volume map has a mutex to synchronize the operations on the map, I guess an outside read lock is enough? Thanks! 
> Optimize the locking in FsDatasetImpl > - > > Key: HDFS-9668 > URL: https://issues.apache.org/jira/browse/HDFS-9668 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Jingcheng Du >Assignee: Jingcheng Du > Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, > HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, > HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, > HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, > HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, > HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, > HDFS-9668-23.patch, HDFS-9668-23.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, > HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, > HDFS-9668-9.patch, execution_time.png > > > During the HBase test on a tiered storage of HDFS (WAL is stored in > SSD/RAMDISK, and all other files are stored in HDD), we observe many > long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part > of the jstack result: > {noformat} > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48521 [Receiving block >
[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656197#comment-15656197 ] Jingcheng Du commented on HDFS-9668: Thanks a lot for the comments [~arpitagarwal]! bq. Can we separate the block-lock addition from the read-write lock? In the first patch we should focus on converting the exclusive lock to a read-write lock only. I can help split up the patch if you'd like. Yes, we can, but I am afraid there might be lots of write locks and only a few read locks in the first patch, and we might have to do similar things in the second patch, like what we do now -change part of the write lock to read lock- in this JIRA. The purpose of this existing patch is to isolate the locks between blocks in write-heavy cases, so we need a block-related lock to do that. But the operations on a block pool cannot use this block-related lock, so we need a read-write lock to synchronize the operations on block pools and blocks - write locks for operations on block pools and read locks for operations on blocks. I guess splitting this into two patches can reduce the work in the first patch, but we probably still have to do the same thing as this patch in the second patch. bq. DirectoryScanner.java:391 - We can just get the read lock here. This phase of the directory scanner makes no changes to the DataNode state. Right, this doesn't change the DN state, but what if other places change blocks? Should we protect the read code from that? bq. FsDatasetImpl.java:1117 - This should get the write lock as it calls appendImpl which modifies the volumeMap. bq. FsDatasetImpl.java:1268 - This too calls appendImpl, so it should get the write lock. The volume map has a mutex to synchronize the operations on the map, I guess an outside read lock is enough? Thanks! 
> Optimize the locking in FsDatasetImpl > - > > Key: HDFS-9668 > URL: https://issues.apache.org/jira/browse/HDFS-9668 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Jingcheng Du >Assignee: Jingcheng Du > Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, > HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, > HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, > HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, > HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, > HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, > HDFS-9668-23.patch, HDFS-9668-23.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, > HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, > HDFS-9668-9.patch, execution_time.png > > > During the HBase test on a tiered storage of HDFS (WAL is stored in > SSD/RAMDISK, and all other files are stored in HDD), we observe many > long-time BLOCKED threads on FsDatasetImpl in DataNode. 
The following is part > of the jstack result: > {noformat} > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48521 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread > t@93336 >java.lang.Thread.State: BLOCKED > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:) > - waiting to lock <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread > t@93335 >java.lang.Thread.State: RUNNABLE > at java.io.UnixFileSystem.createFileExclusively(Native Method) > at java.io.File.createNewFile(File.java:1012) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66) > at >
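The two-level scheme discussed in this thread - a dataset-wide read-write lock for block-pool-level operations plus per-block locks, so writers on different blocks stop serializing on a single monitor as in the jstack above - can be sketched as follows (JDK-only, illustrative; the names are not the actual FsDatasetImpl fields):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the proposed locking: block-pool operations take the dataset
// write lock exclusively; per-block operations take the dataset read lock
// (shared) plus an exclusive lock scoped to their own block, so two
// DataXceivers writing different blocks proceed concurrently.
public class DatasetLockSketch {
    private final ReentrantReadWriteLock datasetLock = new ReentrantReadWriteLock();
    // A real implementation would bound/evict this map as replicas go away.
    private final ConcurrentHashMap<String, Object> blockLocks = new ConcurrentHashMap<>();

    // Block-pool-level change (e.g. add/remove a pool): exclusive access.
    public void poolOperation(Runnable op) {
        datasetLock.writeLock().lock();
        try {
            op.run();
        } finally {
            datasetLock.writeLock().unlock();
        }
    }

    // Per-block operation (e.g. createRbw, append): shared dataset lock
    // plus a per-block monitor.
    public void blockOperation(String blockId, Runnable op) {
        datasetLock.readLock().lock();
        try {
            synchronized (blockLocks.computeIfAbsent(blockId, k -> new Object())) {
                op.run();
            }
        } finally {
            datasetLock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        DatasetLockSketch ds = new DatasetLockSketch();
        int[] count = {0};
        ds.blockOperation("blk_1073779271", () -> count[0]++);
        ds.poolOperation(() -> count[0]++);
        System.out.println(count[0]);
    }
}
```

This mirrors the trade-off debated above: splitting the patch into "read-write lock only" then "per-block locks" means the first patch is mostly write locks, and the read-lock conversion work still lands in the second patch.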
[jira] [Commented] (HDFS-11116) Fix javac warnings caused by deprecation of APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656147#comment-15656147 ] Hudson commented on HDFS-11116: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10816 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10816/]) HDFS-11116. Fix javac warnings caused by deprecation of APIs in (aajisaka: rev 8848a8a76c7eadebb15b347171057f906f6fc69b) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java > Fix javac warnings caused by deprecation of APIs in TestViewFsDefaultValue > -- > > Key: HDFS-11116 > URL: https://issues.apache.org/jira/browse/HDFS-11116 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch, > HDFS-11116.003.patch > > > There were some Jenkins warnings related to TestViewFsDefaultValue in each > Jenkins build. > {code} > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9] > [deprecation] getDefaultBlockSize() in FileSystem has been deprecated > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9] > [deprecation] getDefaultReplication() in FileSystem has been deprecated > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43] > [deprecation] getServerDefaults() in FileSystem has been deprecated > {code} > We should use the method {{getDefaultBlockSize(Path)}} to replace the > deprecated API {{getDefaultBlockSize}}. The same applies to > {{getDefaultReplication}} and {{getServerDefaults}}. 
The {{Path}} can be a > not-in-mountpoint path in the filesystem, to trigger the > {{NotInMountpointException}} in the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
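The fix itself is mechanical: move callers from the deprecated zero-argument FileSystem methods to the Path-taking overloads. A JDK-only mock of the pattern (FsStub is hypothetical and only mirrors the shape of the API; it is not Hadoop code):

```java
// Illustrates migrating from a deprecated no-arg method to its
// parameterized replacement, as done in TestViewFsDefaultValue.
public class DeprecationFixSketch {
    static class FsStub {
        @Deprecated
        long getDefaultBlockSize() { return 128L << 20; }

        // Path-aware replacement: a real ViewFileSystem resolves the given
        // path to a mount point before answering (and throws
        // NotInMountpointException for a path outside any mount point).
        long getDefaultBlockSize(String path) { return 128L << 20; }
    }

    public static void main(String[] args) {
        FsStub fs = new FsStub();
        // Before (emits a javac [deprecation] warning):
        //   long size = fs.getDefaultBlockSize();
        // After:
        long size = fs.getDefaultBlockSize("/tmp/file");
        System.out.println(size);
    }
}
```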
[jira] [Updated] (HDFS-11116) Fix javac warnings caused by deprecation of APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11116: - Resolution: Fixed Fix Version/s: 3.0.0-alpha2 Status: Resolved (was: Patch Available) Committed this to trunk. Thanks [~linyiqun] for the contribution. > Fix javac warnings caused by deprecation of APIs in TestViewFsDefaultValue > -- > > Key: HDFS-11116 > URL: https://issues.apache.org/jira/browse/HDFS-11116 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch, > HDFS-11116.003.patch > > > There were some Jenkins warnings related to TestViewFsDefaultValue in each > Jenkins build. > {code} > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9] > [deprecation] getDefaultBlockSize() in FileSystem has been deprecated > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9] > [deprecation] getDefaultReplication() in FileSystem has been deprecated > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43] > [deprecation] getServerDefaults() in FileSystem has been deprecated > {code} > We should use the method {{getDefaultBlockSize(Path)}} to replace the > deprecated API {{getDefaultBlockSize}}. The same applies to > {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a > not-in-mountpoint path in the filesystem, to trigger the > {{NotInMountpointException}} in the test.
[jira] [Updated] (HDFS-10930) Refactor: Wrap Datanode IO related operations
[ https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-10930: -- Attachment: HDFS-10930.04.patch Updated the patch, cleaned up the instrumentation from the refactor work. > Refactor: Wrap Datanode IO related operations > - > > Key: HDFS-10930 > URL: https://issues.apache.org/jira/browse/HDFS-10930 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HDFS-10930.01.patch, HDFS-10930.02.patch, > HDFS-10930.03.patch, HDFS-10930.04.patch > > > Datanode IO (Disk/Network) related operations and instrumentations are > currently scattered across many classes such as DataNode.java, BlockReceiver.java, > BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, > DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, > LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. > This ticket is opened to consolidate IO related operations for easy > instrumentation, metrics collection, logging and troubleshooting.
[jira] [Updated] (HDFS-11116) Fix javac warnings caused by deprecation of APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11116: - Hadoop Flags: Incompatible change,Reviewed (was: Incompatible change) Summary: Fix javac warnings caused by deprecation of APIs in TestViewFsDefaultValue (was: Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue) > Fix javac warnings caused by deprecation of APIs in TestViewFsDefaultValue > -- > > Key: HDFS-11116 > URL: https://issues.apache.org/jira/browse/HDFS-11116 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch, > HDFS-11116.003.patch > > > There were some Jenkins warnings related to TestViewFsDefaultValue in each > Jenkins build. > {code} > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9] > [deprecation] getDefaultBlockSize() in FileSystem has been deprecated > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9] > [deprecation] getDefaultReplication() in FileSystem has been deprecated > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43] > [deprecation] getServerDefaults() in FileSystem has been deprecated > {code} > We should use the method {{getDefaultBlockSize(Path)}} to replace the > deprecated API {{getDefaultBlockSize}}. The same applies to > {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a > not-in-mountpoint path in the filesystem, to trigger the > {{NotInMountpointException}} in the test.
[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656113#comment-15656113 ] Akira Ajisaka commented on HDFS-11116: -- +1, checking this in. > Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue > - > > Key: HDFS-11116 > URL: https://issues.apache.org/jira/browse/HDFS-11116 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch, > HDFS-11116.003.patch > > > There were some Jenkins warnings related to TestViewFsDefaultValue in each > Jenkins build. > {code} > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9] > [deprecation] getDefaultBlockSize() in FileSystem has been deprecated > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9] > [deprecation] getDefaultReplication() in FileSystem has been deprecated > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43] > [deprecation] getServerDefaults() in FileSystem has been deprecated > {code} > We should use the method {{getDefaultBlockSize(Path)}} to replace the > deprecated API {{getDefaultBlockSize}}. The same applies to > {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a > not-in-mountpoint path in the filesystem, to trigger the > {{NotInMountpointException}} in the test.
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656044#comment-15656044 ] Hadoop QA commented on HDFS-11122: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 8s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 82m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11122 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838490/HDFS-11122.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 27a98012b379 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 93eeb13 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17525/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17525/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17525/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor >
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655995#comment-15655995 ] Hadoop QA commented on HDFS-11122: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 82m 52s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHDFS | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11122 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838487/HDFS-11122.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux cc2699c50e3a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 93eeb13 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17524/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17524/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17524/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments:
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655954#comment-15655954 ] Hadoop QA commented on HDFS-11122: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 78m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestDFSClientRetries | | | hadoop.hdfs.TestRenameWhileOpen | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11122 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838484/HDFS-11122.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 3620f54df601 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 93eeb13 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17523/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17523/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17523/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >
[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup
[ https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655948#comment-15655948 ] Hadoop QA commented on HDFS-11119: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch 
passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 397 unchanged - 1 fixed = 397 total (was 398) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}101m 32s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11119 | | GITHUB PR | https://github.com/apache/hadoop/pull/155 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux d6443baa1e89 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 93eeb13 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17522/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17522/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17522/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Support for parallel checking of StorageLocations on DataNode startup > - > > Key: HDFS-11119 > URL:
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11122: - Attachment: HDFS-11122.004.patch > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch, HDFS-11122.004.patch > > > After HDFS-11083, the test {{TestDFSAdmin}} fails sometimes due to timing out. > The stack > infos(https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the logic that waits for the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655889#comment-15655889 ] Yiqun Lin commented on HDFS-11122: -- Thanks [~liuml07], you are right. Reuploaded the v004 patch. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11122: - Attachment: (was: HDFS-11122.004.patch) > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655872#comment-15655872 ] Mingliang Liu commented on HDFS-11122: -- +1. Will commit this once there is a clean Jenkins run. A trivial comment: we can skip {{throwException}} and its assertion entirely; any other exception will fail the test anyway (with ChecksumException expected and ignored).
{code}
try {
  IOUtils.copyBytes(fs.open(file), new IOUtils.NullOutputStream(), conf, true);
  fail("Should have failed to read the file with corrupted blocks.");
} catch (ChecksumException ignored) {
  // expected exception reading corrupt blocks
}
{code}
> TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch, HDFS-11122.004.patch
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11122: - Attachment: (was: HDFS-11122.004.patch) > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch, HDFS-11122.004.patch
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11122: - Attachment: HDFS-11122.004.patch > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch, HDFS-11122.004.patch
[jira] [Commented] (HDFS-11128) CreateEditsLog throws NullPointerException
[ https://issues.apache.org/jira/browse/HDFS-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655832#comment-15655832 ] Hadoop QA commented on HDFS-11128: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestAppendSnapshotTruncate | | | hadoop.hdfs.tools.TestDFSAdmin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11128 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838468/HDFS-11128.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 0dcb3f810d43 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 93eeb13 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17521/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17521/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17521/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > CreateEditsLog throws NullPointerException > -- > > Key: HDFS-11128 > URL: https://issues.apache.org/jira/browse/HDFS-11128 > Project: Hadoop HDFS > Issue Type: Bug
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11122: - Attachment: HDFS-11122.004.patch Thanks [~liuml07] for the review and comments. Attached a new patch to address them; the wait interval is adjusted to 1000 in the latest patch. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch, HDFS-11122.004.patch
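The fix above raises the polling interval passed to {{GenericTestUtils.waitFor}} so the fixed overall timeout is spent on fewer, cheaper checks. As a self-contained illustration of that check/interval/timeout polling pattern (the class below is a minimal sketch mirroring the waitFor semantics, not Hadoop's actual implementation):

```java
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class WaitFor {
    // Poll `check` every `intervalMs` until it returns true, or fail once
    // `timeoutMs` elapses. Same shape as GenericTestUtils.waitFor(check,
    // checkEveryMillis, waitForMillis).
    public static void waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.get()) {
            if (System.currentTimeMillis() >= deadline) {
                throw new TimeoutException("Condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws Exception {
        // Condition becomes true on the third poll; the overall timeout must
        // comfortably exceed interval * expected number of polls.
        int[] polls = {0};
        waitFor(() -> ++polls[0] >= 3, 10, 1000);
        System.out.println("condition met after " + polls[0] + " polls");
    }
}
```

With a larger interval such as 1000 ms, each expensive check (here, querying the NameNode for the corrupt-block count) runs fewer times within the same overall test timeout.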
[jira] [Commented] (HDFS-11128) CreateEditsLog throws NullPointerException
[ https://issues.apache.org/jira/browse/HDFS-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655743#comment-15655743 ] Arpit Agarwal commented on HDFS-11128: -- +1 pending Jenkins. > CreateEditsLog throws NullPointerException > -- > > Key: HDFS-11128 > URL: https://issues.apache.org/jira/browse/HDFS-11128 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11128.000.patch > > > When trying to create edit logs through CreateEditsLog, the following > exception is encountered. > {quote} > Exception in thread "main" java.lang.NullPointerException > at java.io.File.<init>(File.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:343) > at > org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createStandaloneEditLog(FSImageTestUtil.java:200) > at > org.apache.hadoop.hdfs.server.namenode.CreateEditsLog.main(CreateEditsLog.java:205) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:239) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {quote} > This happens as Mockito is unable to access the package-protected method > _NNStorage#getStorageDirectory_ > We need to change the access of this method to _public_
[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655727#comment-15655727 ] Arpit Agarwal commented on HDFS-9668: - Hi [~jingcheng...@intel.com] I started taking a look at [the patch|https://issues.apache.org/jira/secure/attachment/12837702/HDFS-9668-23.patch]. A few early comments: # Can we separate the block-lock addition from the read-write lock? In the first patch we should focus on converting the exclusive lock to a read-write lock only. I can help split up the patch if you'd like. # DirectoryScanner.java:391 - We can just get the read lock here. This phase of the directory scanner makes no changes to the DataNode state. # FsDatasetImpl.java:1117 - This should get the write lock as it calls appendImpl which modifies the volumeMap. # FsDatasetImpl.java:1268 - This too calls appendImpl, so it should get the write lock. Still reviewing FsDatasetImpl further... Also you can probably hold off on the branch-2 patch until we finalize the trunk changes. 
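The review comments above describe the core pattern of the patch: read-only phases such as the directory scan take a shared read lock, while operations that mutate shared DataNode state like the volumeMap (append, createRbw) must take the exclusive write lock. A minimal sketch of that conversion with hypothetical names, not the actual FsDatasetImpl code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical stand-in for a dataset guarded by a read-write lock
// instead of a single exclusive monitor ("synchronized").
public class Dataset {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<Long, String> volumeMap = new HashMap<>();

    // Read-only scan: many readers may hold the read lock concurrently.
    public int scan() {
        lock.readLock().lock();
        try {
            return volumeMap.size();
        } finally {
            lock.readLock().unlock();
        }
    }

    // Mutating operation (e.g. append): needs the exclusive write lock
    // because it modifies the shared volumeMap.
    public void append(long blockId, String replica) {
        lock.writeLock().lock();
        try {
            volumeMap.put(blockId, replica);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        Dataset d = new Dataset();
        d.append(1073741825L, "rbw");
        System.out.println("entries=" + d.scan());
    }
}
```

This is why the review flags FsDatasetImpl.java:1117 and 1268: anything that reaches appendImpl mutates the map and must not run under only the read lock.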
> Optimize the locking in FsDatasetImpl > - > > Key: HDFS-9668 > URL: https://issues.apache.org/jira/browse/HDFS-9668 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Jingcheng Du >Assignee: Jingcheng Du > Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, > HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, > HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, > HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, > HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, > HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, > HDFS-9668-23.patch, HDFS-9668-23.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, > HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, > HDFS-9668-9.patch, execution_time.png > > > During the HBase test on a tiered storage of HDFS (WAL is stored in > SSD/RAMDISK, and all other files are stored in HDD), we observe many > long-time BLOCKED threads on FsDatasetImpl in DataNode. 
The following is part > of the jstack result: > {noformat} > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48521 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread > t@93336 >java.lang.Thread.State: BLOCKED > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:) > - waiting to lock <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread > t@93335 >java.lang.Thread.State: RUNNABLE > at java.io.UnixFileSystem.createFileExclusively(Native Method) > at java.io.File.createNewFile(File.java:1012) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286) > at >
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140) > - locked <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at >
[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup
[ https://issues.apache.org/jira/browse/HDFS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655671#comment-15655671 ] Hadoop QA commented on HDFS-9: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch 
passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 397 unchanged - 1 fixed = 397 total (was 398) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}100m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.tools.TestDFSAdmin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-9 | | GITHUB PR | https://github.com/apache/hadoop/pull/155 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux ee45691883b5 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 93eeb13 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17520/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17520/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17520/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Support for parallel checking of StorageLocations on DataNode startup > - > > Key: HDFS-9 >
[jira] [Updated] (HDFS-11128) CreateEditsLog throws NullPointerException
[ https://issues.apache.org/jira/browse/HDFS-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11128: -- Status: Patch Available (was: Open) > CreateEditsLog throws NullPointerException > -- > > Key: HDFS-11128 > URL: https://issues.apache.org/jira/browse/HDFS-11128 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11128.000.patch > > > When trying to create edit logs through CreateEditsLog, the following > exception is encountered. > {quote} > Exception in thread "main" java.lang.NullPointerException > at java.io.File.<init>(File.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:343) > at > org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createStandaloneEditLog(FSImageTestUtil.java:200) > at > org.apache.hadoop.hdfs.server.namenode.CreateEditsLog.main(CreateEditsLog.java:205) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:239) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {quote} > This happens because Mockito is unable to access the package-protected method > _NNStorage#getStorageDirectory_. > We need to change this method's access to _public_. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11128) CreateEditsLog throws NullPointerException
[ https://issues.apache.org/jira/browse/HDFS-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11128: -- Attachment: HDFS-11128.000.patch > CreateEditsLog throws NullPointerException > -- > > Key: HDFS-11128 > URL: https://issues.apache.org/jira/browse/HDFS-11128 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11128.000.patch > > > When trying to create edit logs through CreateEditsLog, the following > exception is encountered. > {quote} > Exception in thread "main" java.lang.NullPointerException > at java.io.File.<init>(File.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:343) > at > org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createStandaloneEditLog(FSImageTestUtil.java:200) > at > org.apache.hadoop.hdfs.server.namenode.CreateEditsLog.main(CreateEditsLog.java:205) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:239) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {quote} > This happens because Mockito is unable to access the package-protected method > _NNStorage#getStorageDirectory_. > We need to change this method's access to _public_. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11128) CreateEditsLog throws NullPointerException
Hanisha Koneru created HDFS-11128: - Summary: CreateEditsLog throws NullPointerException Key: HDFS-11128 URL: https://issues.apache.org/jira/browse/HDFS-11128 Project: Hadoop HDFS Issue Type: Bug Components: hdfs Reporter: Hanisha Koneru Assignee: Hanisha Koneru When trying to create edit logs through CreateEditsLog, the following exception is encountered. {quote} Exception in thread "main" java.lang.NullPointerException at java.io.File.<init>(File.java:415) at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:343) at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createStandaloneEditLog(FSImageTestUtil.java:200) at org.apache.hadoop.hdfs.server.namenode.CreateEditsLog.main(CreateEditsLog.java:205) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:239) at org.apache.hadoop.util.RunJar.main(RunJar.java:153) {quote} This happens because Mockito is unable to access the package-protected method _NNStorage#getStorageDirectory_. We need to change this method's access to _public_. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
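The underlying Java rule can be seen with plain reflection: a method declared with no access modifier has default (package-private) visibility, so code outside its package, including subclasses a mocking framework generates, cannot see or override it. A minimal illustration with hypothetical names (not the actual NNStorage class):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Hypothetical stand-in for NNStorage with a package-private accessor.
class Storage {
    // No modifier: package-private, invisible outside this package,
    // which is why a mock in another package cannot stub it.
    String getStorageDirectory() { return "/data/dfs/name"; }
}

public class Visibility {
    public static void main(String[] args) throws Exception {
        Method m = Storage.class.getDeclaredMethod("getStorageDirectory");
        // The PUBLIC modifier bit is not set on a package-private method.
        System.out.println("public=" + Modifier.isPublic(m.getModifiers()));
    }
}
```

Widening the method to `public` (as the patch proposes) sets that modifier bit and makes the method reachable from the test's package.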
[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655560#comment-15655560 ] Hadoop QA commented on HDFS-10872: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 24s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 35s{color} | {color:red} root in trunk failed. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 5m 59s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 59s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 44s{color} | {color:orange} root: The patch generated 2 new + 862 unchanged - 0 fixed = 864 total (was 862) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 44s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}109m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-10872 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838420/HDFS-10872.007.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 8c888dce6de0 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 89354f0 | | Default Java | 1.8.0_101 | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/17517/artifact/patchprocess/branch-compile-root.txt | | findbugs | v3.0.0 | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/17517/artifact/patchprocess/patch-compile-root.txt | | javac |
[jira] [Commented] (HDFS-4025) QJM: Sychronize past log segments to JNs that missed them
[ https://issues.apache.org/jira/browse/HDFS-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655527#comment-15655527 ] Hadoop QA commented on HDFS-4025: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 76m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker | | | hadoop.hdfs.tools.TestDFSAdmin | | | hadoop.fs.viewfs.TestViewFsHdfs | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-4025 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826701/HDFS-4025.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux d4d74aada462 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 89354f0 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17519/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17519/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17519/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > QJM: Sychronize past log segments to JNs that missed them > - > > Key: HDFS-4025 >
[jira] [Commented] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655496#comment-15655496 ] Hadoop QA commented on HDFS-11127: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 47s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 5s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11127 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838437/HDFS-11127-HDFS-7240.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml cc | | uname | Linux 536ac7ccd9d4 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 7b761f1 | | Default Java | 1.8.0_101 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17518/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17518/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Block Storage : add block storage service protocol > -- > > Key: HDFS-11127 > URL: https://issues.apache.org/jira/browse/HDFS-11127 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11127-HDFS-7240.001.patch, > HDFS-11127-HDFS-7240.002.patch > > > This JIRA adds the block service protocol. This protocol is exposed to clients for > volume operations including create, delete, info and list. Note that this > protocol has nothing to do with actual data read/write on a particular volume. > (Also note that the term "cblock" is the current term used to refer to the > block storage system.)
[jira] [Updated] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-10872: - Target Version/s: 2.7.4 Hadoop Flags: Reviewed Thanks Erik, +1 on v7 patch pending a successful Jenkins run (I just re-triggered it). Will also wait for a day before committing so others can comment. > Add MutableRate metrics for FSNamesystemLock operations > --- > > Key: HDFS-10872 > URL: https://issues.apache.org/jira/browse/HDFS-10872 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: FSLockPerf.java, HDFS-10872.000.patch, > HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, > HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, > HDFS-10872.007.patch > > > Add metrics for FSNamesystemLock operations to see, overall, how long each > operation is holding the lock for. Use MutableRate metrics for now. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
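A MutableRate-style metric aggregates, per named operation, how many samples were recorded and the total time they took, from which a mean hold time can be derived. A simplified stand-in for the idea (hypothetical class, not the actual Hadoop Metrics2 implementation) timing a lock-protected section:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a MutableRate-style metric: counts operations
// and accumulates how long each one held the lock.
public class RateMetric {
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong totalNanos = new AtomicLong();

    public void add(long elapsedNanos) {
        count.incrementAndGet();
        totalNanos.addAndGet(elapsedNanos);
    }

    // Mean hold time in microseconds across all recorded samples.
    public double meanMicros() {
        long n = count.get();
        return n == 0 ? 0.0 : (totalNanos.get() / 1000.0) / n;
    }

    public static void main(String[] args) {
        RateMetric m = new RateMetric();
        // Simulate timing two operations that hold the namesystem lock.
        for (int i = 0; i < 2; i++) {
            long start = System.nanoTime();
            // ... critical section under the lock would run here ...
            m.add(System.nanoTime() - start);
        }
        System.out.println("ops=" + m.count.get());
    }
}
```

Wrapping each lock release with a timing sample like this is what lets the metric report, per operation type, how long the lock was held overall.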
[jira] [Comment Edited] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655434#comment-15655434 ] Anu Engineer edited comment on HDFS-11127 at 11/10/16 10:51 PM: [~vagarychen] Sorry that was for patch v1. I will wait for v2 jenkins to complete. was (Author: anu): [~vagarychen] Tests failures does not seem to be related to this patch. I will commit this shortly. > Block Storage : add block storage service protocol > -- > > Key: HDFS-11127 > URL: https://issues.apache.org/jira/browse/HDFS-11127 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11127-HDFS-7240.001.patch, > HDFS-11127-HDFS-7240.002.patch > > > This JIRA adds block service protocol. This protocol is expose to client for > volume operations including create, delete, info and list. Note that this > protocol has nothing to do with actual data read/write on a particular volume. > (Also note that the term "cblock" is the current term used to refer to the > block storage system.) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655434#comment-15655434 ] Anu Engineer commented on HDFS-11127: - [~vagarychen] Test failures do not seem to be related to this patch. I will commit this shortly. > Block Storage : add block storage service protocol > -- > > Key: HDFS-11127 > URL: https://issues.apache.org/jira/browse/HDFS-11127 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11127-HDFS-7240.001.patch, > HDFS-11127-HDFS-7240.002.patch > > > This JIRA adds the block service protocol. This protocol is exposed to clients for > volume operations including create, delete, info and list. Note that this > protocol has nothing to do with actual data read/write on a particular volume. > (Also note that the term "cblock" is the current term used to refer to the > block storage system.) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup
[ https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655410#comment-15655410 ] Hadoop QA commented on HDFS-11119: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch 
passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 397 unchanged - 1 fixed = 397 total (was 398) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 11s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}101m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.server.datanode.checker.TestStorageLocationChecker | | | hadoop.hdfs.tools.TestDFSAdmin | | | hadoop.hdfs.TestEncryptionZones | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11119 | | GITHUB PR | https://github.com/apache/hadoop/pull/155 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux e592bb7696fb 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 89354f0 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/17516/artifact/patchprocess/whitespace-eol.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17516/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17516/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17516/console |
[jira] [Updated] (HDFS-8307) Spurious DNS Queries from hdfs shell
[ https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-8307: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) [~aaperezl] Thank you for your contribution. I have committed this to branch-2.7. > Spurious DNS Queries from hdfs shell > > > Key: HDFS-8307 > URL: https://issues.apache.org/jira/browse/HDFS-8307 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.1 >Reporter: Anu Engineer >Assignee: Andres Perez >Priority: Trivial > Labels: ha > Fix For: 2.7.4 > > Attachments: HDFS-8307-branch-2.7.patch, HDFS-8307.001.patch > > > With HA configured the hdfs shell (org.apache.hadoop.fs.FsShell) seems to > issue a DNS query for the cluster name. If fs.defaultFS is set to > hdfs://mycluster, then the shell seems to issue a DNS query for > mycluster.FQDN or mycluster. > Since mycluster is not a machine name, the DNS query always fails with > "DNS 85 Standard query response 0x2aeb No such name" > Repro Steps: > # Set up an HA cluster > # Log on to any node > # Run wireshark monitoring port 53 - "sudo tshark 'port 53'" > # Run "sudo -u hdfs hdfs dfs -ls /" > # You should be able to see DNS queries to mycluster.FQDN in wireshark -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
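[Editor's note] For context on why "mycluster" should never hit DNS at all: in an HA setup the value of fs.defaultFS is a logical nameservice ID that the HDFS client itself maps to real NameNode addresses via configuration, roughly as in the sketch below. The property keys are the standard HDFS HA configuration keys; the host names and ports are placeholders, not taken from this issue.

```xml
<!-- Sketch of a typical HA client configuration (hdfs-site.xml). -->
<!-- "mycluster" is a logical ID resolved inside the HDFS client, so it -->
<!-- should never reach DNS; nn1/nn2 host names are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

The bug report amounts to the client falling through to a host lookup on the nameservice ID instead of short-circuiting to this mapping.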
[jira] [Commented] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655326#comment-15655326 ] Hadoop QA commented on HDFS-11127: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11127 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838427/HDFS-11127-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml cc | | uname | Linux 3d8c55098232 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 7b761f1 | | Default Java | 1.8.0_101 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17515/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17515/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17515/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Block Storage : add block storage service protocol > -- > > Key: HDFS-11127 > URL: https://issues.apache.org/jira/browse/HDFS-11127 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11127-HDFS-7240.001.patch, > HDFS-11127-HDFS-7240.002.patch > > > This JIRA adds block service protocol. This protocol is expose
[jira] [Updated] (HDFS-11100) Recursively deleting file protected by sticky bit should fail
[ https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-11100: -- Status: Patch Available (was: In Progress) Rekick the tests. > Recursively deleting file protected by sticky bit should fail > - > > Key: HDFS-11100 > URL: https://issues.apache.org/jira/browse/HDFS-11100 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Critical > Labels: permissions > Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, > HDFS-11100.002.patch, hdfs_cmds > > > Recursively deleting a directory that contains files or directories protected > by sticky bit should fail but it doesn't in HDFS. In the case below, > {{/tmp/test/sticky_dir/f2}} is protected by sticky bit, thus recursive > deleting {{/tmp/test/sticky_dir}} should fail. > {noformat} > + hdfs dfs -ls -R /tmp/test > drwxrwxrwt - jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir > -rwxrwxrwx 1 jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir/f2 > + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2 > rm: Permission denied by sticky bit: user=hadoop, > path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, > parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt > + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir > Deleted /tmp/test/sticky_dir > {noformat} > Centos 6.4 behavior: > {noformat} > $ ls -lR /tmp/test > /tmp/test: > total 4 > drwxrwxrwt 2 systest systest 4096 Nov 3 18:36 sbit > /tmp/test/sbit: > total 0 > -rw-rw-rw- 1 systest systest 0 Nov 2 13:45 f2 > $ sudo -u mapred rm -fr /tmp/test/sbit > rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted > $ chmod -t /tmp/test/sbit > $ sudo -u mapred rm -fr /tmp/test/sbit > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, 
e-mail: hdfs-issues-h...@hadoop.apache.org
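[Editor's note] The POSIX rule this report expects HDFS to follow can be stated in a few lines: when the sticky bit is set on a directory, an entry in it may be deleted only by the entry's owner, the directory's owner, or the superuser, and a recursive delete must apply the same check to every entry. The sketch below is an illustrative toy, not HDFS's actual FSPermissionChecker code; the superuser name "hdfs" is an assumption.

```java
// Toy sketch of the POSIX sticky-bit delete rule the report describes:
// with the sticky bit set on the parent directory, a file may be deleted
// only by the file's owner, the directory's owner, or the superuser.
// A recursive delete must run this check per entry rather than skip it.
public class StickyBitCheck {
    public static boolean canDelete(String caller, String fileOwner,
                                    String dirOwner, boolean dirSticky) {
        if (!dirSticky) {
            return true;  // ordinary rwx permission checks apply instead
        }
        return caller.equals(fileOwner)
            || caller.equals(dirOwner)
            || caller.equals("hdfs");  // superuser; name assumed for the sketch
    }

    public static void main(String[] args) {
        // Matches the report: user "hadoop" may not delete jzhuge's file f2
        // under the sticky /tmp/test/sticky_dir, even via "rm -r".
        System.out.println(canDelete("hadoop", "jzhuge", "jzhuge", true));  // false
        System.out.println(canDelete("jzhuge", "jzhuge", "jzhuge", true)); // true
    }
}
```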
[jira] [Updated] (HDFS-11100) Recursively deleting file protected by sticky bit should fail
[ https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-11100: -- Status: In Progress (was: Patch Available) > Recursively deleting file protected by sticky bit should fail > - > > Key: HDFS-11100 > URL: https://issues.apache.org/jira/browse/HDFS-11100 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Critical > Labels: permissions > Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, > HDFS-11100.002.patch, hdfs_cmds > > > Recursively deleting a directory that contains files or directories protected > by sticky bit should fail but it doesn't in HDFS. In the case below, > {{/tmp/test/sticky_dir/f2}} is protected by sticky bit, thus recursive > deleting {{/tmp/test/sticky_dir}} should fail. > {noformat} > + hdfs dfs -ls -R /tmp/test > drwxrwxrwt - jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir > -rwxrwxrwx 1 jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir/f2 > + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2 > rm: Permission denied by sticky bit: user=hadoop, > path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, > parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt > + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir > Deleted /tmp/test/sticky_dir > {noformat} > Centos 6.4 behavior: > {noformat} > $ ls -lR /tmp/test > /tmp/test: > total 4 > drwxrwxrwt 2 systest systest 4096 Nov 3 18:36 sbit > /tmp/test/sbit: > total 0 > -rw-rw-rw- 1 systest systest 0 Nov 2 13:45 f2 > $ sudo -u mapred rm -fr /tmp/test/sbit > rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted > $ chmod -t /tmp/test/sbit > $ sudo -u mapred rm -fr /tmp/test/sbit > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: 
hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-4025) QJM: Synchronize past log segments to JNs that missed them
[ https://issues.apache.org/jira/browse/HDFS-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655302#comment-15655302 ] Jing Zhao commented on HDFS-4025: - Thanks for the patch, [~hanishakoneru]! The patch looks good to me in general. Please see comments below: # In JournalNodeSyncer#startSyncJournalsThread, the following sleep may be unnecessary: in most of the cases the journal is formatted before we start the sync thread. {code} try { // Wait for the JournalNodes to get formatted before attempting sync Thread.sleep(SYNC_JOURNALS_TIMEOUT/2); } catch (InterruptedException e) { LOG.error(e); } {code} # The syncJournalThread should be daemon. Also we can add a flag to control when the thread should exit the while loop. # {{getAllJournalNodeAddrs}} shares the same functionality with {{QuorumJournalManager#getLoggerAddresses}}. We can convert it into a utility function and use it in these two places. # Since currently we do not support changing Journal Node configuration while JN is running, we can initialize all the other JN proxies in the very beginning. Then later we can randomly pick a proxy instead of an InetSocketAddress. # We usually only deploy 3 or 5 JNs in practice, thus we may also choose a Round-Robin way to pick sync target. Also if an error/exception happens during the sync, we can wait till the next run (instead of retrying another JN immediately). # Typo: getMisingLogList --> getMissingLogList # {{getMisingLogList}} can use merge-sort style to compare the two lists. # Let's see if we can avoid copying code from {{TransferFsImage}} but reuse its methods. # We need to make sure we finally purge old tmp editlog files due to failures during the downloading/renaming. 
> QJM: Synchronize past log segments to JNs that missed them > - > > Key: HDFS-4025 > URL: https://issues.apache.org/jira/browse/HDFS-4025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha >Affects Versions: QuorumJournalManager (HDFS-3077) >Reporter: Todd Lipcon >Assignee: Hanisha Koneru > Fix For: QuorumJournalManager (HDFS-3077) > > Attachments: HDFS-4025.000.patch, HDFS-4025.001.patch, > HDFS-4025.002.patch, HDFS-4025.003.patch > > > Currently, if a JournalManager crashes and misses some segment of logs, and > then comes back, it will be re-added as a valid part of the quorum on the > next log roll. However, it will not have a complete history of log segments > (i.e. any individual JN may have gaps in its transaction history). This > mirrors the behavior of the NameNode when there are multiple local > directories specified. > However, it would be better if a background thread noticed these gaps and > "filled them in" by grabbing the segments from other JournalNodes. This > increases the resilience of the system when JournalNodes get reformatted or > otherwise lose their local disk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
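[Editor's note] The merge-sort-style comparison suggested in review comment #7 above can be sketched concretely: with both JournalNodes' finalized segment lists sorted (say by start transaction ID), one linear scan finds the segments the remote JN has that the local JN is missing. This is an illustrative sketch only; the method name echoes the review comment, and segments are reduced to their start txids for simplicity.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of comment #7: compute which finalized edit-log segments
// (identified here by start txid, both lists sorted ascending) a remote
// JournalNode has that the local JournalNode is missing, via a single
// merge-style scan instead of nested lookups.
public class MissingLogList {
    public static List<Long> missingLogList(List<Long> local, List<Long> remote) {
        List<Long> missing = new ArrayList<>();
        int i = 0, j = 0;
        while (i < local.size() && j < remote.size()) {
            long l = local.get(i), r = remote.get(j);
            if (l == r) { i++; j++; }        // segment present on both JNs
            else if (l < r) { i++; }          // local-only segment; nothing to sync
            else { missing.add(r); j++; }     // remote has it, local does not
        }
        while (j < remote.size()) {           // remaining remote tail is missing
            missing.add(remote.get(j++));
        }
        return missing;
    }

    public static void main(String[] args) {
        List<Long> local = Arrays.asList(1L, 100L, 400L);
        List<Long> remote = Arrays.asList(1L, 100L, 200L, 300L, 400L);
        System.out.println(missingLogList(local, remote));  // prints: [200, 300]
    }
}
```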
[jira] [Comment Edited] (HDFS-8307) Spurious DNS Queries from hdfs shell
[ https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655292#comment-15655292 ] Andres Perez edited comment on HDFS-8307 at 11/10/16 10:02 PM: --- Hi [~anu], Sorry for the wait I was setting up an HA cluster with a trunk build, and the problem does not appear to be there. The rework and new logic must have solved this in 2.8 was (Author: aaperezl): Hi [~anu], Sorry for the wait I was setting up an HA cluster with a trunk build, and the problem does not appear to be there. The rework and new logic must have removed this in 2.8 > Spurious DNS Queries from hdfs shell > > > Key: HDFS-8307 > URL: https://issues.apache.org/jira/browse/HDFS-8307 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.1 >Reporter: Anu Engineer >Assignee: Andres Perez >Priority: Trivial > Labels: ha > Fix For: 2.7.4 > > Attachments: HDFS-8307-branch-2.7.patch, HDFS-8307.001.patch > > > With HA configured the hdfs shell (org.apache.hadoop.fs.FsShell) seems to > issue a DNS query for the cluster Name. if fs.defaultFS is set to > hdfs://mycluster, then the shell seems to issue a DNS query for > mycluster.FQDN or mycluster. > since mycluster not a machine name DNS query always fails with > "DNS 85 Standard query response 0x2aeb No such name" > Repro Steps: > # Setup a HA cluster > # Log on to any node > # Run wireshark monitoring port 53 - "sudo tshark 'port 53'" > # Run "sudo -u hdfs hdfs dfs -ls /" > # You should be able to see DNS queries to mycluster.FQDN in wireshark -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-8307) Spurious DNS Queries from hdfs shell
[ https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655292#comment-15655292 ] Andres Perez commented on HDFS-8307: Hi [~anu], Sorry for the wait; I was setting up an HA cluster with a trunk build, and the problem does not appear to be there. The rework and new logic must have removed this in 2.8. > Spurious DNS Queries from hdfs shell > > > Key: HDFS-8307 > URL: https://issues.apache.org/jira/browse/HDFS-8307 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.1 >Reporter: Anu Engineer >Assignee: Andres Perez >Priority: Trivial > Labels: ha > Fix For: 2.7.4 > > Attachments: HDFS-8307-branch-2.7.patch, HDFS-8307.001.patch > > > With HA configured the hdfs shell (org.apache.hadoop.fs.FsShell) seems to > issue a DNS query for the cluster name. If fs.defaultFS is set to > hdfs://mycluster, then the shell seems to issue a DNS query for > mycluster.FQDN or mycluster. > Since mycluster is not a machine name, the DNS query always fails with > "DNS 85 Standard query response 0x2aeb No such name" > Repro Steps: > # Set up an HA cluster > # Log on to any node > # Run wireshark monitoring port 53 - "sudo tshark 'port 53'" > # Run "sudo -u hdfs hdfs dfs -ls /" > # You should be able to see DNS queries to mycluster.FQDN in wireshark -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11094) TestLargeBlockReport fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655287#comment-15655287 ] Hadoop QA commented on HDFS-11094: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} 
Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 31s{color} | {color:orange} hadoop-hdfs-project: The patch generated 13 new + 256 unchanged - 7 fixed = 269 total (was 263) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 83m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.qjournal.client.TestEpochsAreUnique | | | hadoop.hdfs.qjournal.TestNNWithQJM | | | hadoop.hdfs.TestSafeModeWithStripedFile | | | hadoop.hdfs.qjournal.TestSecureNNWithQJM | | | hadoop.hdfs.qjournal.client.TestQuorumJournalManager | | | hadoop.hdfs.TestRollingUpgradeDowngrade | | | hadoop.hdfs.qjournal.server.TestJournalNode | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM | | | hadoop.hdfs.qjournal.client.TestQJMWithFaults | | | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | | | hadoop.hdfs.tools.TestDFSAdminWithHA | | | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits | | | hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail | | | hadoop.hdfs.TestRollingUpgradeRollback | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.TestDFSInotifyEventInputStream |
[jira] [Commented] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655269#comment-15655269 ] Anu Engineer commented on HDFS-11127: - +1, Pending Jenkins. > Block Storage : add block storage service protocol > -- > > Key: HDFS-11127 > URL: https://issues.apache.org/jira/browse/HDFS-11127 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11127-HDFS-7240.001.patch, > HDFS-11127-HDFS-7240.002.patch > > > This JIRA adds block service protocol. This protocol is expose to client for > volume operations including create, delete, info and list. Note that this > protocol has nothing to do with actual data read/write on a particular volume. > (Also note that the term "cblock" is the current term used to refer to the > block storage system.) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11127: -- Attachment: HDFS-11127-HDFS-7240.002.patch Uploaded v002 patch to address [~anu]'s comments. > Block Storage : add block storage service protocol > -- > > Key: HDFS-11127 > URL: https://issues.apache.org/jira/browse/HDFS-11127 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11127-HDFS-7240.001.patch, > HDFS-11127-HDFS-7240.002.patch > > > This JIRA adds block service protocol. This protocol is expose to client for > volume operations including create, delete, info and list. Note that this > protocol has nothing to do with actual data read/write on a particular volume. > (Also note that the term "cblock" is the current term used to refer to the > block storage system.) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655243#comment-15655243 ] Hadoop QA commented on HDFS-10872: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 58s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 21s{color} | {color:red} root in trunk failed. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 6m 14s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 14s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 49s{color} | {color:orange} root: The patch generated 2 new + 862 unchanged - 0 fixed = 864 total (was 862) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 2s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 35s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}115m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.TestMissingBlocksAlert | | | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock | | | hadoop.hdfs.tools.TestDFSAdmin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-10872 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838420/HDFS-10872.007.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux cfea7118d697 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 89354f0 | | Default Java | 1.8.0_101 | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/17513/artifact/patchprocess/branch-compile-root.txt | | findbugs | v3.0.0 | | compile |
[jira] [Comment Edited] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655203#comment-15655203 ] Anu Engineer edited comment on HDFS-11127 at 11/10/16 9:31 PM: --- When doing a list volume operation, I would presume that we would like to see the following information: Username, VolumeName, VolumeSize, BlockSize and ACLs. So you can either make {{usage}} an optional field in {{VolumeInfoProto}}, thus avoiding a call into SCM for listing operations, or add these extra fields to {{ListVolumeEntryProto}}. Your call on how you want it done; from a code or complexity perspective they look the same to me. bq. if this field is missing (user does not specify block size), CBlock server will use 4KB. When you add a default option in the .proto file, you are making that assumption explicit. Otherwise it is hidden inside the CBlock server code. But as I said, it is not something that I feel strongly about, so feel free to implement it any way you think is good. was (Author: anu): When doing a list volume operation, I would presume that we would like to see the following information: Username, VolumeName, VolumeSize, BlockSize and ACLs. So you can either make {{usage}} an optional field in {{VolumeInfoProto}}, thus avoiding a call into SCM for listing operations, or add these extra fields to {{ListVolumeEntryProto}}. Your call on how you want it done; from a code or complxity perspective they look the same to me. bq. if this field is missing (user does not specify block size), CBlock server will use 4KB. When you add a default option in the .proto file, you are making that assumption explicit. Otherwise it is hidden inside the CBlock server code. But as I said, it is not something that I feel strongly about, so feel free to implement it any way you think is good.
> Block Storage : add block storage service protocol > -- > > Key: HDFS-11127 > URL: https://issues.apache.org/jira/browse/HDFS-11127 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11127-HDFS-7240.001.patch > > > This JIRA adds the block service protocol. This protocol is exposed to clients for > volume operations including create, delete, info and list. Note that this > protocol has nothing to do with actual data read/write on a particular volume. > (Also note that the term "cblock" is the current term used to refer to the > block storage system.) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
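[Editor's note] The {{usage}} trade-off discussed in the comment above can be sketched in proto2. This is only an illustrative sketch; the field names, numbers, and types here are assumptions for discussion, not the contents of the actual patch:

```protobuf
// Hypothetical sketch of a volume-info message (proto2 syntax).
message VolumeInfoProto {
  required string userName   = 1;
  required string volumeName = 2;
  required uint64 volumeSize = 3;  // declared size from the create request
  required uint32 blockSize  = 4;
  // If usage is optional, a list operation can leave it unset and skip the
  // round trip to SCM; an info request on a single volume can populate it.
  optional uint64 usage      = 5;
}
```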
[jira] [Commented] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655203#comment-15655203 ] Anu Engineer commented on HDFS-11127: - When doing a list volume operation, I would presume that we would like to see the following information: Username, VolumeName, VolumeSize, BlockSize and ACLs. So you can either make {{usage}} an optional field in {{VolumeInfoProto}}, thus avoiding a call into SCM for listing operations, or add these extra fields to {{ListVolumeEntryProto}}. Your call on how you want it done; from a code or complexity perspective they look the same to me. bq. if this field is missing (user does not specify block size), CBlock server will use 4KB. When you add a default option in the .proto file, you are making that assumption explicit. Otherwise it is hidden inside the CBlock server code. But as I said, it is not something that I feel strongly about, so feel free to implement it any way you think is good.
[jira] [Commented] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655180#comment-15655180 ] Chen Liang commented on HDFS-11127: --- 1. The volumeSize field of {{VolumeInfoProto}} is exactly what you mentioned: the declared volume size. But {{VolumeInfoProto}} also has a {{usage}} field, which is left for the actual used size. 2. Actually that's what I'm trying to do here: make it optional, as in {{optional uint32 blockSize = 4;}}, such that if the user specifies a block size, the CBlock server will use it; if the field is missing (the user does not specify a block size), the CBlock server will use 4KB.
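[Editor's note] The two ways of handling the 4KB fallback described in this exchange can be sketched in proto2. The message name and field number come from the snippets quoted in the discussion; everything else is an illustrative assumption, not the actual patch:

```protobuf
// Hypothetical sketch (proto2 syntax).
message CreateVolumeRequestProto {
  // Option A (as described in this comment): plain optional field; the
  // CBlock server applies the 4KB fallback in code when the field is unset.
  optional uint32 blockSize = 4;

  // Option B (suggested in review): declare the fallback in the .proto file
  // so the 4KB assumption is visible to every client:
  //   optional uint32 blockSize = 4 [default = 4096];
}
```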
[jira] [Commented] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655163#comment-15655163 ] Anu Engineer commented on HDFS-11127: - bq. One thing with VolumeInfoProto is that it contains volume usage, and to get this the server will need to request this information from the underlying storage layer (SCM in this case), which takes time. Another thing is that a listing request may potentially return a very large number of volumes. So I want to restrict this to a minimal set of information to avoid the response message being too large. That is interesting; I was imagining that this was reporting the declared size that we got from the {{CreateVolumeRequestProto}}, but you are saying that it is going to report the actual used size on disk. But then how do I discover the declared size of my disk? I was imagining that the list operation or VolumeInfo would have the size of my volume. bq. That's a good point. A 4KB block would have much better performance in our case compared to 1K or 2K (because of the smaller number of blocks). If you make it required with 4KB as the default, the server will read 4KB as the value when the user specifies nothing, and a user who wants something like 8KB as the block size is still free to specify it.
[jira] [Commented] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655139#comment-15655139 ] Chen Liang commented on HDFS-11127: --- Thanks [~anu] for the comments! Here are my thoughts: 1. One thing with {{VolumeInfoProto}} is that it contains volume usage, and to get this the server will need to request this information from the underlying storage layer (SCM in this case), which takes time. Another thing is that a listing request may potentially return a very large number of volumes. So I want to restrict this to a minimal set of information to avoid the response message being too large. 2. That's a good point. A 4KB block would have much better performance in our case compared to 1K or 2K (because of the smaller number of blocks), so we may indeed end up using 4KB most (if not all) of the time. But I do want to leave the possibility of a different block size open for the time being, just in case there are use cases where a smaller block size is more desirable. Will flag this and leave it for further consideration (or for comments from anyone else with a strong preference).
[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup
[ https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655129#comment-15655129 ] Arpit Agarwal commented on HDFS-11119: -- Updated the pull request with 1 more commit for checkstyle fixes. https://github.com/apache/hadoop/pull/155/commits/8b0e911a387fe63a9ae021156220f5c720b3f2db > Support for parallel checking of StorageLocations on DataNode startup > - > > Key: HDFS-11119 > URL: https://issues.apache.org/jira/browse/HDFS-11119 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > > The {{AsyncChecker}} support introduced by HDFS-4 can be used to > parallelize checking {{StorageLocation}}s on DataNode startup.
[jira] [Comment Edited] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655097#comment-15655097 ] Anu Engineer edited comment on HDFS-11127 at 11/10/16 8:53 PM: --- [~vagarychen] Thanks for posting this patch. Some minor comments. # I am wondering if we need {{ListVolumeResponseProto}}; eventually we will return almost all the info in {{VolumeInfoProto}}, so should we just return a list of {{VolumeInfoProto}} in the list operation? # Just something to think about: should we make {{optional uint32 blockSize = 4;}} required with a default of 4 KB? I don't have a strong opinion on this, just flagging it for your consideration. # Also, in {{VolumeInfoProto}} we are treating block size as {{required *uint64* blockSize = 4;}}, an unsigned 64-bit integer, but {{CreateVolumeRequestProto}} treats it as uint32. Should we change {{VolumeInfoProto}} to uint32 as well? was (Author: anu): [~vagarychen] Thanks for posting this patch. Some minor comments. # I am wondering if we need {{ListVolumeResponseProto}}; eventually we will return almost all the info in {{VolumeInfoProto}}, so should we just return a list of {{VolumeInfoProto}} in the list operation? # Just something to think about: should we make {{optional uint32 blockSize = 4;}} required with a default of 4 KB? I don't have a strong opinion on this, just flagging it for your consideration.
[jira] [Comment Edited] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655097#comment-15655097 ] Anu Engineer edited comment on HDFS-11127 at 11/10/16 8:45 PM: --- [~vagarychen] Thanks for posting this patch. Some minor comments. # I am wondering if we need {{ListVolumeResponseProto}}; eventually we will return almost all the info in {{VolumeInfoProto}}, so should we just return a list of {{VolumeInfoProto}} in the list operation? # Just something to think about: should we make {{optional uint32 blockSize = 4;}} required with a default of 4 KB? I don't have a strong opinion on this, just flagging it for your consideration. was (Author: anu): [~vagarychen] Thanks for posting this patch. Some minor comments. # I am wondering if we need {{ListVolumeResponseProto}}; evetually we will return almost all the info in {{VolumeInfoProto}}, so should we just return a list of {{VolumeInfoProto}} in the list operation? # Just something to think about: should we make {{optional uint32 blockSize = 4;}} required with a default of 4 KB? I don't have a strong opinion on this, just flagging it for your consideration.
[jira] [Commented] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655097#comment-15655097 ] Anu Engineer commented on HDFS-11127: - [~vagarychen] Thanks for posting this patch. Some minor comments. # I am wondering if we need {{ListVolumeResponseProto}}; eventually we will return almost all the info in {{VolumeInfoProto}}, so should we just return a list of {{VolumeInfoProto}} in the list operation? # Just something to think about: should we make {{optional uint32 blockSize = 4;}} required with a default of 4 KB? I don't have a strong opinion on this, just flagging it for your consideration.
[jira] [Updated] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11127: -- Attachment: HDFS-11127-HDFS-7240.001.patch
[jira] [Updated] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11127: -- Status: Patch Available (was: Open)
[jira] [Commented] (HDFS-11126) Ozone: Add small file support RPC
[ https://issues.apache.org/jira/browse/HDFS-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655075#comment-15655075 ] Anu Engineer commented on HDFS-11126: - The Findbugs warnings are all in generated code, so I am ignoring them. The test failure is not related to this patch. > Ozone: Add small file support RPC > - > > Key: HDFS-11126 > URL: https://issues.apache.org/jira/browse/HDFS-11126 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-11126-HDFS-7240.001.patch > > > Add an RPC to send both data and metadata together in one call. This is useful > when we want to read and write small files, say less than 1 MB. This API is > very useful for ozone and cBlocks (HDFS-8)
[jira] [Updated] (HDFS-11127) Block Storage : add block storage service protocol
[ https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11127: -- Description: This JIRA adds the block service protocol. This protocol is exposed to clients for volume operations including create, delete, info and list. Note that this protocol has nothing to do with actual data read/write on a particular volume. (Also note that the term "cblock" is the current term used to refer to the block storage system.) was: This JIRA adds the block service protocol. This protocol is exposed to clients for volume operations including create, delete, info and list. Note that this protocol has nothing to do with actual data read/write on a particular volume.
[jira] [Created] (HDFS-11127) Block Storage : add block storage service protocol
Chen Liang created HDFS-11127: - Summary: Block Storage : add block storage service protocol Key: HDFS-11127 URL: https://issues.apache.org/jira/browse/HDFS-11127 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Chen Liang Assignee: Chen Liang This JIRA adds the block service protocol. This protocol is exposed to clients for volume operations including create, delete, info and list. Note that this protocol has nothing to do with actual data read/write on a particular volume.
[jira] [Commented] (HDFS-11126) Ozone: Add small file support RPC
[ https://issues.apache.org/jira/browse/HDFS-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655069#comment-15655069 ] Hadoop QA commented on HDFS-11126: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 22s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 77 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 5 unchanged - 1 fixed = 5 total (was 6) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 9 new + 77 unchanged - 0 fixed = 86 total (was 77) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}112m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | org.apache.hadoop.hdfs.ozone.protocol.proto.ContainerProtos$GetSmallFileRequestProto.PARSER isn't final but should be At ContainerProtos.java:be At ContainerProtos.java:[line 34080] | | | Class org.apache.hadoop.hdfs.ozone.protocol.proto.ContainerProtos$GetSmallFileRequestProto defines non-transient non-serializable instance field unknownFields In ContainerProtos.java:instance field unknownFields In ContainerProtos.java | | | org.apache.hadoop.hdfs.ozone.protocol.proto.ContainerProtos$GetSmallFileResponseProto.PARSER isn't final but should be At ContainerProtos.java:be At ContainerProtos.java:[line 34641] | | | Class org.apache.hadoop.hdfs.ozone.protocol.proto.ContainerProtos$GetSmallFileResponseProto defines non-transient
[jira] [Updated] (HDFS-11094) TestLargeBlockReport fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-11094: --- Status: Patch Available (was: Open) > TestLargeBlockReport fails intermittently > - > > Key: HDFS-11094 > URL: https://issues.apache.org/jira/browse/HDFS-11094 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, > HDFS-11094.003.patch > > > {noformat} > java.lang.NullPointerException: null > at > org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:96) > {noformat}
[jira] [Updated] (HDFS-11094) TestLargeBlockReport fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-11094: --- Attachment: HDFS-11094.003.patch Rebased to trunk. Resubmitting patch
[jira] [Commented] (HDFS-11100) Recursively deleting file protected by sticky bit should fail
[ https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15655024#comment-15655024 ] Hadoop QA commented on HDFS-11100: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 51s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 15s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 8m 3s{color} | {color:red} root in trunk failed. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 6m 26s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 26s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 33s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 40s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}116m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11100 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838407/HDFS-11100.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux fd5422ca09c8 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 89354f0 | | Default Java | 1.8.0_101 | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/17510/artifact/patchprocess/branch-compile-root.txt | | findbugs | v3.0.0 | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/17510/artifact/patchprocess/patch-compile-root.txt | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/17510/artifact/patchprocess/patch-compile-root.txt | | unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17510/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Updated] (HDFS-11094) TestLargeBlockReport fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-11094: --- Status: Open (was: Patch Available) Oops, accidentally uploaded a patch against 2.8, will rebase and upload again. > TestLargeBlockReport fails intermittently > - > > Key: HDFS-11094 > URL: https://issues.apache.org/jira/browse/HDFS-11094 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch > > > {noformat} > java.lang.NullPointerException: null > at > org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:96) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11094) TestLargeBlockReport fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-11094: --- Attachment: HDFS-11094.002.patch The tests that failed are related to this patch. They depend on the datanode *not* being able to heartbeat correctly, so the test will time out when this fix is put in. Additionally, upon further review it looks like the cluster is actually waiting for the heartbeat to be received. The race is between the test checking for the active NN and the HAState getting set after a heartbeat response is received by the data node. After talking with [~daryn], I think that allowing the NN to send back its HA state during registration is the best way to fix this. This way the datanodes will always know the HAState of the NN that they are connected to. I'm attaching a patch that adds the HAServiceState of the NN to the NamespaceInfo that gets sent from the NN to the DN during a versionRequest. This is one of the first things that happens during DN-NN connection setup and happens even before registration of the DN. [~daryn], [~liuml07], could you please review? Thanks! > TestLargeBlockReport fails intermittently > - > > Key: HDFS-11094 > URL: https://issues.apache.org/jira/browse/HDFS-11094 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch > > > {noformat} > java.lang.NullPointerException: null > at > org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:96) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
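The approach described in the comment above can be sketched as follows. This is an illustrative stand-in, not the actual patch: the class, field, and method names here are invented, and the real change would modify {{NamespaceInfo}} in {{org.apache.hadoop.hdfs.server.protocol}}.

```java
// Hedged, self-contained sketch of the proposed handshake: the NN piggybacks
// its HA state on the NamespaceInfo returned by versionRequest(), which runs
// before DN registration, so the DN learns the NN's HA state without waiting
// for a heartbeat response (the race the test was hitting).
public class HaStateHandshakeSketch {
    public enum HAServiceState { ACTIVE, STANDBY }

    // Stand-in for o.a.h.hdfs.server.protocol.NamespaceInfo plus the new field.
    public static class NamespaceInfo {
        public final String blockPoolId;
        public final HAServiceState nnState; // proposed addition (illustrative)
        public NamespaceInfo(String bpId, HAServiceState state) {
            this.blockPoolId = bpId;
            this.nnState = state;
        }
    }

    // NN side: versionRequest now reports the HA state with the namespace info.
    public static NamespaceInfo versionRequest(HAServiceState current) {
        return new NamespaceInfo("BP-demo", current);
    }

    // DN side: the state is known as soon as the version handshake completes.
    public static boolean isActiveNn(NamespaceInfo info) {
        return info.nnState == HAServiceState.ACTIVE;
    }

    public static void main(String[] args) {
        NamespaceInfo info = versionRequest(HAServiceState.STANDBY);
        System.out.println(info.blockPoolId + " state=" + info.nnState);
    }
}
```

The design point being illustrated: because the version request is the first exchange in DN-NN connection setup, any later logic (including tests polling for the active NN) can rely on the state being known up front.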
[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654931#comment-15654931 ] Erik Krogen commented on HDFS-10872: [~zhz], thanks for the review. (1) Yes, you are right, that is unnecessary. (2) Fixed. I'm surprised my IDE didn't warn me of this. (3) Agreed that this could be a good follow-up, there is a lot of duplicated logic. Could be good to do something to consolidate with HADOOP-13702 as well. Attaching v007 patch. > Add MutableRate metrics for FSNamesystemLock operations > --- > > Key: HDFS-10872 > URL: https://issues.apache.org/jira/browse/HDFS-10872 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: FSLockPerf.java, HDFS-10872.000.patch, > HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, > HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch > > > Add metrics for FSNamesystemLock operations to see, overall, how long each > operation is holding the lock for. Use MutableRate metrics for now. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-10872: --- Attachment: HDFS-10872.007.patch > Add MutableRate metrics for FSNamesystemLock operations > --- > > Key: HDFS-10872 > URL: https://issues.apache.org/jira/browse/HDFS-10872 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: FSLockPerf.java, HDFS-10872.000.patch, > HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, > HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, > HDFS-10872.007.patch > > > Add metrics for FSNamesystemLock operations to see, overall, how long each > operation is holding the lock for. Use MutableRate metrics for now. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654905#comment-15654905 ] Hadoop QA commented on HDFS-10872: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 5s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 15s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 7s{color} | {color:red} root in trunk failed. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 56s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 5m 39s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 39s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 45s{color} | {color:orange} root: The patch generated 1 new + 862 unchanged - 0 fixed = 863 total (was 862) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 21s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}123m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-10872 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838401/HDFS-10872.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 4cd2b8c865a8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 89354f0 | | Default Java | 1.8.0_101 | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/17509/artifact/patchprocess/branch-compile-root.txt | | findbugs | v3.0.0 | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/17509/artifact/patchprocess/patch-compile-root.txt | | javac |
[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654899#comment-15654899 ] Zhe Zhang commented on HDFS-10872: -- Thanks Erik for the patch! LGTM overall. I think the added {{newMutableRatesWithAggregation}} constructor is reasonable. Some minor points: # The change below doesn't seem necessary: {code} - private final MetricsRegistry registry = new MetricsRegistry("FSNamesystem"); + private final MetricsRegistry registry = + new MetricsRegistry("FSNamesystem").setContext("dfs"); {code} # We could put the {{@link}} on its own line; right now the link is broken: {code} * If metrics are enabled by setting {@link org.apache.hadoop.hdfs.DFSConfigKeys * #DFS_NAMENODE_LOCK_DETAILED_METRICS_KEY} to be true, metrics will be emitted {code} # Follow-on: maybe we can consider consolidating the code in readUnlock and writeUnlock. [~andrew.wang] Any further comments? Thanks. > Add MutableRate metrics for FSNamesystemLock operations > --- > > Key: HDFS-10872 > URL: https://issues.apache.org/jira/browse/HDFS-10872 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: FSLockPerf.java, HDFS-10872.000.patch, > HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, > HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch > > > Add metrics for FSNamesystemLock operations to see, overall, how long each > operation is holding the lock for. Use MutableRate metrics for now. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
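The javadoc problem in point 2 above is that the {{@link}} tag is split across a line break, so the generated link breaks. A sketch of the fix, keeping the whole tag on one line, looks like this (the class here is a stand-in, and the key's string value is an assumption, not copied from the patch):

```java
// Illustrative sketch of the javadoc fix: never split a {@link ...} tag
// across lines. The constant name comes from the review comment; the class
// and the key's value are stand-ins so the snippet compiles on its own.
public class JavadocLinkSketch {
    /**
     * If metrics are enabled by setting
     * {@link JavadocLinkSketch#DFS_NAMENODE_LOCK_DETAILED_METRICS_KEY}
     * to true, metrics will be emitted.
     */
    public static final String DFS_NAMENODE_LOCK_DETAILED_METRICS_KEY =
        "dfs.namenode.lock.detailed.metrics.enabled"; // assumed value

    public static void main(String[] args) {
        System.out.println(DFS_NAMENODE_LOCK_DETAILED_METRICS_KEY);
    }
}
```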
[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error
[ https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654855#comment-15654855 ] Hadoop QA commented on HDFS-11056: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 44s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 50s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 130 unchanged - 2 fixed = 131 total (was 132) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2630 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 53s{color} | {color:red} The patch has 139 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 35s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_111 Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDNFencing | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.web.TestHttpsFileSystem | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | JDK v1.7.0_111 Failed junit tests | hadoop.hdfs.TestDatanodeRegistration | | | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.web.TestHttpsFileSystem | | |
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654836#comment-15654836 ] Mingliang Liu commented on HDFS-11122: -- Triggering a block report should obviously help here. +1 {code} 540 } catch (IOException ie) { 541 assertTrue(ie instanceof ChecksumException); 542 } {code} I'd prefer to catch the ChecksumException directly and name it {{ignored}}. This is a minor coding-style preference, though. The waitFor is now using a 3000 ms interval, which seems too long; 3000 ms (3 seconds) is unnecessary if we can return fast. How about 500/1000? > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch > > > After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to timing out. > The stack > trace (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the > logic that waits for the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
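The catch-the-specific-type style suggested in the review above can be sketched as follows. A local stand-in exception class is used so the snippet is self-contained; the real test would use {{org.apache.hadoop.fs.ChecksumException}}.

```java
import java.io.IOException;

// Sketch of the reviewer's preference: let the catch clause encode the
// expected exception type and name the variable "ignored", instead of
// catching IOException and asserting "ie instanceof ChecksumException".
public class CatchSpecificExceptionSketch {
    // Stand-in for org.apache.hadoop.fs.ChecksumException (assumption).
    public static class ChecksumException extends IOException {
        public ChecksumException(String msg) { super(msg); }
    }

    // Simulates a read of a corrupt block that fails with a checksum error.
    public static void readCorruptBlock() throws IOException {
        throw new ChecksumException("checksum mismatch");
    }

    // Returns true iff the read fails with exactly a ChecksumException.
    public static boolean corruptReadFailsWithChecksumError() {
        try {
            readCorruptBlock();
            return false;                  // no exception: unexpected success
        } catch (ChecksumException ignored) {
            return true;                   // expected failure mode
        } catch (IOException other) {
            return false;                  // wrong kind of failure
        }
    }

    public static void main(String[] args) {
        System.out.println("corrupt read fails as expected: "
            + corruptReadFailsWithChecksumError());
    }
}
```

The advantage over the {{instanceof}} pattern: an unexpected IOException propagates to its own branch (or out of the test) with its real type intact, instead of being swallowed by a failed assertion.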
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-11122: - Affects Version/s: (was: 3.0.0-alpha1) Target Version/s: 2.8.0, 3.0.0-alpha2 > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch > > > After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to timing out. > The stack > trace (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the > logic that waits for the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-11122: - Component/s: test > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch > > > After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to timing out. > The stack > trace (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the > logic that waits for the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11126) Ozone: Add small file support RPC
[ https://issues.apache.org/jira/browse/HDFS-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654760#comment-15654760 ] Hadoop QA commented on HDFS-11126: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HDFS-11126 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-11126 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838412/HDFS-11126-HDSF-7240.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17511/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: Add small file support RPC > - > > Key: HDFS-11126 > URL: https://issues.apache.org/jira/browse/HDFS-11126 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-11126-HDSF-7240.001.patch > > > Add an RPC to send both data and metadata together in one RPC. This is useful > when we want to read and write small files, say less than 1 MB. This API is > very useful for ozone and cBlocks (HDFS-8) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11126) Ozone: Add small file support RPC
[ https://issues.apache.org/jira/browse/HDFS-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11126: Status: Patch Available (was: Open) > Ozone: Add small file support RPC > - > > Key: HDFS-11126 > URL: https://issues.apache.org/jira/browse/HDFS-11126 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-11126-HDSF-7240.001.patch > > > Add an RPC to send both data and metadata together in one RPC. This is useful > when we want to read and write small files, say less than 1 MB. This API is > very useful for ozone and cBlocks (HDFS-8) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
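The small-file RPC described in the issue above could be sketched as a single request object carrying both metadata and data. All names here are invented for illustration and are not the actual Ozone API; the 1 MB threshold is taken from the issue description.

```java
import java.util.Map;

// Illustrative sketch: one RPC payload that carries a small file's metadata
// and data together, avoiding a separate metadata round trip for files
// under a size threshold. Larger files would use the normal chunked path.
public class SmallFilePutSketch {
    public static final int SMALL_FILE_LIMIT = 1 << 20; // ~1 MB, per the description

    public static class PutSmallFileRequest {
        public final String key;
        public final Map<String, String> metadata;
        public final byte[] data;

        public PutSmallFileRequest(String key, Map<String, String> metadata,
                                   byte[] data) {
            if (data.length > SMALL_FILE_LIMIT) {
                throw new IllegalArgumentException(
                    "not a small file: " + data.length + " bytes");
            }
            this.key = key;
            this.metadata = metadata;
            this.data = data;
        }
    }

    public static void main(String[] args) {
        PutSmallFileRequest req = new PutSmallFileRequest(
            "bucket/key1", Map.of("owner", "demo"), new byte[1024]);
        System.out.println(req.key + " carries " + req.data.length + " bytes");
    }
}
```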
[jira] [Updated] (HDFS-11100) Recursively deleting file protected by sticky bit should fail
[ https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-11100: -- Attachment: HDFS-11100.002.patch Patch 002: * Optimization: do not check children for sticky bit protection if the parent does not have the bit set > Recursively deleting file protected by sticky bit should fail > - > > Key: HDFS-11100 > URL: https://issues.apache.org/jira/browse/HDFS-11100 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Critical > Labels: permissions > Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, > HDFS-11100.002.patch, hdfs_cmds > > > Recursively deleting a directory that contains files or directories protected > by sticky bit should fail but it doesn't in HDFS. In the case below, > {{/tmp/test/sticky_dir/f2}} is protected by sticky bit, thus recursive > deleting {{/tmp/test/sticky_dir}} should fail. > {noformat} > + hdfs dfs -ls -R /tmp/test > drwxrwxrwt - jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir > -rwxrwxrwx 1 jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir/f2 > + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2 > rm: Permission denied by sticky bit: user=hadoop, > path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, > parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt > + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir > Deleted /tmp/test/sticky_dir > {noformat} > Centos 6.4 behavior: > {noformat} > $ ls -lR /tmp/test > /tmp/test: > total 4 > drwxrwxrwt 2 systest systest 4096 Nov 3 18:36 sbit > /tmp/test/sbit: > total 0 > -rw-rw-rw- 1 systest systest 0 Nov 2 13:45 f2 > $ sudo -u mapred rm -fr /tmp/test/sbit > rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted > $ chmod -t /tmp/test/sbit > $ sudo -u mapred rm -fr /tmp/test/sbit > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To 
unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
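The sticky-bit rule behind the bug report above, together with patch 002's optimization, can be modeled in a few lines. This is a minimal sketch, not the HDFS permission checker: it ignores write permissions and group membership, and the superuser name is an assumption.

```java
// Minimal model of the sticky-bit delete rule: in a directory with the
// sticky bit set, a file may be deleted only by the file's owner, the
// directory's owner, or the superuser. Patch 002's optimization is the
// early return: children are only checked when the parent actually has
// the sticky bit set.
public class StickyBitSketch {
    public static final String SUPERUSER = "hdfs"; // assumed name for the sketch

    public static boolean canDelete(String caller, String fileOwner,
                                    String dirOwner, boolean dirHasStickyBit) {
        if (!dirHasStickyBit) {
            return true; // optimization: no sticky bit, no per-child check
        }
        return caller.equals(SUPERUSER)
            || caller.equals(fileOwner)
            || caller.equals(dirOwner);
    }

    public static void main(String[] args) {
        // Mirrors the report: user "hadoop" recursively deleting jzhuge's
        // file inside jzhuge's sticky directory should be denied.
        System.out.println(canDelete("hadoop", "jzhuge", "jzhuge", true));
    }
}
```

Applied recursively, this check makes {{rm -r}} fail on {{/tmp/test/sticky_dir}} in the reported scenario, matching the Centos behavior quoted in the description.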
[jira] [Updated] (HDFS-11126) Ozone: Add small file support RPC
[ https://issues.apache.org/jira/browse/HDFS-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11126: Attachment: HDFS-11126-HDFS-7240.001.patch Fixed the typo in the patch name. Removed the old patch and reattached the new patch. > Ozone: Add small file support RPC > - > > Key: HDFS-11126 > URL: https://issues.apache.org/jira/browse/HDFS-11126 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-11126-HDFS-7240.001.patch > > > Add an RPC to send both data and metadata together in one RPC. This is useful > when we want to read and write small files, say less than 1 MB. This API is > very useful for ozone and cBlocks (HDFS-8) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11126) Ozone: Add small file support RPC
[ https://issues.apache.org/jira/browse/HDFS-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11126: Attachment: (was: HDFS-11126-HDSF-7240.001.patch) > Ozone: Add small file support RPC > - > > Key: HDFS-11126 > URL: https://issues.apache.org/jira/browse/HDFS-11126 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > > Add an RPC to send both data and metadata together in one RPC. This is useful > when we want to read and write small files, say less than 1 MB. This API is > very useful for ozone and cBlocks (HDFS-8) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11100) Recursively deleting file protected by sticky bit should fail
[ https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654768#comment-15654768 ] Hadoop QA commented on HDFS-11100: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 36s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 79m 20s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11100 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838384/HDFS-11100.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux df190ab3d1ce 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ca68f9c | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17507/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17507/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Recursively deleting file protected by sticky bit should fail > - > > Key: HDFS-11100 > URL:
[jira] [Created] (HDFS-11126) Ozone: Add small file support RPC
Anu Engineer created HDFS-11126: --- Summary: Ozone: Add small file support RPC Key: HDFS-11126 URL: https://issues.apache.org/jira/browse/HDFS-11126 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Reporter: Anu Engineer Assignee: Anu Engineer Fix For: HDFS-7240 Add an RPC to send both data and metadata together in one RPC. This is useful when we want to read and write small files, say less than 1 MB. This API is very useful for ozone and cBlocks (HDFS-8) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
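The single-round-trip idea described above can be sketched in plain Java. This is illustrative only: the class and field names are made up (not the actual Ozone API), and the 1 MB cutoff comes from the issue description.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

/** Illustrative sketch of a "small file" write request that bundles key
 *  metadata and file data into one message, so a small file needs one
 *  round trip instead of separate metadata and data RPCs.
 *  Not the actual Ozone API; names here are hypothetical. */
class SmallFilePut {
    // "Small" per the issue description: less than 1 MB.
    static final int SMALL_FILE_LIMIT = 1 << 20;

    final String key;
    final Map<String, String> metadata;
    final byte[] data;

    SmallFilePut(String key, Map<String, String> metadata, byte[] data) {
        if (data.length >= SMALL_FILE_LIMIT) {
            // Larger files would go through the normal streaming write path.
            throw new IllegalArgumentException(
                "use the streaming write path for files >= 1 MB");
        }
        this.key = key;
        this.metadata = metadata;
        this.data = data;
    }

    public static void main(String[] args) {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        SmallFilePut put = new SmallFilePut("volume/bucket/key",
            Map.of("owner", "anu"), payload);
        System.out.println(put.data.length); // 5
    }
}
```

The payoff is latency: for files under the limit, the client avoids the extra round trip that a separate metadata call would cost.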
[jira] [Updated] (HDFS-11126) Ozone: Add small file support RPC
[ https://issues.apache.org/jira/browse/HDFS-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11126: Attachment: HDFS-11126-HDSF-7240.001.patch > Ozone: Add small file support RPC > - > > Key: HDFS-11126 > URL: https://issues.apache.org/jira/browse/HDFS-11126 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-11126-HDSF-7240.001.patch > > > Add an RPC to send both data and metadata together in one RPC. This is useful > when we want to read and write small files, say less than 1 MB. This API is > very useful for ozone and cBlocks (HDFS-8) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11100) Recursively deleting file protected by sticky bit should fail
[ https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-11100: -- Status: Patch Available (was: In Progress) > Recursively deleting file protected by sticky bit should fail > - > > Key: HDFS-11100 > URL: https://issues.apache.org/jira/browse/HDFS-11100 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Critical > Labels: permissions > Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, > HDFS-11100.002.patch, hdfs_cmds > > > Recursively deleting a directory that contains files or directories protected > by sticky bit should fail but it doesn't in HDFS. In the case below, > {{/tmp/test/sticky_dir/f2}} is protected by sticky bit, thus recursive > deleting {{/tmp/test/sticky_dir}} should fail. > {noformat} > + hdfs dfs -ls -R /tmp/test > drwxrwxrwt - jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir > -rwxrwxrwx 1 jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir/f2 > + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2 > rm: Permission denied by sticky bit: user=hadoop, > path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, > parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt > + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir > Deleted /tmp/test/sticky_dir > {noformat} > Centos 6.4 behavior: > {noformat} > $ ls -lR /tmp/test > /tmp/test: > total 4 > drwxrwxrwt 2 systest systest 4096 Nov 3 18:36 sbit > /tmp/test/sbit: > total 0 > -rw-rw-rw- 1 systest systest 0 Nov 2 13:45 f2 > $ sudo -u mapred rm -fr /tmp/test/sbit > rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted > $ chmod -t /tmp/test/sbit > $ sudo -u mapred rm -fr /tmp/test/sbit > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: 
hdfs-issues-h...@hadoop.apache.org
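For reference, the rule the issue expects HDFS to enforce is the POSIX sticky-bit rule: in a sticky directory, an entry may be deleted only by the entry's owner, the directory's owner, or the superuser. A minimal sketch of that rule (illustrative names only, not HDFS's actual FSPermissionChecker; "hdfs" stands in for the superuser):

```java
/** Minimal sketch of the POSIX sticky-bit deletion rule. In a sticky
 *  directory, only the file owner, the directory owner, or the superuser
 *  may delete an entry -- and a recursive delete should hit the same check
 *  for every entry it visits. */
class StickyBitCheck {
    static boolean canDelete(String user, String dirOwner,
                             boolean dirSticky, String fileOwner) {
        if (!dirSticky) {
            // Without the sticky bit, write permission on the directory
            // (assumed granted here) is enough to delete the entry.
            return true;
        }
        return user.equals("hdfs")        // superuser, illustrative name
            || user.equals(dirOwner)
            || user.equals(fileOwner);
    }

    public static void main(String[] args) {
        // Mirrors the report: user "hadoop" deleting jzhuge's file in a
        // sticky, world-writable directory must be denied.
        System.out.println(canDelete("hadoop", "jzhuge", true, "jzhuge")); // false
        System.out.println(canDelete("jzhuge", "jzhuge", true, "jzhuge")); // true
    }
}
```

The bug is that HDFS applied this check for a direct `rm` of the file but not for each entry visited by a recursive `rm -r` of the parent.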
[jira] [Updated] (HDFS-11100) Recursively deleting file protected by sticky bit should fail
[ https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-11100: -- Status: In Progress (was: Patch Available) Would like to optimize the code a little. > Recursively deleting file protected by sticky bit should fail > - > > Key: HDFS-11100 > URL: https://issues.apache.org/jira/browse/HDFS-11100 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Critical > Labels: permissions > Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, hdfs_cmds > > > Recursively deleting a directory that contains files or directories protected > by sticky bit should fail but it doesn't in HDFS. In the case below, > {{/tmp/test/sticky_dir/f2}} is protected by sticky bit, thus recursive > deleting {{/tmp/test/sticky_dir}} should fail. > {noformat} > + hdfs dfs -ls -R /tmp/test > drwxrwxrwt - jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir > -rwxrwxrwx 1 jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir/f2 > + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2 > rm: Permission denied by sticky bit: user=hadoop, > path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, > parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt > + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir > Deleted /tmp/test/sticky_dir > {noformat} > Centos 6.4 behavior: > {noformat} > $ ls -lR /tmp/test > /tmp/test: > total 4 > drwxrwxrwt 2 systest systest 4096 Nov 3 18:36 sbit > /tmp/test/sbit: > total 0 > -rw-rw-rw- 1 systest systest 0 Nov 2 13:45 f2 > $ sudo -u mapred rm -fr /tmp/test/sbit > rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted > $ chmod -t /tmp/test/sbit > $ sudo -u mapred rm -fr /tmp/test/sbit > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, 
e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654589#comment-15654589 ] Erik Krogen edited comment on HDFS-10872 at 11/10/16 5:19 PM: -- Test failures don't seem related and pass fine locally. Checkstyle is complaining about an import only used within a Javadoc (marking as unused) - changed it slightly so that checkstyle won't complain. Attaching v006 patch. was (Author: xkrogen): Test failures don't seem related and pass fine locally. Checkstyle is complaining about an import only used within a Javadoc - changed it slightly so that checkstyle won't complain. Attaching v006 patch. > Add MutableRate metrics for FSNamesystemLock operations > --- > > Key: HDFS-10872 > URL: https://issues.apache.org/jira/browse/HDFS-10872 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: FSLockPerf.java, HDFS-10872.000.patch, > HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, > HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch > > > Add metrics for FSNamesystemLock operations to see, overall, how long each > operation is holding the lock for. Use MutableRate metrics for now. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-10872: --- Attachment: HDFS-10872.006.patch Test failures don't seem related and pass fine locally. Checkstyle is complaining about an import only used within a Javadoc - changed it slightly so that checkstyle won't complain. Attaching v006 patch. > Add MutableRate metrics for FSNamesystemLock operations > --- > > Key: HDFS-10872 > URL: https://issues.apache.org/jira/browse/HDFS-10872 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: FSLockPerf.java, HDFS-10872.000.patch, > HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, > HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch > > > Add metrics for FSNamesystemLock operations to see, overall, how long each > operation is holding the lock for. Use MutableRate metrics for now. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
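The metric being added can be sketched as: time each lock hold and feed the duration into a rate metric. Illustrative only — `MutableRateStub` stands in for Hadoop's MutableRate, and the real patch instruments FSNamesystemLock itself rather than a standalone wrapper:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Sketch of timing lock hold durations and recording them in a rate
 *  metric (count + total time, from which an average rate is derived). */
class TimedLock {
    /** Stand-in for Hadoop metrics2 MutableRate: tracks ops and total time. */
    static class MutableRateStub {
        long numOps;
        long totalNanos;
        void add(long nanos) { numOps++; totalNanos += nanos; }
        double avgNanos() { return numOps == 0 ? 0 : (double) totalNanos / numOps; }
    }

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    final MutableRateStub writeLockRate = new MutableRateStub();
    private long acquiredAt;

    void writeLock() {
        lock.writeLock().lock();
        acquiredAt = System.nanoTime(); // start timing once the lock is held
    }

    void writeUnlock() {
        long held = System.nanoTime() - acquiredAt; // measure before releasing
        lock.writeLock().unlock();
        writeLockRate.add(held);
    }

    public static void main(String[] args) {
        TimedLock l = new TimedLock();
        l.writeLock();
        l.writeUnlock();
        System.out.println(l.writeLockRate.numOps); // 1
    }
}
```

Per-operation rates (one MutableRate per operation name) then show which operations hold the lock longest overall.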
[jira] [Updated] (HDFS-11056) Concurrent append and read operations lead to checksum error
[ https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-11056: --- Attachment: HDFS-11056.branch-2.7.patch Attach an updated branch-2.7 patch. > Concurrent append and read operations lead to checksum error > > > Key: HDFS-11056 > URL: https://issues.apache.org/jira/browse/HDFS-11056 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, httpfs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, > HDFS-11056.branch-2.7.patch, HDFS-11056.branch-2.patch, > HDFS-11056.reproduce.patch > > > If there are two clients, one of them open-append-close a file continuously, > while the other open-read-close the same file continuously, the reader > eventually gets a checksum error in the data read. > On my local Mac, it takes a few minutes to produce the error. This happens to > httpfs clients, but there's no reason not to believe this happens to any append > clients. > I have a unit test that demonstrates the checksum error. Will attach later.
> Relevant log: > {quote} > 2016-10-25 15:34:45,153 INFO audit - allowed=true ugi=weichiu > (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/tmp/bar.txt > dst=null perm=null proto=rpc > 2016-10-25 15:34:45,155 INFO DataNode - Receiving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: > /127.0.0.1:51130 dest: /127.0.0.1:50131 > 2016-10-25 15:34:45,155 INFO FsDatasetImpl - Appending to FinalizedReplica, > blk_1073741825_1182, FINALIZED > getNumBytes() = 182 > getBytesOnDisk() = 182 > getVisibleLength()= 182 > getVolume() = > /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1 > getBlockURI() = > file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825 > 2016-10-25 15:34:45,167 INFO DataNode - opReadBlock > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > 2016-10-25 15:34:45,167 WARN DataNode - > DatanodeRegistration(127.0.0.1:50131, > datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, > infoSecurePort=0, ipcPort=50134, > storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got > exception while serving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, > newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197) > 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error > processing READ_BLOCK operation src: /127.0.0.1:51121 dst: /127.0.0.1:50131 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success > 2016-10-25 15:34:45,170 WARN DFSClient - Found Checksum error for >
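Conceptually, the checksum error arises because HDFS checksums fixed-size chunks, and an append rewrites the last, partial chunk of the block; a reader that fetched the checksum before the append then verifies post-append bytes against it. A toy illustration with CRC32 (assumed analogy, not DataNode code):

```java
import java.util.zip.CRC32;

/** Toy illustration of why a checksum fetched before an append cannot
 *  verify data read after it: the last, partial chunk changes, so its
 *  stored checksum is rewritten. */
class ChunkChecksum {
    static long crc(byte[] b, int off, int len) {
        CRC32 c = new CRC32();
        c.update(b, off, len);
        return c.getValue();
    }

    public static void main(String[] args) {
        byte[] before = "old data".getBytes();
        // Checksum the reader obtained before the concurrent append.
        long checksumSeenByReader = crc(before, 0, before.length);

        byte[] after = "old data+appended".getBytes(); // same chunk, now longer
        long checksumOnDisk = crc(after, 0, after.length);

        // Post-append bytes no longer match the pre-append checksum.
        System.out.println(checksumSeenByReader != checksumOnDisk); // true
    }
}
```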
[jira] [Updated] (HDFS-11056) Concurrent append and read operations lead to checksum error
[ https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-11056: --- Attachment: (was: HDFS-11056.branch-2.7.patch) > Concurrent append and read operations lead to checksum error > > > Key: HDFS-11056 > URL: https://issues.apache.org/jira/browse/HDFS-11056 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, httpfs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, > HDFS-11056.branch-2.patch, HDFS-11056.reproduce.patch > > > If there are two clients, one of them open-append-close a file continuously, > while the other open-read-close the same file continuously, the reader > eventually gets a checksum error in the data read. > On my local Mac, it takes a few minutes to produce the error. This happens to > httpfs clients, but there's no reason not to believe this happens to any append > clients. > I have a unit test that demonstrates the checksum error. Will attach later.
> Relevant log: > {quote} > 2016-10-25 15:34:45,153 INFO audit - allowed=true ugi=weichiu > (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/tmp/bar.txt > dst=null perm=null proto=rpc > 2016-10-25 15:34:45,155 INFO DataNode - Receiving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: > /127.0.0.1:51130 dest: /127.0.0.1:50131 > 2016-10-25 15:34:45,155 INFO FsDatasetImpl - Appending to FinalizedReplica, > blk_1073741825_1182, FINALIZED > getNumBytes() = 182 > getBytesOnDisk() = 182 > getVisibleLength()= 182 > getVolume() = > /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1 > getBlockURI() = > file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825 > 2016-10-25 15:34:45,167 INFO DataNode - opReadBlock > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > 2016-10-25 15:34:45,167 WARN DataNode - > DatanodeRegistration(127.0.0.1:50131, > datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, > infoSecurePort=0, ipcPort=50134, > storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got > exception while serving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, > newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197) > 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error > processing READ_BLOCK operation src: /127.0.0.1:51121 dst: /127.0.0.1:50131 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success > 2016-10-25 15:34:45,170 WARN DFSClient - Found Checksum error for > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 from >
[jira] [Commented] (HDFS-9337) Validate required params for WebHDFS requests
[ https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654413#comment-15654413 ] Vinayakumar B commented on HDFS-9337: - Thank you [~jagadesh.kiran] for your contribution to this long-pending issue. > Validate required params for WebHDFS requests > - > > Key: HDFS-9337 > URL: https://issues.apache.org/jira/browse/HDFS-9337 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Jagadesh Kiran N >Assignee: Jagadesh Kiran N > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, > HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, > HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, > HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, > HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, > HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch, > HDFS-9337_17.patch, HDFS-9337_18.patch, HDFS-9337_19.patch, > HDFS-9337_20.patch, HDFS-9337_21.patch > > > {code} > curl -i -X PUT > "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME; > {code} > A NullPointerException will be thrown > {code} > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
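The general idea of the fix — validate required query parameters up front and return a clear error instead of letting a missing value surface as a NullPointerException — can be sketched as follows. Illustrative only: `oldsnapshotname` is assumed as the missing parameter here, and this is not the actual WebHDFS resource code.

```java
import java.util.Map;

/** Sketch of up-front validation of required request parameters, so a
 *  missing value produces a descriptive error rather than an NPE deep
 *  inside the operation handler. Hypothetical names, not WebHDFS code. */
class ParamCheck {
    static String require(Map<String, String> query, String name) {
        String v = query.get(name);
        if (v == null || v.isEmpty()) {
            // Fail fast with the parameter name in the message.
            throw new IllegalArgumentException("Required param missing: " + name);
        }
        return v;
    }

    public static void main(String[] args) {
        // A RENAMESNAPSHOT request that forgot its snapshot-name parameter.
        Map<String, String> query = Map.of("op", "RENAMESNAPSHOT");
        try {
            require(query, "oldsnapshotname"); // assumed parameter name
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The client then sees which parameter was missing, instead of the opaque `{"exception":"NullPointerException","message":null}` response quoted above.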
[jira] [Updated] (HDFS-11100) Recursively deleting file protected by sticky bit should fail
[ https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-11100: -- Attachment: HDFS-11100.002.patch Patch 002: * Fix whitespace > Recursively deleting file protected by sticky bit should fail > - > > Key: HDFS-11100 > URL: https://issues.apache.org/jira/browse/HDFS-11100 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Critical > Labels: permissions > Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, hdfs_cmds > > > Recursively deleting a directory that contains files or directories protected > by sticky bit should fail but it doesn't in HDFS. In the case below, > {{/tmp/test/sticky_dir/f2}} is protected by sticky bit, thus recursive > deleting {{/tmp/test/sticky_dir}} should fail. > {noformat} > + hdfs dfs -ls -R /tmp/test > drwxrwxrwt - jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir > -rwxrwxrwx 1 jzhuge supergroup 0 2016-11-03 18:08 > /tmp/test/sticky_dir/f2 > + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2 > rm: Permission denied by sticky bit: user=hadoop, > path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, > parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt > + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir > Deleted /tmp/test/sticky_dir > {noformat} > Centos 6.4 behavior: > {noformat} > $ ls -lR /tmp/test > /tmp/test: > total 4 > drwxrwxrwt 2 systest systest 4096 Nov 3 18:36 sbit > /tmp/test/sbit: > total 0 > -rw-rw-rw- 1 systest systest 0 Nov 2 13:45 f2 > $ sudo -u mapred rm -fr /tmp/test/sbit > rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted > $ chmod -t /tmp/test/sbit > $ sudo -u mapred rm -fr /tmp/test/sbit > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: 
hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error
[ https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654224#comment-15654224 ] Wei-Chiu Chuang commented on HDFS-11056: I committed the patch to trunk, branch-2 and branch-2.8, and I am still working on a branch-2.7 patch. > Concurrent append and read operations lead to checksum error > > > Key: HDFS-11056 > URL: https://issues.apache.org/jira/browse/HDFS-11056 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, httpfs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, > HDFS-11056.branch-2.7.patch, HDFS-11056.branch-2.patch, > HDFS-11056.reproduce.patch > > > If there are two clients, one of them open-append-close a file continuously, > while the other open-read-close the same file continuously, the reader > eventually gets a checksum error in the data read. > On my local Mac, it takes a few minutes to produce the error. This happens to > httpfs clients, but there's no reason not to believe this happens to any append > clients. > I have a unit test that demonstrates the checksum error. Will attach later.
> Relevant log: > {quote} > 2016-10-25 15:34:45,153 INFO audit - allowed=true ugi=weichiu > (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/tmp/bar.txt > dst=null perm=null proto=rpc > 2016-10-25 15:34:45,155 INFO DataNode - Receiving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: > /127.0.0.1:51130 dest: /127.0.0.1:50131 > 2016-10-25 15:34:45,155 INFO FsDatasetImpl - Appending to FinalizedReplica, > blk_1073741825_1182, FINALIZED > getNumBytes() = 182 > getBytesOnDisk() = 182 > getVisibleLength()= 182 > getVolume() = > /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1 > getBlockURI() = > file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825 > 2016-10-25 15:34:45,167 INFO DataNode - opReadBlock > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > 2016-10-25 15:34:45,167 WARN DataNode - > DatanodeRegistration(127.0.0.1:50131, > datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, > infoSecurePort=0, ipcPort=50134, > storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got > exception while serving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, > newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197) > 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error > processing READ_BLOCK operation src: /127.0.0.1:51121 dst: /127.0.0.1:50131 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success > 2016-10-25 15:34:45,170 WARN DFSClient
[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf
[ https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654152#comment-15654152 ] Hadoop QA commented on HDFS-11026:
--
| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 38s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 775 unchanged - 3 fixed = 778 total (was 778) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 10s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
| | hadoop.hdfs.server.namenode.ha.TestHAMetrics |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11026 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838356/HDFS-11026.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc xml |
| uname | Linux a7193253d3d4 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ca68f9c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle |
[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654058#comment-15654058 ] Hadoop QA commented on HDFS-10996:
--
| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 17 new + 987 unchanged - 17 fixed = 1004 total (was 1004) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 14s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 48s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFsHdfs |
| | hadoop.hdfs.TestDFSClientRetries |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-10996 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838348/HDFS-10996-v3.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux 3175d4644033 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ca68f9c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17505/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
| unit |
[jira] [Commented] (HDFS-9337) Validate required params for WebHDFS requests
[ https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654055#comment-15654055 ] Jagadesh Kiran N commented on HDFS-9337:
--
Thanks [~vinayrpet] for committing the patch

> Validate required params for WebHDFS requests
> -
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Jagadesh Kiran N
> Assignee: Jagadesh Kiran N
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch,
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch,
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch,
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch,
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch,
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch,
> HDFS-9337_17.patch, HDFS-9337_18.patch, HDFS-9337_19.patch,
> HDFS-9337_20.patch, HDFS-9337_21.patch
>
> {code}
> curl -i -X PUT
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A null pointer exception will be thrown
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
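The idea behind the fix above can be sketched as follows. This is a minimal illustration, not the committed NamenodeWebHdfsMethods code: a mandatory query parameter is checked up front, so a missing value produces a descriptive IllegalArgumentException instead of the bare NullPointerException shown in the issue description. The parameter names used here (`op`, `snapshotname`) are illustrative.

```java
import java.util.Map;

// Hedged sketch of strict parameter validation for a WebHDFS-style handler.
class RequiredParamCheck {
    // Returns the value of a required query parameter, or fails with a
    // descriptive error naming the missing parameter.
    static String requireParam(Map<String, String> query, String name) {
        String value = query.get(name);
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException(
                "Required parameter '" + name + "' is missing or empty");
        }
        return value;
    }
}
```

A handler would call `requireParam(query, "snapshotname")` before touching the value, turning a latent NPE into an immediate, well-labeled 400-style error.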
[jira] [Updated] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf
[ https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-11026:
--
Attachment: HDFS-11026.003.patch

Attaching HDFS-11026.003.patch, which adds the default value for {{dfs.block.access.token.protobuf.enable}} to {{hdfs-defaults.xml}}. This fixes one of the test failures. The other looked spurious.

> Convert BlockTokenIdentifier to use Protobuf
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
> Issue Type: Task
> Components: hdfs, hdfs-client
> Affects Versions: 2.9.0, 3.0.0-alpha1
> Reporter: Ewan Higgs
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11026.002.patch, HDFS-11026.003.patch,
> blocktokenidentifier-protobuf.patch
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}}
> (basically a {{byte[]}}) and manual serialization to get data into and out of
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g.
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
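To see why the issue description above argues for Protobuf, here is a simplified, hypothetical mock-up of the pattern being replaced: a token identifier hand-serialized field by field with {{DataOutput}}/{{DataInput}}. The field order *is* the implicit wire schema, so adding or reordering a field silently changes the layout and older readers cannot decode newer tokens. A tagged, self-describing encoding such as Protobuf avoids that. The class and fields below are illustrative, not the real BlockTokenIdentifier.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Hedged sketch of manual (Writable-style) serialization, the approach
// HDFS-11026 moves away from.
class ManualTokenIdentifier {
    long expiryDate;
    int keyId;
    String userId;

    // Serialize field-by-field; any change to this order breaks old readers.
    byte[] toBytes() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeLong(expiryDate);
            out.writeInt(keyId);
            out.writeUTF(userId);
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Must mirror toBytes() exactly -- the coupling that makes evolution hard.
    static ManualTokenIdentifier fromBytes(byte[] buf) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf));
            ManualTokenIdentifier id = new ManualTokenIdentifier();
            id.expiryDate = in.readLong();
            id.keyId = in.readInt();
            id.userId = in.readUTF();
            return id;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

With Protobuf, each field carries a tag number instead of relying on position, which is what makes the identifier "expanded more easily" across versions.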
[jira] [Commented] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck
[ https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653932#comment-15653932 ] Hadoop QA commented on HDFS-11124:
--
| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 37s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11124 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838342/HDFS-11124.1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 8910da6aa835 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 86ac1ad |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17504/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17504/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17504/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Report blockIds of internal blocks for EC files in Fsck
> ---
>
> Key: HDFS-11124
> URL: https://issues.apache.org/jira/browse/HDFS-11124
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: erasure-coding
> Affects Versions: 3.0.0-alpha1
> Reporter: Takanobu Asanuma
[jira] [Commented] (HDFS-11100) Recursively deleting file protected by sticky bit should fail
[ https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653883#comment-15653883 ] Hadoop QA commented on HDFS-11100:
--
| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 54m 30s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 48s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11100 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838332/HDFS-11100.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 423e42f128cb 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 86ac1ad |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/17502/artifact/patchprocess/whitespace-eol.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17502/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17502/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

>
[jira] [Updated] (HDFS-10996) Ability to specify per-file EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-10996:
--
Attachment: HDFS-10996-v3.patch

Fix the failed test cases

> Ability to specify per-file EC policy at create time
>
> Key: HDFS-10996
> URL: https://issues.apache.org/jira/browse/HDFS-10996
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: erasure-coding
> Affects Versions: 3.0.0-alpha1
> Reporter: Andrew Wang
> Assignee: SammiChen
> Attachments: HDFS-10996-v1.patch, HDFS-10996-v2.patch,
> HDFS-10996-v3.patch
>
> Based on discussion in HDFS-10971, it would be useful to specify the EC
> policy when the file is created. This is useful for situations where app
> requirements do not map nicely to the current directory-level policies.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
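The resolution logic the issue above asks for can be sketched abstractly. This is an illustrative model only, with hypothetical names, not the committed HDFS API: a per-file override given at create time wins over the directory-level default, which in turn wins over plain replication. The policy names used are examples of HDFS's built-in EC policy naming style.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of per-file EC policy resolution: override > directory
// default > replication. Names and structure are hypothetical.
class EcPolicyResolver {
    private final Map<String, String> dirDefaults = new HashMap<>();

    // Record a directory-level default EC policy (the pre-existing mechanism).
    void setDirPolicy(String dir, String policy) {
        dirDefaults.put(dir, policy);
    }

    // Resolve the effective policy for a new file: an explicit per-file
    // override wins; otherwise fall back to the directory default; otherwise
    // the file is plainly replicated (no EC).
    String resolve(String dir, String overridePolicy) {
        if (overridePolicy != null) {
            return overridePolicy;
        }
        String inherited = dirDefaults.get(dir);
        return inherited != null ? inherited : "REPLICATION";
    }
}
```

The design point is that the directory default stays authoritative for apps that say nothing, so adding the per-file knob is backward compatible.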
[jira] [Commented] (HDFS-11117) Refactor striped file unit test case structure
[ https://issues.apache.org/jira/browse/HDFS-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653842#comment-15653842 ] Hadoop QA commented on HDFS-11117:
--
| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 34 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 32s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 65 new + 680 unchanged - 65 fixed = 745 total (was 745) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 57s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.tools.TestDFSAdmin |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
| | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11117 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838329/HDFS-11117-v1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 5ff0de41d009 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 86ac1ad |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17500/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17500/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17500/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output |
[jira] [Commented] (HDFS-9337) Validate required params for WebHDFS requests
[ https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653841#comment-15653841 ] Hudson commented on HDFS-9337:
--
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10812 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10812/])
HDFS-9337. Validate required params for WebHDFS requests (Contributed by (vinayakumarb: rev ca68f9cb5bc78e996c0daf8024cf0e7a4faef12a)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java

> Validate required params for WebHDFS requests
> -
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Jagadesh Kiran N
> Assignee: Jagadesh Kiran N
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch,
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch,
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch,
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch,
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch,
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch,
> HDFS-9337_17.patch, HDFS-9337_18.patch, HDFS-9337_19.patch,
> HDFS-9337_20.patch, HDFS-9337_21.patch
>
> {code}
> curl -i -X PUT
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A null pointer exception will be thrown
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9337) Validate required params for WebHDFS requests
[ https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-9337:
--
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
Release Note: Strict validations will be done for mandatory parameters for WebHDFS REST requests.
Status: Resolved (was: Patch Available)

Committed to trunk.

> Validate required params for WebHDFS requests
> -
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Jagadesh Kiran N
> Assignee: Jagadesh Kiran N
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch,
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch,
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch,
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch,
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch,
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch,
> HDFS-9337_17.patch, HDFS-9337_18.patch, HDFS-9337_19.patch,
> HDFS-9337_20.patch, HDFS-9337_21.patch
>
> {code}
> curl -i -X PUT
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A null pointer exception will be thrown
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org