[jira] [Commented] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration
[ https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099536#comment-16099536 ]

Konstantin Shvachko commented on HDFS-11896:
--------------------------------------------

+1 on the 007 patch.

> Non-dfsUsed will be doubled on dead node re-registration
> --------------------------------------------------------
>
>                 Key: HDFS-11896
>                 URL: https://issues.apache.org/jira/browse/HDFS-11896
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.3
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>            Priority: Blocker
>              Labels: release-blocker
>         Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, HDFS-11896-007.patch, HDFS-11896-branch-2.7-001.patch, HDFS-11896-branch-2.7-002.patch, HDFS-11896-branch-2.7-003.patch, HDFS-11896-branch-2.7-004.patch, HDFS-11896.patch
>
> *Scenario:*
> i) Make sure you have non-dfs data.
> ii) Stop the DataNode.
> iii) Wait until it becomes dead.
> iv) Now restart it and check the non-dfs data.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12170) Ozone: OzoneFileSystem: KSM should maintain key creation time and modification time
[ https://issues.apache.org/jira/browse/HDFS-12170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin updated HDFS-12170:
-----------------------------
    Attachment: HDFS-12170-HDFS-7240.002.patch

Attaching the same patch to re-trigger Jenkins.

> Ozone: OzoneFileSystem: KSM should maintain key creation time and modification time
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-12170
>                 URL: https://issues.apache.org/jira/browse/HDFS-12170
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>             Fix For: HDFS-7240
>
>         Attachments: HDFS-12170-HDFS-7240.001.patch, HDFS-12170-HDFS-7240.002.patch
>
> OzoneFileSystem will need modification times for files and directories created in the ozone file system.
> KSM should maintain the creation time and modification time for each individual key.
[jira] [Updated] (HDFS-12112) TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE
[ https://issues.apache.org/jira/browse/HDFS-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-12112:
---------------------------------
    Fix Version/s: (was: 2.8.3)
                   2.8.2

> TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE
> -----------------------------------------------------------------------
>
>                 Key: HDFS-12112
>                 URL: https://issues.apache.org/jira/browse/HDFS-12112
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0-beta1
>         Environment: CDH5.12.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Minor
>             Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
>         Attachments: HDFS-12112.001.patch
>
> Found the following error:
> {quote}
> java.lang.NullPointerException: null
>   at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockManagerMachinesArray(TestBlockManager.java:1202)
> {quote}
> The NPE suggests corruptStorageDataNode in the following code snippet could be null.
> {code}
> for(int i=0; i
> {code}
> Looking at the code, the test does not wait for file replication to happen, which is why corruptStorageDataNode (the DN of the second replica) is null.
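The race described above — asserting on replica state before replication has actually happened — is typically fixed by polling until the condition holds (Hadoop's own test utilities provide this; the sketch below is a dependency-free illustration of the pattern, not the actual test code):

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class WaitFor {
    // Poll `condition` every `intervalMs` until it holds or `timeoutMs` elapses.
    static void waitFor(BooleanSupplier condition, long intervalMs, long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException("condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~100 ms, standing in for replication finishing.
        waitFor(() -> System.currentTimeMillis() - start >= 100, 10, 5000);
        System.out.println("replica available");
    }
}
```

In the real test, the polled condition would be "the second replica's storage is reported", so corruptStorageDataNode can no longer be null when the assertion runs.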
[jira] [Updated] (HDFS-12177) NameNode exits due to setting BlockPlacementPolicy loglevel to Debug
[ https://issues.apache.org/jira/browse/HDFS-12177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-12177:
---------------------------------
    Fix Version/s: (was: 2.8.3)

> NameNode exits due to setting BlockPlacementPolicy loglevel to Debug
> --------------------------------------------------------------------
>
>                 Key: HDFS-12177
>                 URL: https://issues.apache.org/jira/browse/HDFS-12177
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: block placement
>    Affects Versions: 2.8.1
>            Reporter: Jiandan Yang
>            Assignee: Jiandan Yang
>             Fix For: 2.7.4, 2.8.2
>
>         Attachments: HDFS-12177-001-branch-2.7.patch, HDFS-12177-001-branch-2.8.patch, HDFS-12177-branch-2.7-001-.patch, HDFS-12177-branch-2.8-001.patch, HDFS_9668_1.patch
>
> The NameNode exits because the ReplicationMonitor thread internally throws an NPE. The NPE is thrown because the builder field is not initialized when the debug logging runs.
> Solution: before appending, the code should check whether the builder is null.
> {code:java}
> if (LOG.isDebugEnabled()) {
>   builder = debugLoggingBuilder.get();
>   builder.setLength(0);
>   builder.append("[");
> }
> // ... other code ...
> if (LOG.isDebugEnabled()) {
>   builder.append("\nNode ").append(NodeBase.getPath(chosenNode))
>       .append(" [");
> }
> // ... other code ...
> if (LOG.isDebugEnabled()) {
>   builder.append("\n]");
> }
> {code}
> The NN exception log is:
> {code:java}
> java.lang.NullPointerException
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:722)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:689)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseFromNextRack(BlockPlacementPolicyDefault.java:640)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:608)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:483)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:390)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:419)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:266)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:119)
>   at org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3768)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3720)
>   at java.lang.Thread.run(Thread.java:834)
> {code}
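The failure mode and the null-check fix described above can be reduced to a small, self-contained sketch. This is an illustrative reduction, not the actual BlockPlacementPolicyDefault code: the `debugEnabled*` flags stand in for `LOG.isDebugEnabled()`, whose value can differ between the two call sites when the log level is changed at runtime.

```java
public class DebugBuilderDemo {
    static final ThreadLocal<StringBuilder> debugLoggingBuilder =
            ThreadLocal.withInitial(StringBuilder::new);

    // The builder is only initialized when debug was enabled at the first
    // check, so every later append must null-check before using it.
    static String chooseTarget(boolean debugEnabledAtStart, boolean debugEnabledLater) {
        StringBuilder builder = null;
        if (debugEnabledAtStart) {
            builder = debugLoggingBuilder.get();
            builder.setLength(0);
            builder.append("[");
        }
        // ... placement logic would run here ...
        if (debugEnabledLater && builder != null) {  // the null check prevents the NPE
            builder.append("\n]");
        }
        return builder == null ? "" : builder.toString();
    }

    public static void main(String[] args) {
        // Log level flipped from INFO to DEBUG mid-call: without the null
        // check, the second append would throw NullPointerException.
        System.out.println(chooseTarget(false, true));
        System.out.println(chooseTarget(true, true));
    }
}
```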
[jira] [Updated] (HDFS-12137) DN dataset lock should be fair
[ https://issues.apache.org/jira/browse/HDFS-12137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-12137:
---------------------------------
    Fix Version/s: (was: 2.8.3)
                   2.8.2

> DN dataset lock should be fair
> ------------------------------
>
>                 Key: HDFS-12137
>                 URL: https://issues.apache.org/jira/browse/HDFS-12137
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 2.8.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>             Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
>         Attachments: HDFS-12137.branch-2.patch, HDFS-12137.trunk.patch, HDFS-12137.trunk.patch
>
> The dataset lock is very highly contended. Its unfair nature can be especially harmful to heartbeat handling. Under high loads, partially exposed by HDFS-12136 introducing disk I/O within the lock, the heartbeat handling thread may process commands so slowly due to the contention that the node becomes stale or is falsely declared dead. The unfair lock is not helping and appears to be causing frequent starvation under load.
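Making a lock "fair" in Java amounts to constructing a `ReentrantLock` with the fairness flag set, so waiting threads (including the heartbeat handler) acquire it in roughly FIFO order instead of being starved by barging threads. A hedged sketch of the pattern, not the actual FsDataset code:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // true => fair mode: the longest-waiting thread acquires the lock next,
    // preventing the heartbeat-thread starvation described above at some
    // throughput cost versus the default unfair (barging) mode.
    final ReentrantLock datasetLock = new ReentrantLock(true);

    void heartbeatCriticalSection(Runnable work) {
        datasetLock.lock();
        try {
            work.run();
        } finally {
            datasetLock.unlock();
        }
    }

    public static void main(String[] args) {
        FairLockDemo d = new FairLockDemo();
        d.heartbeatCriticalSection(() -> System.out.println("heartbeat processed"));
        System.out.println("fair=" + d.datasetLock.isFair());
    }
}
```

Fair mode only changes acquisition ordering under contention; uncontended lock/unlock cost is similar, which is why the trade-off is usually acceptable for a lock this hot.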
[jira] [Updated] (HDFS-11742) Improve balancer usability after HDFS-8818
[ https://issues.apache.org/jira/browse/HDFS-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-11742:
---------------------------------
    Fix Version/s: (was: 2.8.3)
                   2.8.2

> Improve balancer usability after HDFS-8818
> ------------------------------------------
>
>                 Key: HDFS-11742
>                 URL: https://issues.apache.org/jira/browse/HDFS-11742
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Blocker
>             Fix For: 2.9.0, 2.7.4, 3.0.0-beta1, 2.8.2
>
>         Attachments: balancer2.8.png, balancer_fix.png, HDFS-11742.branch-2.8.patch, HDFS-11742.branch-2.patch, HDFS-11742.trunk.patch, HDFS-11742.v2.trunk.patch, replaceBlockNumOps-8w.jpg
>
> We ran the 2.8 balancer with HDFS-8818 on a 280-node and a 2,400-node cluster. In both cases, it would hang forever after two iterations. The two iterations were also moving things at a significantly lower rate. The hang itself is fixed by HDFS-11377, but the design limitation remains, so the balancer throughput ends up actually lower.
> Instead of reverting HDFS-8818 as originally suggested, I am making a small change to make it less error prone and more usable.
[jira] [Updated] (HDFS-12140) Remove BPOfferService lock contention to get block pool id
[ https://issues.apache.org/jira/browse/HDFS-12140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-12140:
---------------------------------
    Fix Version/s: (was: 2.8.3)
                   2.8.2

> Remove BPOfferService lock contention to get block pool id
> ----------------------------------------------------------
>
>                 Key: HDFS-12140
>                 URL: https://issues.apache.org/jira/browse/HDFS-12140
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.8.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>             Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
>         Attachments: HDFS-12140.branch-2.8.patch, HDFS-12140.trunk.patch
>
> The block pool id is protected by a lock in {{BPOfferService}}. This creates excessive contention, especially for xceiver threads attempting to queue IBRs and for heartbeat processing. When the latter is delayed due to excessive FSDataset lock contention, it causes pipelines to collapse.
> Accessing the block pool id should be lockless after registration.
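"Lockless after registration" is a classic write-once/read-many pattern: publish the value through a volatile field when registration completes, then let reader threads access it with no lock at all. An illustrative sketch under that assumption — the class and method names here are hypothetical, not the actual BPOfferService code:

```java
public class BlockPoolIdHolder {
    // Written once at registration, then read by many xceiver/heartbeat
    // threads; volatile guarantees the write is safely published to all
    // readers without any lock acquisition on the read path.
    private volatile String blockPoolId;

    void register(String bpid) {
        if (blockPoolId == null) {   // first registration wins
            blockPoolId = bpid;
        }
    }

    String getBlockPoolId() {
        return blockPoolId;          // lock-free read on the hot path
    }

    public static void main(String[] args) {
        BlockPoolIdHolder h = new BlockPoolIdHolder();
        h.register("BP-947993742-10.204.0.136-1362248978912");
        System.out.println(h.getBlockPoolId());
    }
}
```

The design choice: since the id never changes once assigned, readers need visibility, not mutual exclusion, so a volatile read replaces a contended monitor acquisition.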
[jira] [Updated] (HDFS-8312) Trash does not descent into child directories to check for permissions
[ https://issues.apache.org/jira/browse/HDFS-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-8312:
--------------------------------
    Fix Version/s: (was: 2.8.3)

> Trash does not descent into child directories to check for permissions
> ----------------------------------------------------------------------
>
>                 Key: HDFS-8312
>                 URL: https://issues.apache.org/jira/browse/HDFS-8312
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fs, security
>    Affects Versions: 2.2.0, 2.6.0, 2.7.2
>            Reporter: Eric Yang
>            Assignee: Weiwei Yang
>            Priority: Critical
>             Fix For: 2.9.0, 2.7.4, 3.0.0-alpha1, 2.8.2
>
>         Attachments: HDFS-8312-001.patch, HDFS-8312-002.patch, HDFS-8312-003.patch, HDFS-8312-004.patch, HDFS-8312-005.patch, HDFS-8312-branch-2.7.patch, HDFS-8312-branch-2.8.01.patch, HDFS-8312-branch-2.8.1.001.patch, HDFS-8312-testcase.patch
>
> HDFS trash does not descend into child directories to check whether the user has permission to delete the files there. For example:
> Run the following commands to initialize the directory structure as the super user (note: the third `chown` below should be `chmod`, since 750 is a mode, not an owner):
> {code}
> hadoop fs -mkdir /BSS/level1
> hadoop fs -mkdir /BSS/level1/level2
> hadoop fs -mkdir /BSS/level1/level2/level3
> hadoop fs -put /tmp/appConfig.json /BSS/level1/level2/level3/testfile.txt
> hadoop fs -chown user1:users /BSS/level1/level2/level3/testfile.txt
> hadoop fs -chown -R user1:users /BSS/level1
> hadoop fs -chmod -R 750 /BSS/level1
> hadoop fs -chmod -R 640 /BSS/level1/level2/level3/testfile.txt
> hadoop fs -chmod 775 /BSS
> {code}
> Change to a normal user called user2.
> When trash is enabled:
> {code}
> sudo su user2 -
> hadoop fs -rm -r /BSS/level1
> 15/05/01 16:51:20 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 3600 minutes, Emptier interval = 0 minutes.
> Moved: 'hdfs://bdvs323.svl.ibm.com:9000/BSS/level1' to trash at: hdfs://bdvs323.svl.ibm.com:9000/user/user2/.Trash/Current
> {code}
> When trash is disabled:
> {code}
> /opt/ibm/biginsights/IHC/bin/hadoop fs -Dfs.trash.interval=0 -rm -r /BSS/level1
> 15/05/01 16:58:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> rm: Permission denied: user=user2, access=ALL, inode="/BSS/level1":user1:users:drwxr-x---
> {code}
> There is an inconsistency between trash behavior and delete behavior. When trash is enabled, files owned by user1 are deleted by user2. It looks like trash does not recursively validate whether the files in child directories can be removed.
[jira] [Updated] (HDFS-11472) Fix inconsistent replica size after a data pipeline failure
[ https://issues.apache.org/jira/browse/HDFS-11472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-11472:
---------------------------------
    Fix Version/s: (was: 2.8.3)
                   2.8.2

> Fix inconsistent replica size after a data pipeline failure
> -----------------------------------------------------------
>
>                 Key: HDFS-11472
>                 URL: https://issues.apache.org/jira/browse/HDFS-11472
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Wei-Chiu Chuang
>            Assignee: Erik Krogen
>            Priority: Critical
>             Fix For: 2.9.0, 2.7.4, 3.0.0-beta1, 2.8.2
>
>         Attachments: HDFS-11472.001.patch, HDFS-11472.002.patch, HDFS-11472.003.patch, HDFS-11472.004.patch, HDFS-11472.005.patch, HDFS-11472-branch-2.005.patch, HDFS-11472-branch-2.7.005.patch, HDFS-11472-branch-2.8.005.patch, HDFS-11472.testcase.patch
>
> We observed a case where a replica's on-disk length is less than its acknowledged length, breaking the assumption in the recovery code.
> {noformat}
> 2017-01-08 01:41:03,532 WARN org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to obtain replica info for block (=BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394519586) from datanode (=DatanodeInfoWithStorage[10.204.138.17:1004,null,null])
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: getBytesOnDisk() < getVisibleLength(), rip=ReplicaBeingWritten, blk_2526438952_1101394519586, RBW
>   getNumBytes()     = 27530
>   getBytesOnDisk()  = 27006
>   getVisibleLength()= 27268
>   getVolume()       = /data/6/hdfs/datanode/current
>   getBlockFile()    = /data/6/hdfs/datanode/current/BP-947993742-10.204.0.136-1362248978912/current/rbw/blk_2526438952
>   bytesAcked=27268
>   bytesOnDisk=27006
>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2284)
>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2260)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2566)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:2577)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2645)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:245)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2551)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> It turns out that if an exception is thrown within {{BlockReceiver#receivePacket}}, the in-memory replica on-disk length may not be updated, but the data is written to disk anyway.
> For example, here's one exception we observed:
> {noformat}
> 2017-01-08 01:40:59,512 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394499067
> java.nio.channels.ClosedByInterruptException
>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>   at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269)
>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.adjustCrcChannelPosition(FsDatasetImpl.java:1484)
>   at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.adjustCrcFilePosition(BlockReceiver.java:994)
>   at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:670)
>   at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:857)
>   at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:797)
>   at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
>   at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
>   at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> There are potentially other places and causes where an exception is thrown within {{BlockReceiver#receivePacket}}, so it may not make much sense to alleviate it for this particular exception. Instead, we should improve the replica recovery code to handle the case where the on-disk size is less than the acknowledged size, and update the in-memory checksum accordingly.
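Using the numbers from the log above, the proposed direction — tolerate on-disk < acknowledged rather than throw — can be illustrated as recovery clamping to what is actually on disk. This is a simplification for illustration only, not the actual FsDatasetImpl recovery logic (which must also recompute the in-memory checksum):

```java
public class ReplicaRecoveryDemo {
    // Instead of throwing "THIS IS NOT SUPPOSED TO HAPPEN" when
    // bytesOnDisk < bytesAcked, recover up to the bytes actually on disk.
    static long safeRecoveryLength(long bytesAcked, long bytesOnDisk) {
        return Math.min(bytesAcked, bytesOnDisk);
    }

    public static void main(String[] args) {
        // Values from the InterDatanodeProtocol warning above:
        // bytesAcked=27268, bytesOnDisk=27006 => recover 27006 bytes.
        System.out.println(safeRecoveryLength(27268, 27006));
    }
}
```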
[jira] [Commented] (HDFS-12193) Fix style issues in HttpFS tests
[ https://issues.apache.org/jira/browse/HDFS-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099469#comment-16099469 ]

Hudson commented on HDFS-12193:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12051 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12051/])
HDFS-12193. Fix style issues in HttpFS tests. Contributed by Zoran (raviprak: rev c98201b5d83a700b4d08165c6fd1a6ef2eed)
* (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerNoXAttrs.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerNoACLs.java

> Fix style issues in HttpFS tests
> --------------------------------
>
>                 Key: HDFS-12193
>                 URL: https://issues.apache.org/jira/browse/HDFS-12193
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: httpfs
>    Affects Versions: 3.0.0-beta1
>            Reporter: Zoran Dimitrijevic
>            Assignee: Zoran Dimitrijevic
>            Priority: Trivial
>             Fix For: 3.0.0-beta1
>
>         Attachments: HDFS-12193.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> While refactoring httpfs tests for HDFS-12052 I've noticed many style issues that are easy to fix, but should not be fixed in the same patch where we are fixing the bug in the code.
> I've been asked by at least two committers to create a separate patch which will only cover these trivial style fixes. So, here it is.
[jira] [Commented] (HDFS-11920) Ozone : add key partition
[ https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099438#comment-16099438 ]

Weiwei Yang commented on HDFS-11920:
------------------------------------

Hi [~vagarychen]

Thanks for the patch, it looks good to me overall. I have a few comments; please let me know if they make sense to you.

1. *DistributedStorageHandler*
line 410: I am wondering why it is building the containerKey as "/volume/bucket/blockID" — why not simply use {{BlockID}} here? This seems to be the key that is written to container.db in the container metadata.

2. *ChunkOutputStream*
I am wondering whether it really needs to know about an ozone object key, see line 56. Right now it writes a chunk file like {{ozoneKeyName_stream_streamId_chunk_n}}; why not {{blockId_stream_streamId_chunk_n}} instead? I think we can remove this variable from this class.
line 168: it writes {{b}} length to the output stream but the position only moves by 1, which seems incorrect.

3. *TestMultipleContainerReadWrite*
In {{TestWriteRead}}, can we check that the number of chunk files for the key actually matches the desired number of splits?

4. It looks like the chunk group input/output streams maintain a list of streams and read/write in a linear manner. Can we optimize this to do parallel reads/writes, since they are independent chunks? That is, have a thread fetch a certain length of content from each chunk, then merge the results afterwards. It doesn't have to be done in this patch, but I think that might be a good improvement.

Thanks

> Ozone : add key partition
> -------------------------
>
>                 Key: HDFS-11920
>                 URL: https://issues.apache.org/jira/browse/HDFS-11920
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Chen Liang
>            Assignee: Chen Liang
>         Attachments: HDFS-11920-HDFS-7240.001.patch, HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, HDFS-11920-HDFS-7240.004.patch
>
> Currently, each key corresponds to one single SCM block, and putKey/getKey writes/reads to this single SCM block. This works fine for keys with reasonably small data size. However if the data is too huge (e.g. it does not even fit into a single container), then we need to be able to partition the key data into multiple blocks, each in one container. This JIRA changes the key-related classes to support this.
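For reference on review comment 2 above (line 168): the java.io.OutputStream contract for `write(int b)` is that exactly one byte — the low-order eight bits of `b` — is written, so the stream position should advance by one regardless of the value of `b`. A minimal demonstration with the standard library:

```java
import java.io.ByteArrayOutputStream;

public class WriteContractDemo {
    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        out.write(0x41);              // writes ONE byte ('A'), position +1
        System.out.println(out.size());

        out.write(0x1FF);             // still one byte: only the low-order
        System.out.println(out.size()); // 8 bits (0xFF) are kept
    }
}
```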
[jira] [Commented] (HDFS-12170) Ozone: OzoneFileSystem: KSM should maintain key creation time and modification time
[ https://issues.apache.org/jira/browse/HDFS-12170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099433#comment-16099433 ]

Yiqun Lin commented on HDFS-12170:
----------------------------------

Thanks [~msingh] for the work on this! The patch also looks good to me. One comment: it seems the modification time is always the same as the creation time now. Is there any case where we will update only the modification time in the future under OzoneFileSystem? Just curious about this.

Hi [~vagarychen],
bq. Is there a particular reason for this? can we just use one type?
Some other places here use the {{String}} type; I think the reason is that the fields {{createdOn}} and {{modifiedOn}} are date strings rather than long numeric values.

> Ozone: OzoneFileSystem: KSM should maintain key creation time and modification time
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-12170
>                 URL: https://issues.apache.org/jira/browse/HDFS-12170
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>             Fix For: HDFS-7240
>
>         Attachments: HDFS-12170-HDFS-7240.001.patch
>
> OzoneFileSystem will need modification times for files and directories created in the ozone file system.
> KSM should maintain the creation time and modification time for each individual key.
[jira] [Assigned] (HDFS-11984) Ozone: Ensures listKey lists all required key fields
[ https://issues.apache.org/jira/browse/HDFS-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang reassigned HDFS-11984:
----------------------------------
    Assignee: Yiqun Lin  (was: Weiwei Yang)

> Ozone: Ensures listKey lists all required key fields
> ----------------------------------------------------
>
>                 Key: HDFS-11984
>                 URL: https://issues.apache.org/jira/browse/HDFS-11984
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>            Reporter: Weiwei Yang
>            Assignee: Yiqun Lin
>
> HDFS-11782 implements the listKey operation, which only lists the basic key fields; we need to make sure it returns all required fields:
> # version
> # md5hash
> # createdOn
> # size
> # keyName
> This task depends on the work of HDFS-11886. See more discussion [here | https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].
[jira] [Commented] (HDFS-11984) Ozone: Ensures listKey lists all required key fields
[ https://issues.apache.org/jira/browse/HDFS-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099422#comment-16099422 ]

Weiwei Yang commented on HDFS-11984:
------------------------------------

Hi [~linyiqun]

Thanks for working on this. You are right, we don't need {{dataFileName}}; let me update the description. I listed this one as depending on HDFS-11886 because I thought this info would be persisted only when we commit the key (phase 2). However, HDFS-12170 was implemented while writing a key (phase 1), so it should be fine for now. We can keep HDFS-11886 open for further improvement on this. Meanwhile I will reassign this JIRA to you so you can work on this end-to-end. Thanks a lot for working on this, again. :)

> Ozone: Ensures listKey lists all required key fields
> ----------------------------------------------------
>
>                 Key: HDFS-11984
>                 URL: https://issues.apache.org/jira/browse/HDFS-11984
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>
> HDFS-11782 implements the listKey operation, which only lists the basic key fields; we need to make sure it returns all required fields:
> # version
> # md5hash
> # createdOn
> # size
> # keyName
> # dataFileName
> This task depends on the work of HDFS-11886. See more discussion [here | https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].
[jira] [Updated] (HDFS-11984) Ozone: Ensures listKey lists all required key fields
[ https://issues.apache.org/jira/browse/HDFS-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HDFS-11984:
-------------------------------
    Description:
HDFS-11782 implements the listKey operation which only lists the basic key fields, we need to make sure it returns all required fields:
# version
# md5hash
# createdOn
# size
# keyName
This task depends on the work of HDFS-11886. See more discussion [here | https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].

  was:
HDFS-11782 implements the listKey operation which only lists the basic key fields, we need to make sure it returns all required fields:
# version
# md5hash
# createdOn
# size
# keyName
# dataFileName
This task depends on the work of HDFS-11886. See more discussion [here | https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].

> Ozone: Ensures listKey lists all required key fields
> ----------------------------------------------------
>
>                 Key: HDFS-11984
>                 URL: https://issues.apache.org/jira/browse/HDFS-11984
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>
> HDFS-11782 implements the listKey operation, which only lists the basic key fields; we need to make sure it returns all required fields:
> # version
> # md5hash
> # createdOn
> # size
> # keyName
> This task depends on the work of HDFS-11886. See more discussion [here | https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].
[jira] [Commented] (HDFS-11984) Ozone: Ensures listKey lists all required key fields
[ https://issues.apache.org/jira/browse/HDFS-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099410#comment-16099410 ]

Yiqun Lin commented on HDFS-11984:
----------------------------------

Hi [~cheersyang],

HDFS-12170 is adding the creation time to the key info. I will file a new JIRA to implement the client-side work once that is merged. That will be part of the work of this JIRA. In addition, I have a question: do we really need the {{dataFileName}} field? From the design doc (page 39), I don't see this field returned.
{noformat}
{
  "keyName":"palantir",
  "version":0,
  "md5hash":"e6edf9e1cb57057502cdaafa998e1426",
  "createdOn":"Mon, Apr 04, 2016 06:22:00 GMT ",
  "size":1024
}
{noformat}

> Ozone: Ensures listKey lists all required key fields
> ----------------------------------------------------
>
>                 Key: HDFS-11984
>                 URL: https://issues.apache.org/jira/browse/HDFS-11984
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>
> HDFS-11782 implements the listKey operation, which only lists the basic key fields; we need to make sure it returns all required fields:
> # version
> # md5hash
> # createdOn
> # size
> # keyName
> # dataFileName
> This task depends on the work of HDFS-11886. See more discussion [here | https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].
[jira] [Closed] (HDFS-12193) Fix style issues in HttpFS tests
[ https://issues.apache.org/jira/browse/HDFS-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravi Prakash closed HDFS-12193.
-------------------------------

> Fix style issues in HttpFS tests
> --------------------------------
>
>                 Key: HDFS-12193
>                 URL: https://issues.apache.org/jira/browse/HDFS-12193
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: httpfs
>    Affects Versions: 3.0.0-beta1
>            Reporter: Zoran Dimitrijevic
>            Assignee: Zoran Dimitrijevic
>            Priority: Trivial
>             Fix For: 3.0.0-beta1
>
>         Attachments: HDFS-12193.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> While refactoring httpfs tests for HDFS-12052 I've noticed many style issues that are easy to fix, but should not be fixed in the same patch where we are fixing the bug in the code.
> I've been asked by at least two committers to create a separate patch which will only cover these trivial style fixes. So, here it is.
[jira] [Updated] (HDFS-12193) Fix style issues in HttpFS tests
[ https://issues.apache.org/jira/browse/HDFS-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HDFS-12193: Fix Version/s: 3.0.0-beta1 > Fix style issues in HttpFS tests > > > Key: HDFS-12193 > URL: https://issues.apache.org/jira/browse/HDFS-12193 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 3.0.0-beta1 >Reporter: Zoran Dimitrijevic >Assignee: Zoran Dimitrijevic >Priority: Trivial > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12193.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > While refactoring httpfs tests for HDFS-12052 I've noticed many style issues > that are easy to fix, but should not be fixed in the same patch when we are > fixing the bug in the code. > I've been asked by at least two committers to create a separate patch which > will only cover these trivial style fixes. So, here it is. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12193) Fix style issues in HttpFS tests
[ https://issues.apache.org/jira/browse/HDFS-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HDFS-12193: Resolution: Fixed Status: Resolved (was: Patch Available) Thanks for your contribution Zoran! > Fix style issues in HttpFS tests > > > Key: HDFS-12193 > URL: https://issues.apache.org/jira/browse/HDFS-12193 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 3.0.0-beta1 >Reporter: Zoran Dimitrijevic >Assignee: Zoran Dimitrijevic >Priority: Trivial > Attachments: HDFS-12193.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > While refactoring httpfs tests for HDFS-12052 I've noticed many style issues > that are easy to fix, but should not be fixed in the same patch when we are > fixing the bug in the code. > I've been asked by at least two committers to create a separate patch which > will only cover these trivial style fixes. So, here it is. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12193) Fix style issues in HttpFS tests
[ https://issues.apache.org/jira/browse/HDFS-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099393#comment-16099393 ] Ravi Prakash commented on HDFS-12193: - LGTM. +1. Committing shortly > Fix style issues in HttpFS tests > > > Key: HDFS-12193 > URL: https://issues.apache.org/jira/browse/HDFS-12193 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 3.0.0-beta1 >Reporter: Zoran Dimitrijevic >Assignee: Zoran Dimitrijevic >Priority: Trivial > Attachments: HDFS-12193.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > While refactoring httpfs tests for HDFS-12052 I've noticed many style issues > that are easy to fix, but should not be fixed in the same patch when we are > fixing the bug in the code. > I've been asked by at least two committers to create a separate patch which > will only cover these trivial style fixes. So, here it is. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration
[ https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-11896: Attachment: HDFS-11896-007.patch Used simulated capacities; hope this works... It always passes locally, perhaps because I am not running the tests in parallel, or because Jenkins might create some data. [~shv] Sorry again... I know the effort you have put into the {{branch-2.7}} release; I am looking forward to it as well. > Non-dfsUsed will be doubled on dead node re-registration > > > Key: HDFS-11896 > URL: https://issues.apache.org/jira/browse/HDFS-11896 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.3 >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Blocker > Labels: release-blocker > Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, > HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, > HDFS-11896-007.patch, HDFS-11896-branch-2.7-001.patch, > HDFS-11896-branch-2.7-002.patch, HDFS-11896-branch-2.7-003.patch, > HDFS-11896-branch-2.7-004.patch, HDFS-11896.patch > > > *Scenario:* > i) Make sure you have non-DFS data. > ii) Stop the Datanode. > iii) Wait until it becomes dead. > iv) Now restart and check the non-DFS data. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12155) Ozone : add RocksDB support to DEBUG CLI
[ https://issues.apache.org/jira/browse/HDFS-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099377#comment-16099377 ] Weiwei Yang commented on HDFS-12155: Hi [~vagarychen], I just committed HDFS-12187, could you resume your patch for this one? Thanks > Ozone : add RocksDB support to DEBUG CLI > > > Key: HDFS-12155 > URL: https://issues.apache.org/jira/browse/HDFS-12155 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-12155-HDFS-7240.001.patch, > HDFS-12155-HDFS-7240.002.patch > > > As we are migrating to replacing LevelDB with RocksDB, we should also add the > support of RocksDB to the debug cli. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12187) Ozone : add support to DEBUG CLI for ksm.db
[ https://issues.apache.org/jira/browse/HDFS-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12187: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Target Version/s: HDFS-7240 Status: Resolved (was: Patch Available) I just committed this to the feature branch, thanks a lot for the contribution [~vagarychen], and thanks for the review [~anu]. > Ozone : add support to DEBUG CLI for ksm.db > --- > > Key: HDFS-12187 > URL: https://issues.apache.org/jira/browse/HDFS-12187 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Fix For: HDFS-7240 > > Attachments: HDFS-12187-HDFS-7240.001.patch, > HDFS-12187-HDFS-7240.002.patch > > > This JIRA adds the ability to convert ksm meta data file (ksm.db) into sqlite > db. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12145) Ozone: OzoneFileSystem: Ozone & KSM should support "/" delimited key names
[ https://issues.apache.org/jira/browse/HDFS-12145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12145: --- Attachment: HDFS-12145-HDFS-7240.006.patch Hi [~msingh], apologies if my earlier comment was not clear. I uploaded a v6 patch based on your v5 patch. Basically, I wanted both non-delimited and delimited keys to be covered by the {{TestKeys}} class; please check and let me know if this looks good to you. Thanks a lot. > Ozone: OzoneFileSystem: Ozone & KSM should support "/" delimited key names > -- > > Key: HDFS-12145 > URL: https://issues.apache.org/jira/browse/HDFS-12145 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Fix For: HDFS-7240 > > Attachments: HDFS-12145-HDFS-7240.001.patch, > HDFS-12145-HDFS-7240.002.patch, HDFS-12145-HDFS-7240.003.patch, > HDFS-12145-HDFS-7240.004.patch, HDFS-12145-HDFS-7240.005.patch, > HDFS-12145-HDFS-7240.006.patch > > > With OzoneFileSystem, key names will be delimited by "/" which is used as the > path separator. > Support should be added in KSM and Ozone to support key names with "/" -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12187) Ozone : add support to DEBUG CLI for ksm.db
[ https://issues.apache.org/jira/browse/HDFS-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099362#comment-16099362 ] Weiwei Yang commented on HDFS-12187: +1, I am going to commit this shortly. Thanks [~vagarychen]. > Ozone : add support to DEBUG CLI for ksm.db > --- > > Key: HDFS-12187 > URL: https://issues.apache.org/jira/browse/HDFS-12187 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-12187-HDFS-7240.001.patch, > HDFS-12187-HDFS-7240.002.patch > > > This JIRA adds the ability to convert ksm meta data file (ksm.db) into sqlite > db. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12194) File and Directory metadataEquals() does incorrect comparisons for Features
[ https://issues.apache.org/jira/browse/HDFS-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12194: -- Description: When calculating snapshot diff, the metadata comparisons for Files and Directories are doing object reference equality (==) check instead of the equals() check. So, a file with the ACL set exactly same as the old one will still be flagged as changed. INodeFile and SnapshotCopy #metadataEquals() {noformat} @Override public boolean metadataEquals(INodeFileAttributes other) { return other != null && getHeaderLong()== other.getHeaderLong() && getPermissionLong() == other.getPermissionLong() && getAclFeature() == other.getAclFeature() && getXAttrFeature() == other.getXAttrFeature(); } {noformat} INodeDirectory, SnapshotCopy #metadataEquals() {noformat} @Override public boolean metadataEquals(INodeDirectoryAttributes other) { return other != null && getQuotaCounts().equals(other.getQuotaCounts()) && getPermissionLong() == other.getPermissionLong() && getAclFeature() == other.getAclFeature() && getXAttrFeature() == other.getXAttrFeature(); } {noformat} was: Looks like the metadata comparisons for Files and Directories are doing object reference equality instead of the equals() check. So, a file with the ACL set exactly same as the old one will still flag as changed. 
INodeFile and SnapshotCopy #metadataEquals() {noformat} @Override public boolean metadataEquals(INodeFileAttributes other) { return other != null && getHeaderLong()== other.getHeaderLong() && getPermissionLong() == other.getPermissionLong() && getAclFeature() == other.getAclFeature() && getXAttrFeature() == other.getXAttrFeature(); } {noformat} INodeDirectory, SnapshotCopy #metadataEquals() {noformat} @Override public boolean metadataEquals(INodeDirectoryAttributes other) { return other != null && getQuotaCounts().equals(other.getQuotaCounts()) && getPermissionLong() == other.getPermissionLong() && getAclFeature() == other.getAclFeature() && getXAttrFeature() == other.getXAttrFeature(); } {noformat} > File and Directory metadataEquals() does incorrect comparisons for Features > --- > > Key: HDFS-12194 > URL: https://issues.apache.org/jira/browse/HDFS-12194 > Project: Hadoop HDFS > Issue Type: Bug > Components: snapshots >Affects Versions: 2.8.0 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > > When calculating snapshot diff, the metadata comparisons for Files and > Directories are doing object reference equality (==) check instead of the > equals() check. So, a file with the ACL set exactly same as the old one will > still be flagged as changed. 
> INodeFile and SnapshotCopy #metadataEquals() > {noformat} > @Override > public boolean metadataEquals(INodeFileAttributes other) { > return other != null > && getHeaderLong()== other.getHeaderLong() > && getPermissionLong() == other.getPermissionLong() > && getAclFeature() == other.getAclFeature() > && getXAttrFeature() == other.getXAttrFeature(); > } > {noformat} > INodeDirectory, SnapshotCopy #metadataEquals() > {noformat} > @Override > public boolean metadataEquals(INodeDirectoryAttributes other) { > return other != null > && getQuotaCounts().equals(other.getQuotaCounts()) > && getPermissionLong() == other.getPermissionLong() > && getAclFeature() == other.getAclFeature() > && getXAttrFeature() == other.getXAttrFeature(); > } > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
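The reference-vs-value equality issue described above can be illustrated with a small stand-alone sketch. The {{Feature}} class below is a hypothetical stand-in, not the actual HDFS {{AclFeature}}/{{XAttrFeature}} types; a real fix would also require value-based {{equals()}}/{{hashCode()}} on those feature classes.

```java
// Hypothetical sketch: comparing feature objects by value instead of by
// reference. java.util.Objects.equals handles nulls safely, which matters
// because a feature may legitimately be absent (null) on either side.
import java.util.Objects;

class FeatureEqualsSketch {
    // Stand-in for AclFeature/XAttrFeature, with value-based equality.
    static final class Feature {
        final String payload;
        Feature(String payload) { this.payload = payload; }
        @Override public boolean equals(Object o) {
            return o instanceof Feature && Objects.equals(((Feature) o).payload, payload);
        }
        @Override public int hashCode() { return Objects.hashCode(payload); }
    }

    // Reference equality (the reported behavior): equal-valued features differ.
    static boolean byReference(Feature a, Feature b) { return a == b; }

    // Value equality (the intended behavior): null-safe equals() comparison.
    static boolean byValue(Feature a, Feature b) { return Objects.equals(a, b); }

    public static void main(String[] args) {
        Feature acl1 = new Feature("user:alice:rw-");
        Feature acl2 = new Feature("user:alice:rw-"); // same ACL, different object
        System.out.println(byReference(acl1, acl2)); // false: flagged as changed
        System.out.println(byValue(acl1, acl2));     // true: correctly unchanged
    }
}
```

With reference equality, two features carrying identical ACL entries compare as different, which is exactly why an unchanged file shows up in the snapshot diff.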
[jira] [Updated] (HDFS-12194) File and Directory metadataEquals() does incorrect comparisons for Features
[ https://issues.apache.org/jira/browse/HDFS-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12194: -- Summary: File and Directory metadataEquals() does incorrect comparisons for Features (was: File and Directory metadataEquals does incorrect comparisons for Features) > File and Directory metadataEquals() does incorrect comparisons for Features > --- > > Key: HDFS-12194 > URL: https://issues.apache.org/jira/browse/HDFS-12194 > Project: Hadoop HDFS > Issue Type: Bug > Components: snapshots >Affects Versions: 2.8.0 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > > Looks like the metadata comparisons for Files and Directories are doing > object reference equality instead of the equals() check. So, a file with the > ACL set exactly same as the old one will still flag as changed. > INodeFile and SnapshotCopy #metadataEquals() > {noformat} > @Override > public boolean metadataEquals(INodeFileAttributes other) { > return other != null > && getHeaderLong()== other.getHeaderLong() > && getPermissionLong() == other.getPermissionLong() > && getAclFeature() == other.getAclFeature() > && getXAttrFeature() == other.getXAttrFeature(); > } > {noformat} > INodeDirectory, SnapshotCopy #metadataEquals() > {noformat} > @Override > public boolean metadataEquals(INodeDirectoryAttributes other) { > return other != null > && getQuotaCounts().equals(other.getQuotaCounts()) > && getPermissionLong() == other.getPermissionLong() > && getAclFeature() == other.getAclFeature() > && getXAttrFeature() == other.getXAttrFeature(); > } > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration
[ https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099285#comment-16099285 ] Konstantin Shvachko commented on HDFS-11896: This is still failing locally for me: {code} java.lang.AssertionError: NonDFS should include actual DN NonDFSUsed expected:<245913960448> but was:<245914312704> at org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.testNonDFSUsedONDeadNodeReReg(TestDeadDatanode.java:222) {code} It seems that nonDfsUsed cannot be exactly the same at different times, because somebody is always writing to disk, including this test's own logging. > Non-dfsUsed will be doubled on dead node re-registration > > > Key: HDFS-11896 > URL: https://issues.apache.org/jira/browse/HDFS-11896 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.3 >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Blocker > Labels: release-blocker > Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, > HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, > HDFS-11896-branch-2.7-001.patch, HDFS-11896-branch-2.7-002.patch, > HDFS-11896-branch-2.7-003.patch, HDFS-11896-branch-2.7-004.patch, > HDFS-11896.patch > > > *Scenario:* > i) Make sure you have non-DFS data. > ii) Stop the Datanode. > iii) Wait until it becomes dead. > iv) Now restart and check the non-DFS data. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
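The flakiness described in the comment can be reproduced with the two samples from the assertion message. A tolerance-based comparison, sketched below, is one hypothetical way to absorb background disk writes; the actual 007 patch instead uses simulated capacities, which removes the nondeterminism entirely. The class and method names here are illustrative, not part of HDFS.

```java
// Hypothetical illustration of why an exact-equality assertion on live disk
// usage is flaky, and a tolerance-based check as one possible workaround.
class NonDfsUsedCheck {
    // Exact comparison: fails whenever anything writes to disk between samples.
    static boolean exactlyEqual(long expected, long actual) {
        return expected == actual;
    }

    // Tolerance-based comparison: absorbs small background writes (e.g. test logs).
    static boolean withinTolerance(long expected, long actual, long toleranceBytes) {
        return Math.abs(expected - actual) <= toleranceBytes;
    }

    public static void main(String[] args) {
        long expected = 245_913_960_448L; // first sample, from the failure message
        long actual   = 245_914_312_704L; // second sample, ~344 KB of new writes later
        System.out.println(exactlyEqual(expected, actual));             // false
        System.out.println(withinTolerance(expected, actual, 1 << 20)); // true: within 1 MB
    }
}
```

The 344 KB drift between the two samples is smaller than a 1 MB tolerance, so the relaxed check passes where the exact one fails.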
[jira] [Updated] (HDFS-12115) Ozone: SCM: Add queryNode RPC Call
[ https://issues.apache.org/jira/browse/HDFS-12115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12115: Attachment: HDFS-12115-HDFS-7240.009.patch Rebased and updated. > Ozone: SCM: Add queryNode RPC Call > -- > > Key: HDFS-12115 > URL: https://issues.apache.org/jira/browse/HDFS-12115 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-12115-HDFS-7240.001.patch, > HDFS-12115-HDFS-7240.002.patch, HDFS-12115-HDFS-7240.003.patch, > HDFS-12115-HDFS-7240.004.patch, HDFS-12115-HDFS-7240.005.patch, > HDFS-12115-HDFS-7240.006.patch, HDFS-12115-HDFS-7240.007.patch, > HDFS-12115-HDFS-7240.008.patch, HDFS-12115-HDFS-7240.009.patch > > > Add queryNode RPC to Storage container location protocol. This allows > applications like SCM CLI to get the list of nodes in various states, like > Healthy, live or Dead. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12170) Ozone: OzoneFileSystem: KSM should maintain key creation time and modification time
[ https://issues.apache.org/jira/browse/HDFS-12170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099235#comment-16099235 ] Chen Liang commented on HDFS-12170: --- Thanks [~msingh] for the patch! Looks good to me overall; only one thing: it looks like in certain places the time is used as a {{long}}, and in other places as a {{String}}. Is there a particular reason for this? Can we just use one type? I personally prefer using just the {{long}} type, so that we don't rely on the format specified in hadoop-common, and we can compare older/newer easily. > Ozone: OzoneFileSystem: KSM should maintain key creation time and > modification time > --- > > Key: HDFS-12170 > URL: https://issues.apache.org/jira/browse/HDFS-12170 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Fix For: HDFS-7240 > > Attachments: HDFS-12170-HDFS-7240.001.patch > > > OzoneFileSystem will need modification time for files and directories created > in ozone file system. > KSM should maintain key creation time and modification time for the > individual key. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
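The suggestion in the comment can be sketched as follows, assuming epoch-millisecond timestamps (the class, method names, and sample values are hypothetical, not the actual KSM API): keep the time as a {{long}} internally and format it as a {{String}} only at the display boundary, so older/newer comparison stays a plain numeric check.

```java
// Hypothetical sketch: store key creation/modification time as a long
// (epoch millis) and format it for display only at the presentation layer.
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

class KeyTimeSketch {
    // RFC 1123 formatter, matching the "Mon, Apr 04, 2016 ..."-style dates
    // shown elsewhere in the Ozone docs (formatting details are assumed).
    static final DateTimeFormatter HTTP_DATE =
        DateTimeFormatter.RFC_1123_DATE_TIME.withZone(ZoneOffset.UTC);

    // With longs, older/newer is a trivial numeric comparison.
    static boolean isNewer(long a, long b) { return a > b; }

    // Convert to a human-readable String only when displaying the key info.
    static String display(long epochMillis) {
        return HTTP_DATE.format(Instant.ofEpochMilli(epochMillis));
    }

    public static void main(String[] args) {
        long created  = 1_459_750_920_000L; // assumed sample timestamp (April 2016)
        long modified = created + 60_000;   // one minute later
        System.out.println(isNewer(modified, created)); // true
        System.out.println(display(created));
    }
}
```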
[jira] [Updated] (HDFS-12155) Ozone : add RocksDB support to DEBUG CLI
[ https://issues.apache.org/jira/browse/HDFS-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12155: -- Attachment: HDFS-12155-HDFS-7240.002.patch > Ozone : add RocksDB support to DEBUG CLI > > > Key: HDFS-12155 > URL: https://issues.apache.org/jira/browse/HDFS-12155 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-12155-HDFS-7240.001.patch, > HDFS-12155-HDFS-7240.002.patch > > > As we are migrating to replacing LevelDB with RocksDB, we should also add the > support of RocksDB to the debug cli. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12193) Fix style issues in HttpFS tests
[ https://issues.apache.org/jira/browse/HDFS-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HDFS-12193: Status: Patch Available (was: Open) > Fix style issues in HttpFS tests > > > Key: HDFS-12193 > URL: https://issues.apache.org/jira/browse/HDFS-12193 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 3.0.0-beta1 >Reporter: Zoran Dimitrijevic >Assignee: Zoran Dimitrijevic >Priority: Trivial > Attachments: HDFS-12193.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > While refactoring httpfs tests for HDFS-12052 I've noticed many style issues > that are easy to fix, but should not be fixed in the same patch when we are > fixing the bug in the code. > I've been asked by at least two committers to create a separate patch which > will only cover these trivial style fixes. So, here it is. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11920) Ozone : add key partition
[ https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11920: -- Status: In Progress (was: Patch Available) > Ozone : add key partition > - > > Key: HDFS-11920 > URL: https://issues.apache.org/jira/browse/HDFS-11920 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11920-HDFS-7240.001.patch, > HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, > HDFS-11920-HDFS-7240.004.patch > > > Currently, each key corresponds to one single SCM block, and putKey/getKey > writes/reads to this single SCM block. This works fine for keys with > reasonably small data size. However if the data is too huge, (e.g. not even > fits into a single container), then we need to be able to partition the key > data into multiple blocks, each in one container. This JIRA changes the > key-related classes to support this. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11920) Ozone : add key partition
[ https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11920: -- Status: Patch Available (was: In Progress) > Ozone : add key partition > - > > Key: HDFS-11920 > URL: https://issues.apache.org/jira/browse/HDFS-11920 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11920-HDFS-7240.001.patch, > HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, > HDFS-11920-HDFS-7240.004.patch > > > Currently, each key corresponds to one single SCM block, and putKey/getKey > writes/reads to this single SCM block. This works fine for keys with > reasonably small data size. However if the data is too huge, (e.g. not even > fits into a single container), then we need to be able to partition the key > data into multiple blocks, each in one container. This JIRA changes the > key-related classes to support this. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12194) File and Directory metadataEquals does incorrect comparisons for Features
[ https://issues.apache.org/jira/browse/HDFS-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099194#comment-16099194 ] Manoj Govindassamy commented on HDFS-12194: --- Adding [~yzhangal], [~jojochuang], [~jingzhao] for discussion. > File and Directory metadataEquals does incorrect comparisons for Features > - > > Key: HDFS-12194 > URL: https://issues.apache.org/jira/browse/HDFS-12194 > Project: Hadoop HDFS > Issue Type: Bug > Components: snapshots >Affects Versions: 2.8.0 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > > Looks like the metadata comparisons for Files and Directories are doing > object reference equality instead of the equals() check. So, a file with the > ACL set exactly same as the old one will still flag as changed. > INodeFile and SnapshotCopy #metadataEquals() > {noformat} > @Override > public boolean metadataEquals(INodeFileAttributes other) { > return other != null > && getHeaderLong()== other.getHeaderLong() > && getPermissionLong() == other.getPermissionLong() > && getAclFeature() == other.getAclFeature() > && getXAttrFeature() == other.getXAttrFeature(); > } > {noformat} > INodeDirectory, SnapshotCopy #metadataEquals() > {noformat} > @Override > public boolean metadataEquals(INodeDirectoryAttributes other) { > return other != null > && getQuotaCounts().equals(other.getQuotaCounts()) > && getPermissionLong() == other.getPermissionLong() > && getAclFeature() == other.getAclFeature() > && getXAttrFeature() == other.getXAttrFeature(); > } > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12194) File and Directory metadataEquals does incorrect comparisons for Features
Manoj Govindassamy created HDFS-12194: - Summary: File and Directory metadataEquals does incorrect comparisons for Features Key: HDFS-12194 URL: https://issues.apache.org/jira/browse/HDFS-12194 Project: Hadoop HDFS Issue Type: Bug Components: snapshots Affects Versions: 2.8.0 Reporter: Manoj Govindassamy Assignee: Manoj Govindassamy Looks like the metadata comparisons for Files and Directories are doing object reference equality instead of the equals() check. So, a file with the ACL set exactly the same as the old one will still be flagged as changed. INodeFile and SnapshotCopy #metadataEquals() {noformat} @Override public boolean metadataEquals(INodeFileAttributes other) { return other != null && getHeaderLong()== other.getHeaderLong() && getPermissionLong() == other.getPermissionLong() && getAclFeature() == other.getAclFeature() && getXAttrFeature() == other.getXAttrFeature(); } {noformat} INodeDirectory, SnapshotCopy #metadataEquals() {noformat} @Override public boolean metadataEquals(INodeDirectoryAttributes other) { return other != null && getQuotaCounts().equals(other.getQuotaCounts()) && getPermissionLong() == other.getPermissionLong() && getAclFeature() == other.getAclFeature() && getXAttrFeature() == other.getXAttrFeature(); } {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12193) Fix style issues in HttpFS tests
[ https://issues.apache.org/jira/browse/HDFS-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoran Dimitrijevic updated HDFS-12193: -- Attachment: HDFS-12193.patch > Fix style issues in HttpFS tests > > > Key: HDFS-12193 > URL: https://issues.apache.org/jira/browse/HDFS-12193 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 3.0.0-beta1 >Reporter: Zoran Dimitrijevic >Assignee: Zoran Dimitrijevic >Priority: Trivial > Attachments: HDFS-12193.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > While refactoring httpfs tests for HDFS-12052 I've noticed many style issues > that are easy to fix, but should not be fixed in the same patch when we are > fixing the bug in the code. > I've been asked by at least two committers to create a separate patch which > will only cover these trivial style fixes. So, here it is. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12193) Fix style issues in HttpFS tests
Zoran Dimitrijevic created HDFS-12193: - Summary: Fix style issues in HttpFS tests Key: HDFS-12193 URL: https://issues.apache.org/jira/browse/HDFS-12193 Project: Hadoop HDFS Issue Type: Improvement Components: httpfs Affects Versions: 3.0.0-beta1 Reporter: Zoran Dimitrijevic Assignee: Zoran Dimitrijevic Priority: Trivial While refactoring httpfs tests for HDFS-12052 I've noticed many style issues that are easy to fix, but should not be fixed in the same patch when we are fixing the bug in the code. I've been asked by at least two committers to create a separate patch which will only cover these trivial style fixes. So, here it is. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12090) Handling writes from HDFS to Provided storages
[ https://issues.apache.org/jira/browse/HDFS-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099178#comment-16099178 ] Virajith Jalaparti commented on HDFS-12090: --- Hi [~rakeshr], Sorry about the delayed response! bq. it looks to me that user has to set the PROVIDED storage policy explicitly. This is the case only if {{-createMountOnly}} is specified. If not, the policy is automatically set, and the data moves are initiated in the Namenode (using SPS). bq. I thought of passing another optional argument -storagePolicy to the mount cmd and user get the chance to pass the desired policies That's a good idea. We didn't really think about different types of {{PROVIDED}} policies (e.g. as you mentioned, {{DISK:2, PROVIDED:1}}, {{SSD:1, PROVIDED:1}}) but I think this makes sense. We can add this in. bq. So, this requires user intervention to configure the volume details and reload data volume, right? Not necessarily. Once the mount is set up on the Namenode, it can instruct the datanodes to load the volume required for the mount. However, we would need to know what volume should be mounted (can be specified by a configuration parameter or as part of the mount command), and which datanodes should take part in this process. bq. Secondly, are you saying that user mount Vs volume is one-to-one mapping(I meant, for each mount point admin need to define a unique volume)?. IMHO, this can be one-to-many mapping. I have been thinking about this as a 1-1 mapping. So, each mount point will have a different volume (on the Datanodes). This makes it easier to manage things like credentials to access the remote store, as different mount points can belong to different remote storage accounts. In a one-to-many mapping, these would have to be specifically managed within the volume. Do you have any particular use-case/scenario in mind where a one-to-many mapping might be better/more performant? 
> Handling writes from HDFS to Provided storages > -- > > Key: HDFS-12090 > URL: https://issues.apache.org/jira/browse/HDFS-12090 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Virajith Jalaparti > Attachments: HDFS-12090-design.001.pdf > > > HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in > external storage systems accessible through HDFS. However, HDFS-9806 is > limited to data being read through HDFS. This JIRA will deal with how data > can be written to such {{PROVIDED}} storages from HDFS. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12190) Enable 'hdfs dfs -stat' to display access time
[ https://issues.apache.org/jira/browse/HDFS-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099116#comment-16099116 ] Chen Liang commented on HDFS-12190: --- Looks like the v001 patch needs to be rebased. > Enable 'hdfs dfs -stat' to display access time > -- > > Key: HDFS-12190 > URL: https://issues.apache.org/jira/browse/HDFS-12190 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, shell >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > Attachments: HDFS-12190.001.patch > > > "hdfs dfs -stat" currently can only show the modification time of a file but not > the access time. Sometimes it's useful to show access time. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12187) Ozone : add support to DEBUG CLI for ksm.db
[ https://issues.apache.org/jira/browse/HDFS-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099063#comment-16099063 ] Hadoop QA commented on HDFS-12187: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 97m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestRatisManager | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12187 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12878660/HDFS-12187-HDFS-7240.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 32ab25b6460a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / c539095 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20400/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20400/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20400/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone : add support to DEBUG CLI for ksm.db > --- > > Key: HDFS-12187 > URL: https://issues.apache.org/jira/browse/HDFS-12187 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-12187-HDFS-7240.001.patch, > HDFS-12187-HDFS-7240.002.patch >
[jira] [Commented] (HDFS-12178) Ozone: OzoneClient: Handling SCM container creationFlag at client side
[ https://issues.apache.org/jira/browse/HDFS-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098989#comment-16098989 ] Nandakumar commented on HDFS-12178: --- Thanks [~vagarychen] for the review. > Curious though, is there a case where multiple threads will try to create the > same container? Since we are not checking {{keyInfo.getShouldCreateContainer()}} for container creation, every {{createKey}} call will try to create the container (if it is not in the local cache {{containersCreated}}). And tools like Corona will start multiple threads to put data into ozone; in such scenarios multiple threads will try to create the same container. > createContainer can fail, but it already gets added to containersCreated set > to this point It is expected that the {{ContainerProtocolCalls.createContainer(xceiverClient, requestId)}} call will fail most of the time, since we will be trying to create an already existing container. If we switch the order of those two statements, we will never add containerName to the cache, since there will always be an exception thrown during the createContainer call (in the case of an existing container), and every {{createKey}} call will always try to create the container, which is the same as not having the {{containersCreated}} cache at all. > or simply move containersCreated.add() to after the try-catch? This will work, but it doesn't make much of a difference whether we have it as the first statement inside the try block or move it entirely outside the try-catch block; both will behave the same way. 
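The caching behaviour described above can be sketched as follows. This is a hedged illustration with hypothetical names, not the actual OzoneClient code: the container name goes into the cache before the RPC precisely because an "already exists" failure is the expected common case, and caching only after a successful call would leave the cache permanently empty.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a client-side "containers created" cache. Names are
// illustrative; the real client calls
// ContainerProtocolCalls.createContainer(xceiverClient, requestId).
public class ContainerCacheSketch {
    private final Set<String> containersCreated = ConcurrentHashMap.newKeySet();

    // Stand-in for the createContainer RPC; in the real client this
    // throws when the container already exists on SCM.
    private void createContainer(String name) throws Exception {
    }

    // Returns true if this call attempted creation, false if cached.
    public boolean ensureContainer(String name) {
        if (!containersCreated.add(name)) {
            return false; // another createKey already handled this container
        }
        try {
            createContainer(name);
        } catch (Exception e) {
            // "Already exists" is the common, benign failure here, so the
            // name stays cached either way — exactly the ordering debated
            // in the comments above.
        }
        return true;
    }
}
```

The concurrent set also covers the multi-threaded case (e.g. tools like Corona): only the first thread to add the name issues the RPC.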
> Ozone: OzoneClient: Handling SCM container creationFlag at client side > -- > > Key: HDFS-12178 > URL: https://issues.apache.org/jira/browse/HDFS-12178 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Attachments: HDFS-12178-HDFS-7240.000.patch, > HDFS-12178-HDFS-7240.001.patch > > > SCM BlockManager provisions a pool of containers upon block creation request, > only one container is returned with creationFlag to the client. The other > containers provisioned in the same batch will not have this flag. This jira > is to handle that scenario at client side, until HDFS-11888 is fixed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12178) Ozone: OzoneClient: Handling SCM container creationFlag at client side
[ https://issues.apache.org/jira/browse/HDFS-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098930#comment-16098930 ] Chen Liang commented on HDFS-12178: --- Thanks [~nandakumar131] for the patch! Nice catch on the multi-threading case. Curious though, is there a case where multiple threads will try to create the same container? And for these two lines: {code} containersCreated.add(containerName); ContainerProtocolCalls.createContainer(xceiverClient, requestId); {code} {{createContainer}} can fail, but it already gets added to containersCreated set to this point. So how about switch the order of the two calls, or simply move {{containersCreated.add()}} to after the try-catch? > Ozone: OzoneClient: Handling SCM container creationFlag at client side > -- > > Key: HDFS-12178 > URL: https://issues.apache.org/jira/browse/HDFS-12178 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Attachments: HDFS-12178-HDFS-7240.000.patch, > HDFS-12178-HDFS-7240.001.patch > > > SCM BlockManager provisions a pool of containers upon block creation request, > only one container is returned with creationFlag to the client. The other > containers provisioned in the same batch will not have this flag. This jira > is to handle that scenario at client side, until HDFS-11888 is fixed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12049) Recommissioning live nodes stalls the NN
[ https://issues.apache.org/jira/browse/HDFS-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098926#comment-16098926 ] Daryn Sharp commented on HDFS-12049: I'm ok with it. > Recommissioning live nodes stalls the NN > > > Key: HDFS-12049 > URL: https://issues.apache.org/jira/browse/HDFS-12049 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Priority: Critical > > A node refresh will recommission included nodes that are alive and in > decommissioning or decommissioned state. The recommission will scan all > blocks on the node, find over replicated blocks, choose an excess, and queue an > invalidate. > The process is expensive and worsened by the overhead of storage types (even when > not in use). It can be especially devastating because the write lock is held > for the entire node refresh. _Recommissioning 67 nodes with ~500k > blocks/node stalled rpc services for over 4 mins._ -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12187) Ozone : add support to DEBUG CLI for ksm.db
[ https://issues.apache.org/jira/browse/HDFS-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12187: -- Attachment: HDFS-12187-HDFS-7240.002.patch Thanks [~cheersyang] for the catch, post v002 patch. > Ozone : add support to DEBUG CLI for ksm.db > --- > > Key: HDFS-12187 > URL: https://issues.apache.org/jira/browse/HDFS-12187 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-12187-HDFS-7240.001.patch, > HDFS-12187-HDFS-7240.002.patch > > > This JIRA adds the ability to convert ksm meta data file (ksm.db) into sqlite > db. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block
[ https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098783#comment-16098783 ] Hadoop QA commented on HDFS-12102: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 22 new + 465 unchanged - 0 fixed = 487 total (was 465) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 45s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 10 unchanged - 0 fixed = 12 total (was 10) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 58s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}111m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Inconsistent synchronization of org.apache.hadoop.hdfs.server.datanode.VolumeScanner.currentScanPeriod; locked 66% of time Unsynchronized access at VolumeScanner.java:66% of time Unsynchronized access at VolumeScanner.java:[line 585] | | | Inconsistent synchronization of org.apache.hadoop.hdfs.server.datanode.VolumeScanner.restartScan; locked 66% of time Unsynchronized access at VolumeScanner.java:66% of time Unsynchronized access at VolumeScanner.java:[line 489] | | Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12102 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12878635/HDFS-12102-003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 024dddcaa353 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (HDFS-12182) BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks
[ https://issues.apache.org/jira/browse/HDFS-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098570#comment-16098570 ] Wellington Chevreuil commented on HDFS-12182: - Hi [~jojochuang], thanks a lot for the review. Attached a new patch with the new code inside the synchronized block, and a test case for the new behaviour as well. > BlockManager.metaSave does not distinguish between "under replicated" and > "missing" blocks > -- > > Key: HDFS-12182 > URL: https://issues.apache.org/jira/browse/HDFS-12182 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Trivial > Labels: newbie > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-12182.001.patch, HDFS-12182.002.patch > > > Currently, *BlockManager.metaSave* method (which is called by "-metasave" dfs > CLI command) reports both "under replicated" and "missing" blocks under same > metric *Metasave: Blocks waiting for reconstruction:* as shown on below code > snippet: > {noformat} >synchronized (neededReconstruction) { > out.println("Metasave: Blocks waiting for reconstruction: " > + neededReconstruction.size()); > for (Block block : neededReconstruction) { > dumpBlockMeta(block, out); > } > } > {noformat} > *neededReconstruction* is an instance of *LowRedundancyBlocks*, which > actually wraps 5 priority queues currently. 4 of these queues store different > under replicated scenarios, but the 5th one is dedicated for corrupt/missing > blocks. > Thus, metasave report may suggest some corrupt blocks are just under > replicated. This can be misleading for admins and operators trying to track > block missing/corruption issues, and/or other issues related to > *BlockManager* metrics. > I would like to propose a patch with trivial changes that would report > corrupt blocks separately. 
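For readers following the discussion, here is a minimal sketch of the reporting split the description proposes. The names and headings are illustrative, not the actual patch: it only shows the idea of counting the corrupt/missing queue (the fifth queue in LowRedundancyBlocks) under its own heading instead of folding it into the reconstruction count.

```java
import java.util.List;

// Sketch: given the five priority queues wrapped by LowRedundancyBlocks
// (first four = under-replicated, last = corrupt/missing), report the
// corrupt queue separately so "missing" blocks are not mistaken for
// merely under-replicated ones.
public class MetaSaveSketch {
    static String report(List<List<String>> queues) {
        int corruptIdx = queues.size() - 1;
        int underReplicated = 0;
        for (int i = 0; i < corruptIdx; i++) {
            underReplicated += queues.get(i).size();
        }
        int missing = queues.get(corruptIdx).size();
        return "Metasave: Blocks waiting for reconstruction: " + underReplicated
            + "\nMetasave: Blocks currently missing: " + missing;
    }
}
```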
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12182) BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks
[ https://issues.apache.org/jira/browse/HDFS-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HDFS-12182: Attachment: HDFS-12182.002.patch Attaching new patch version with the suggested changes. > BlockManager.metaSave does not distinguish between "under replicated" and > "missing" blocks > -- > > Key: HDFS-12182 > URL: https://issues.apache.org/jira/browse/HDFS-12182 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Trivial > Labels: newbie > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-12182.001.patch, HDFS-12182.002.patch > > > Currently, *BlockManager.metaSave* method (which is called by "-metasave" dfs > CLI command) reports both "under replicated" and "missing" blocks under same > metric *Metasave: Blocks waiting for reconstruction:* as shown on below code > snippet: > {noformat} >synchronized (neededReconstruction) { > out.println("Metasave: Blocks waiting for reconstruction: " > + neededReconstruction.size()); > for (Block block : neededReconstruction) { > dumpBlockMeta(block, out); > } > } > {noformat} > *neededReconstruction* is an instance of *LowRedundancyBlocks*, which > actually wraps 5 priority queues currently. 4 of these queues store different > under replicated scenarios, but the 5th one is dedicated for corrupt/missing > blocks. > Thus, metasave report may suggest some corrupt blocks are just under > replicated. This can be misleading for admins and operators trying to track > block missing/corruption issues, and/or other issues related to > *BlockManager* metrics. > I would like to propose a patch with trivial changes that would report > corrupt blocks separately. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block
[ https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashwin Ramesh updated HDFS-12102: - Attachment: HDFS-12102-003.patch > VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt > block > > > Key: HDFS-12102 > URL: https://issues.apache.org/jira/browse/HDFS-12102 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, hdfs >Affects Versions: 2.8.2 >Reporter: Ashwin Ramesh >Priority: Minor > Fix For: 2.8.2 > > Attachments: HDFS-12102-001.patch, HDFS-12102-002.patch, > HDFS-12102-003.patch > > > When the Volume scanner sees a corrupt block, it restarts the scan and scans > the blocks at much faster rate with a negligible scan period. This is so that > it doesn't take 3 weeks to report blocks since a corrupt block means > increased likelihood that there are more corrupt blocks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12192) Ozone: Fix the remaining failure tests for Windows caused by incorrect path generated
[ https://issues.apache.org/jira/browse/HDFS-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098515#comment-16098515 ] Yiqun Lin commented on HDFS-12192: -- The failing tests are not related. > Ozone: Fix the remaining failure tests for Windows caused by incorrect path > generated > - > > Key: HDFS-12192 > URL: https://issues.apache.org/jira/browse/HDFS-12192 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, test >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-12192-HDFS-7240.001.patch > > > Found some unit tests ran failed in Windows, similar to HDFS-11831. Actually, > these are some places missing being updated in HDFS-11831. > One error stack info: > {noformat} > java.nio.file.InvalidPathException: Illegal char <:> at index 2: > /D:/work-project/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine\TestDatanodeStateMachine.id > at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182) > at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153) > at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77) > at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94) > at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255) > at java.nio.file.Paths.get(Paths.java:84) > at > org.apache.hadoop.ozone.container.common.TestDatanodeStateMachine.setUp(TestDatanodeStateMachine.java:108) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12192) Ozone: Fix the remaining failure tests for Windows caused by incorrect path generated
[ https://issues.apache.org/jira/browse/HDFS-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12192: - Description: Found some unit tests ran failed in Windows, similar to HDFS-11831. Actually, these are some places missing being updated in HDFS-11831. One error stack info: {noformat} java.nio.file.InvalidPathException: Illegal char <:> at index 2: /D:/work-project/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine\TestDatanodeStateMachine.id at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182) at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153) at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77) at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94) at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255) at java.nio.file.Paths.get(Paths.java:84) at org.apache.hadoop.ozone.container.common.TestDatanodeStateMachine.setUp(TestDatanodeStateMachine.java:108) {noformat} was:Found some unit tests ran failed in Windows, similar to HDFS-11831. Actually, these are some places missing being updated in HDFS-11831. > Ozone: Fix the remaining failure tests for Windows caused by incorrect path > generated > - > > Key: HDFS-12192 > URL: https://issues.apache.org/jira/browse/HDFS-12192 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, test >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-12192-HDFS-7240.001.patch > > > Found some unit tests ran failed in Windows, similar to HDFS-11831. Actually, > these are some places missing being updated in HDFS-11831. 
> One error stack info: > {noformat} > java.nio.file.InvalidPathException: Illegal char <:> at index 2: > /D:/work-project/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine\TestDatanodeStateMachine.id > at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182) > at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153) > at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77) > at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94) > at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255) > at java.nio.file.Paths.get(Paths.java:84) > at > org.apache.hadoop.ozone.container.common.TestDatanodeStateMachine.setUp(TestDatanodeStateMachine.java:108) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
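The stack trace above shows {{java.nio.file.Paths.get()}} rejecting a path that mixes a URL-style prefix ({{/D:/...}}) with a backslash glued on by string concatenation. A minimal sketch of the portable pattern such fixes typically use — the helper name below is hypothetical, not the actual patch code:

```java
import java.io.File;
import java.nio.file.Path;

// Building test paths through java.io.File keeps separators consistent on
// every platform, avoiding the InvalidPathException ("Illegal char <:>")
// that string concatenation with mixed '/' and '\' produced on Windows.
public class PortableTestPath {
    // Hypothetical helper: resolve a per-test file under a base directory.
    static Path testFile(String baseDir, String testName, String suffix) {
        return new File(new File(baseDir, testName), testName + suffix).toPath();
    }

    public static void main(String[] args) {
        System.out.println(
            testFile("target/test-classes", "TestDatanodeStateMachine", ".id"));
    }
}
```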
[jira] [Commented] (HDFS-12192) Ozone: Fix the remaining failure tests for Windows caused by incorrect path generated
[ https://issues.apache.org/jira/browse/HDFS-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098466#comment-16098466 ] Hadoop QA commented on HDFS-12192: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 19s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 99m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | Timed out junit tests | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12192 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12878610/HDFS-12192-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d82aceb2386a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / c539095 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20397/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20397/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20397/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: Fix the remaining failure tests for Windows caused by incorrect path > generated > - > >
[jira] [Updated] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration
[ https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-11896: Attachment: HDFS-11896-006.patch Hope this should work. Otherwise I should use simulated capacities. Sorry, as I couldn't anticipate the Jenkins data, it was failing earlier. > Non-dfsUsed will be doubled on dead node re-registration > > > Key: HDFS-11896 > URL: https://issues.apache.org/jira/browse/HDFS-11896 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.3 >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Blocker > Labels: release-blocker > Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, > HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, > HDFS-11896-branch-2.7-001.patch, HDFS-11896-branch-2.7-002.patch, > HDFS-11896-branch-2.7-003.patch, HDFS-11896-branch-2.7-004.patch, > HDFS-11896.patch > > > *Scenario:* > i) Make sure you have non-DFS data. > ii) Stop the Datanode. > iii) Wait until it becomes dead. > iv) Now restart and check the non-DFS data. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12192) Ozone: Fix the remaining failure tests for Windows caused by incorrect path generated
[ https://issues.apache.org/jira/browse/HDFS-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12192: - Attachment: HDFS-12192-HDFS-7240.001.patch Attaching the patch. > Ozone: Fix the remaining failure tests for Windows caused by incorrect path > generated > - > > Key: HDFS-12192 > URL: https://issues.apache.org/jira/browse/HDFS-12192 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, test >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-12192-HDFS-7240.001.patch > > > Found some unit tests that fail on Windows, similar to HDFS-11831. These are > some places that were missed in HDFS-11831. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12192) Ozone: Fix the remaining failure tests for Windows caused by incorrect path generated
[ https://issues.apache.org/jira/browse/HDFS-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12192: - Status: Patch Available (was: Open) > Ozone: Fix the remaining failure tests for Windows caused by incorrect path > generated > - > > Key: HDFS-12192 > URL: https://issues.apache.org/jira/browse/HDFS-12192 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, test >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-12192-HDFS-7240.001.patch > > > Found some unit tests that fail on Windows, similar to HDFS-11831. These are > some places that were missed in HDFS-11831. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12192) Ozone: Fix the remaining failure tests for Windows caused by incorrect path generated
Yiqun Lin created HDFS-12192: Summary: Ozone: Fix the remaining failure tests for Windows caused by incorrect path generated Key: HDFS-12192 URL: https://issues.apache.org/jira/browse/HDFS-12192 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone, test Affects Versions: HDFS-7240 Reporter: Yiqun Lin Assignee: Yiqun Lin Found some unit tests that fail on Windows, similar to HDFS-11831. These are some places that were missed in HDFS-11831. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
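The patch itself is not shown in this thread, so the following is only a hypothetical illustration of the class of bug being fixed: Windows-only test failures of this kind typically come from concatenating path components with a hardcoded "/" separator; building paths with java.nio.file.Paths uses the platform separator instead. The class and method names below are illustrative, not from the actual patch.

```java
import java.io.File;
import java.nio.file.Paths;

// Hypothetical sketch only; names are illustrative, not the patch code.
class TestPathBuilder {
    // Breaks on Windows: "/" is not the platform separator there.
    static String brokenTestDir(String base, String child) {
        return base + "/" + child;
    }

    // Portable: Paths.get joins components with File.separator.
    static String portableTestDir(String base, String child) {
        return Paths.get(base, child).toString();
    }
}
```

On Linux both methods happen to agree, which is why such bugs slip past a Linux-only precommit build and only surface when someone runs the tests on Windows.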
[jira] [Commented] (HDFS-12178) Ozone: OzoneClient: Handling SCM container creationFlag at client side
[ https://issues.apache.org/jira/browse/HDFS-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098274#comment-16098274 ] Hadoop QA commented on HDFS-12178: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 11s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile 
{color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}145m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.ozone.scm.node.TestNodeManager | | | hadoop.cblock.TestCBlockReadWrite | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.ozone.container.ozoneimpl.TestRatisManager | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | | | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12178 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12878596/HDFS-12178-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux dae7e0b2471f 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / c539095 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20396/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20396/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output
[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode
[ https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098179#comment-16098179 ] Hadoop QA commented on HDFS-10285: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 23 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 22s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 5s{color} | {color:orange} hadoop-hdfs-project: The patch generated 11 new + 1967 unchanged - 2 fixed = 1978 total (was 1969) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 10s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}104m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-10285 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12877947/HDFS-10285-consolidated-merge-patch-01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc xml | | uname | Linux 534ca945263b 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | |
[jira] [Commented] (HDFS-12178) Ozone: OzoneClient: Handling SCM container creationFlag at client side
[ https://issues.apache.org/jira/browse/HDFS-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098165#comment-16098165 ] Nandakumar commented on HDFS-12178: --- Patch v01 attached. The {{!containersCreated.contains(containerName)}} check is done before {{synchronized (containerName.intern())}}, since {{String#intern()}} is a costly operation. {{!containersCreated.contains(containerName)}} is checked again inside the synchronized block so that no two threads try to create the same container. Benchmarking of String#intern() for reference: [Performance penalty of String.intern() | https://stackoverflow.com/a/10628759/2170141] > Ozone: OzoneClient: Handling SCM container creationFlag at client side > -- > > Key: HDFS-12178 > URL: https://issues.apache.org/jira/browse/HDFS-12178 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Attachments: HDFS-12178-HDFS-7240.000.patch, > HDFS-12178-HDFS-7240.001.patch > > > SCM BlockManager provisions a pool of containers upon a block creation request, but > only one container is returned with the creationFlag to the client. The other > containers provisioned in the same batch will not have this flag. This jira > is to handle that scenario at client side, until HDFS-11888 is fixed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
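The double-checked pattern described in the comment above can be sketched as follows. This is an assumed illustration, not the patch code: {{ContainerCreator}}, {{ensureCreated}}, and {{createContainer}} are hypothetical names, and the {{creations}} counter exists only to make the sketch observable.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the pattern described in the comment; names and
// structure are illustrative, not the actual HDFS-12178 patch.
class ContainerCreator {
    private final Set<String> containersCreated = ConcurrentHashMap.newKeySet();
    int creations = 0; // counts real create calls, for the sketch only

    void ensureCreated(String containerName) {
        // Cheap unsynchronized check first, so the costly intern() and the
        // lock acquisition are skipped entirely for already-known containers.
        if (!containersCreated.contains(containerName)) {
            // Locking on the interned name serializes creators of the same
            // container while different containers can proceed in parallel.
            synchronized (containerName.intern()) {
                // Re-check inside the lock: another thread may have created
                // the container between the two checks.
                if (!containersCreated.contains(containerName)) {
                    createContainer(containerName);
                    containersCreated.add(containerName);
                }
            }
        }
    }

    private void createContainer(String name) {
        creations++; // placeholder for the actual container creation call
    }
}
```

The outer check is only an optimization; correctness comes from repeating the check while holding the lock, which is why it appears twice.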
[jira] [Updated] (HDFS-12178) Ozone: OzoneClient: Handling SCM container creationFlag at client side
[ https://issues.apache.org/jira/browse/HDFS-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12178: -- Attachment: HDFS-12178-HDFS-7240.001.patch > Ozone: OzoneClient: Handling SCM container creationFlag at client side > -- > > Key: HDFS-12178 > URL: https://issues.apache.org/jira/browse/HDFS-12178 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Attachments: HDFS-12178-HDFS-7240.000.patch, > HDFS-12178-HDFS-7240.001.patch > > > SCM BlockManager provisions a pool of containers upon block creation request, > only one container is returned with creationFlag to the client. The other > containers provisioned in the same batch will not have this flag. This jira > is to handle that scenario at client side, until HDFS-11888 is fixed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12145) Ozone: OzoneFileSystem: Ozone & KSM should support "/" delimited key names
[ https://issues.apache.org/jira/browse/HDFS-12145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098079#comment-16098079 ] Hadoop QA commented on HDFS-12145: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 30s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}106m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.scm.TestContainerSQLCli | | | hadoop.ozone.container.ozoneimpl.TestRatisManager | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12145 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12878574/HDFS-12145-HDFS-7240.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 486e5af90fe0 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / c539095 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20394/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20394/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project | | Console output |
[jira] [Commented] (HDFS-12176) dfsadmin shows DFS Used%: NaN% if the cluster has zero block.
[ https://issues.apache.org/jira/browse/HDFS-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098045#comment-16098045 ] Hudson commented on HDFS-12176: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12048 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12048/]) HDFS-12176. dfsadmin shows DFS Used%: NaN% if the cluster has zero (aajisaka: rev 770cc462281518545e3d1c0f8c21cf9ec9673200) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java > dfsadmin shows DFS Used%: NaN% if the cluster has zero block. > - > > Key: HDFS-12176 > URL: https://issues.apache.org/jira/browse/HDFS-12176 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Weiwei Yang >Priority: Trivial > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: HDFS-12176.001.patch > > > This is rather a non-issue, but thought I should file it anyway. > I have a test cluster with just NN fsimage, no DN, no blocks, and dfsadmin > shows: > {noformat} > $ hdfs dfsadmin -report > Configured Capacity: 0 (0 B) > Present Capacity: 0 (0 B) > DFS Remaining: 0 (0 B) > DFS Used: 0 (0 B) > DFS Used%: NaN% > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
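The committed change above edits DFSAdmin.java; the exact fix is not quoted in this thread, so the following is only a sketch of how the reported NaN arises and of the obvious class of guard. In Java float arithmetic, {{0.0f / 0.0f}} evaluates to {{Float.NaN}}, which is what the report printed for a cluster with zero configured capacity.

```java
// Hypothetical sketch; the actual change in DFSAdmin.java may differ.
class UsedPercent {
    static float usedPercent(long used, long capacity) {
        // With zero capacity, used * 100.0f / capacity is 0.0f / 0.0f = NaN,
        // so guard that case and report 0% instead of "NaN%".
        return capacity == 0 ? 0.0f : used * 100.0f / capacity;
    }
}
```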
[jira] [Commented] (HDFS-12176) dfsadmin shows DFS Used%: NaN% if the cluster has zero block.
[ https://issues.apache.org/jira/browse/HDFS-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098026#comment-16098026 ] Weiwei Yang commented on HDFS-12176: Thank you [~ajisakaa]! > dfsadmin shows DFS Used%: NaN% if the cluster has zero block. > - > > Key: HDFS-12176 > URL: https://issues.apache.org/jira/browse/HDFS-12176 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Weiwei Yang >Priority: Trivial > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: HDFS-12176.001.patch > > > This is rather a non-issue, but thought I should file it anyway. > I have a test cluster with just NN fsimage, no DN, no blocks, and dfsadmin > shows: > {noformat} > $ hdfs dfsadmin -report > Configured Capacity: 0 (0 B) > Present Capacity: 0 (0 B) > DFS Remaining: 0 (0 B) > DFS Used: 0 (0 B) > DFS Used%: NaN% > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12176) dfsadmin shows DFS Used%: NaN% if the cluster has zero block.
[ https://issues.apache.org/jira/browse/HDFS-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-12176: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-beta1 2.9.0 Status: Resolved (was: Patch Available) Committed this to trunk and branch-2. Thanks [~cheersyang] for the contribution and thanks [~hanishakoneru] and [~jojochuang] for the review. > dfsadmin shows DFS Used%: NaN% if the cluster has zero block. > - > > Key: HDFS-12176 > URL: https://issues.apache.org/jira/browse/HDFS-12176 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Weiwei Yang >Priority: Trivial > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: HDFS-12176.001.patch > > > This is rather a non-issue, but thought I should file it anyway. > I have a test cluster with just NN fsimage, no DN, no blocks, and dfsadmin > shows: > {noformat} > $ hdfs dfsadmin -report > Configured Capacity: 0 (0 B) > Present Capacity: 0 (0 B) > DFS Remaining: 0 (0 B) > DFS Used: 0 (0 B) > DFS Used%: NaN% > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12176) dfsadmin shows DFS Used%: NaN% if the cluster has zero block.
[ https://issues.apache.org/jira/browse/HDFS-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098021#comment-16098021 ] Akira Ajisaka commented on HDFS-12176: -- +1, checking this in. > dfsadmin shows DFS Used%: NaN% if the cluster has zero block. > - > > Key: HDFS-12176 > URL: https://issues.apache.org/jira/browse/HDFS-12176 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Weiwei Yang >Priority: Trivial > Attachments: HDFS-12176.001.patch > > > This is rather a non-issue, but thought I should file it anyway. > I have a test cluster with just NN fsimage, no DN, no blocks, and dfsadmin > shows: > {noformat} > $ hdfs dfsadmin -report > Configured Capacity: 0 (0 B) > Present Capacity: 0 (0 B) > DFS Remaining: 0 (0 B) > DFS Used: 0 (0 B) > DFS Used%: NaN% > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12145) Ozone: OzoneFileSystem: Ozone & KSM should support "/" delimited key names
[ https://issues.apache.org/jira/browse/HDFS-12145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-12145: - Attachment: HDFS-12145-HDFS-7240.005.patch > Ozone: OzoneFileSystem: Ozone & KSM should support "/" delimited key names > -- > > Key: HDFS-12145 > URL: https://issues.apache.org/jira/browse/HDFS-12145 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Fix For: HDFS-7240 > > Attachments: HDFS-12145-HDFS-7240.001.patch, > HDFS-12145-HDFS-7240.002.patch, HDFS-12145-HDFS-7240.003.patch, > HDFS-12145-HDFS-7240.004.patch, HDFS-12145-HDFS-7240.005.patch > > > With OzoneFileSystem, key names will be delimited by "/" which is used as the > path separator. > Support should be added in KSM and Ozone to support key name with "/" -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org