[jira] [Commented] (HDFS-15223) FSCK fails if one namenode is not available
    [ https://issues.apache.org/jira/browse/HDFS-15223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062298#comment-17062298 ]

Akira Ajisaka commented on HDFS-15223:
--------------------------------------

+1, thanks.

> FSCK fails if one namenode is not available
> -------------------------------------------
>
>                 Key: HDFS-15223
>                 URL: https://issues.apache.org/jira/browse/HDFS-15223
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ayush Saxena
>            Assignee: Ayush Saxena
>            Priority: Major
>         Attachments: HDFS-15223-01.patch, HDFS-15223-02.patch
>
>
> If one namenode is not available, FSCK should try the other namenode,
> ignoring the namenode that is not available.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
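The retry behaviour requested in this issue can be sketched roughly as follows. This is a hypothetical illustration, not the actual DFSck patch: the class and method names are invented, and a `Supplier` stands in for a per-namenode fsck call.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch: run the fsck action against each configured namenode
// in turn and ignore the ones that are unreachable, instead of failing on
// the first connect error.
public class FsckFailoverSketch {

    // Returns the result from the first namenode that answers; rethrows the
    // last failure only if every namenode was unavailable.
    static String runOnFirstAvailable(List<Supplier<String>> namenodes) {
        RuntimeException last = null;
        for (Supplier<String> nn : namenodes) {
            try {
                return nn.get();   // fsck succeeded on this namenode
            } catch (RuntimeException e) {
                last = e;          // namenode not available: try the next one
            }
        }
        throw last;                // all namenodes failed
    }

    public static void main(String[] args) {
        List<Supplier<String>> nns = Arrays.asList(
            () -> { throw new RuntimeException("nn1 is down"); },
            () -> "Status: HEALTHY");
        // The first "namenode" throws; the second one answers.
        System.out.println(runOnFirstAvailable(nns));
    }
}
```

The key design point is that the unavailability of one namenode is only fatal if no namenode at all can serve the request.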
[jira] [Updated] (HDFS-15229) Truncate info should be logged at INFO level
    [ https://issues.apache.org/jira/browse/HDFS-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravuri Sushma sree updated HDFS-15229:
--------------------------------------
    Attachment:  (was: HDFS-15229.001.patch)

> Truncate info should be logged at INFO level
> --------------------------------------------
>
>                 Key: HDFS-15229
>                 URL: https://issues.apache.org/jira/browse/HDFS-15229
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ravuri Sushma sree
>            Assignee: Ravuri Sushma sree
>            Priority: Major
>
> In NN log and audit log, we can't find the truncate size.
> Logs related to Truncate are captured at DEBUG Level and it is important that
> NN should log the newLength of truncate.
[jira] [Updated] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
    [ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fengnan Li updated HDFS-15196:
------------------------------
    Attachment: HDFS-15196.009.patch

> RBF: RouterRpcServer getListing cannot list large dirs correctly
> ----------------------------------------------------------------
>
>                 Key: HDFS-15196
>                 URL: https://issues.apache.org/jira/browse/HDFS-15196
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Fengnan Li
>            Assignee: Fengnan Li
>            Priority: Critical
>         Attachments: HDFS-15196.001.patch, HDFS-15196.002.patch,
>                      HDFS-15196.003.patch, HDFS-15196.003.patch, HDFS-15196.004.patch,
>                      HDFS-15196.005.patch, HDFS-15196.006.patch, HDFS-15196.007.patch,
>                      HDFS-15196.008.patch, HDFS-15196.009.patch
>
>
> In RouterRpcServer, the getListing function is handled in two parts:
> # Union all partial listings from the destination ns + paths
> # Append mount points for the dir to be listed
>
> In the case of a dir larger than DFSConfigKeys.DFS_LIST_LIMIT (default
> value 1k), batch listing is used and startAfter defines the boundary of
> each batch. However, step 2 appends existing mount points, which messes up
> the boundary of the batch, making the next batch's startAfter wrong.
> The fix is to append the mount points only when no more batch queries are
> necessary.
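The fix described in the last paragraph can be illustrated with a toy model of the batching. The class and method names are invented; this is not the RouterRpcServer code, only a sketch of the ordering rule under the stated assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the HDFS-15196 fix: mount points must only be
// appended once the batched listing is exhausted, otherwise they corrupt the
// "startAfter" boundary used to request the next batch.
public class GetListingSketch {

    // Returns one batch: up to 'limit' entries strictly after 'startAfter',
    // with mount points appended only on the final batch.
    static List<String> listBatch(List<String> entries, List<String> mounts,
                                  String startAfter, int limit) {
        List<String> batch = new ArrayList<>();
        for (String e : entries) {            // entries assumed sorted
            if (e.compareTo(startAfter) > 0 && batch.size() < limit) {
                batch.add(e);
            }
        }
        // A full batch signals (in this simplified model) that another batch
        // will follow; only a partial batch is the last one.
        boolean moreRemaining = batch.size() == limit;
        if (!moreRemaining) {
            batch.addAll(mounts);             // safe: startAfter is no longer needed
        }
        return batch;
    }
}
```

With entries {a, b, c}, one mount point m, and a limit of 2, the first batch is [a, b] with no mount point, so startAfter ("b") still names a real remote entry; only the final batch [c, m] carries the mount point.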
[jira] [Commented] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
    [ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062226#comment-17062226 ]

Yang Yun commented on HDFS-15221:
---------------------------------

Thanks [~elgoiri] for the review. Updated the patch to HDFS-15221-009.patch with the following changes:
* Modified the text according to your comment, thanks!
* Removed the lower-case limitation and kept the original value, for example "dfs.datanode.storagetype.ARCHIVE.filesystem".

> Add checking of effective filesystem during initializing storage locations
> --------------------------------------------------------------------------
>
>                 Key: HDFS-15221
>                 URL: https://issues.apache.org/jira/browse/HDFS-15221
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Yang Yun
>            Assignee: Yang Yun
>            Priority: Minor
>         Attachments: HDFS-15221-002.patch, HDFS-15221-003.patch,
>                      HDFS-15221-004.patch, HDFS-15221-005.patch, HDFS-15221-006.patch,
>                      HDFS-15221-007.patch, HDFS-15221-008.patch, HDFS-15221-009.patch,
>                      HDFS-15221.patch
>
>
> We sometimes mount different disks for different storage types as the storage
> location. It's important to check that the volume is mounted correctly before
> initializing storage locations.
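A rough sketch of what such a filesystem check could look like, using the example key above. This is not the patch's code: the class and method names and the mismatch handling are assumed; `java.nio`'s `FileStore.type()` is used here to discover the effective filesystem of a directory.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: if the user configures an expected filesystem for a
// storage type (e.g. via a key like "dfs.datanode.storagetype.ARCHIVE.filesystem"),
// verify the storage location really sits on that filesystem before using it.
public class StorageFsCheckSketch {

    // expectedFs == null/empty means no expectation was configured: accept anything.
    static boolean isOnExpectedFileSystem(Path location, String expectedFs) {
        if (expectedFs == null || expectedFs.isEmpty()) {
            return true;
        }
        try {
            // FileStore.type() reports e.g. "ext4", "xfs", "tmpfs".
            return expectedFs.equals(Files.getFileStore(location).type());
        } catch (IOException e) {
            return false; // cannot determine the filesystem: treat as a mismatch
        }
    }

    public static void main(String[] args) {
        Path p = Paths.get(System.getProperty("java.io.tmpdir"));
        System.out.println(isOnExpectedFileSystem(p, null));
    }
}
```

In the DataNode this check would run while initializing storage locations, rejecting (or warning about) a location whose mounted filesystem does not match the configured value.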
[jira] [Updated] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
    [ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yang Yun updated HDFS-15221:
----------------------------
    Attachment: HDFS-15221-009.patch
        Status: Patch Available  (was: Open)
[jira] [Updated] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
    [ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yang Yun updated HDFS-15221:
----------------------------
    Status: Open  (was: Patch Available)
[jira] [Updated] (HDFS-15229) Truncate info should be logged at INFO level
    [ https://issues.apache.org/jira/browse/HDFS-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravuri Sushma sree updated HDFS-15229:
--------------------------------------
    Attachment: HDFS-15229.001.patch
[jira] [Commented] (HDFS-15223) FSCK fails if one namenode is not available
    [ https://issues.apache.org/jira/browse/HDFS-15223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062060#comment-17062060 ]

Íñigo Goiri commented on HDFS-15223:
------------------------------------

I was waiting for Yetus but I guess it never came.
The changes from [^HDFS-15223-01.patch] to [^HDFS-15223-02.patch] are just style so it should be fine.
+1 on [^HDFS-15223-02.patch].
[jira] [Updated] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
    [ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fengnan Li updated HDFS-15196:
------------------------------
    Attachment:  (was: HDFS-15196.008.patch)
[jira] [Updated] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
    [ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fengnan Li updated HDFS-15196:
------------------------------
    Attachment: HDFS-15196.008.patch
[jira] [Commented] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
    [ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062053#comment-17062053 ]

Fengnan Li commented on HDFS-15196:
-----------------------------------

[~elgoiri] I am not sure it is related since the failure is TestErasureCoding. I have tested several times locally but couldn't replicate the error. [^HDFS-15196.007.patch] doesn't have this error either. Let me re-upload the [^HDFS-15196.008.patch] to trigger another build.
[jira] [Commented] (HDFS-15214) WebHDFS: Add snapshot counts to Content Summary
    [ https://issues.apache.org/jira/browse/HDFS-15214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061982#comment-17061982 ]

hemanthboyina commented on HDFS-15214:
--------------------------------------

[~elgoiri] can you review the [^HDFS-15214.004.patch]

> WebHDFS: Add snapshot counts to Content Summary
> -----------------------------------------------
>
>                 Key: HDFS-15214
>                 URL: https://issues.apache.org/jira/browse/HDFS-15214
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: hemanthboyina
>            Assignee: hemanthboyina
>            Priority: Major
>         Attachments: HDFS-15214.001.patch, HDFS-15214.002.patch,
>                      HDFS-15214.003.patch, HDFS-15214.004.patch
>
[jira] [Resolved] (HDFS-15208) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr in hdfs
    [ https://issues.apache.org/jira/browse/HDFS-15208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HDFS-15208.
------------------------------------
    Fix Version/s: 3.2.2
                   3.1.4
       Resolution: Fixed

> Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr in hdfs
> --------------------------------------------------------------------------
>
>                 Key: HDFS-15208
>                 URL: https://issues.apache.org/jira/browse/HDFS-15208
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Trivial
>             Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> Continuation of HADOOP-15686
> Add the same log4j property to disable error log in hadoop-hdfs.
[jira] [Commented] (HDFS-14919) Provide Non DFS Used per DataNode in DataNode UI
    [ https://issues.apache.org/jira/browse/HDFS-14919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061847#comment-17061847 ]

Hudson commented on HDFS-14919:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18065 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18065/])
HDFS-14919. Provide Non DFS Used per DataNode in DataNode UI. (ayushsaxena: rev 654db35fa2a2bfabd8f844e9ca10ad8bfea859cf)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html

> Provide Non DFS Used per DataNode in DataNode UI
> ------------------------------------------------
>
>                 Key: HDFS-14919
>                 URL: https://issues.apache.org/jira/browse/HDFS-14919
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Lisheng Sun
>            Assignee: Lisheng Sun
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HDFS-14919.001.patch, HDFS-14919.002.patch,
>                      HDFS-14919.003.patch, hadoop2.6_datanode_ui.png, hadoop3.1_datanode_ui.png,
>                      screenshot-1.png
>
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
    [ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061831#comment-17061831 ]

Ayush Saxena commented on HDFS-15154:
-------------------------------------

Thanx [~swagle] for the patch. Couldn't actually check the code, but had a cursory look. Yes, it's better we get [~arp]'s opinion before progressing here.

> Allow only hdfs superusers the ability to assign HDFS storage policies
> ----------------------------------------------------------------------
>
>                 Key: HDFS-15154
>                 URL: https://issues.apache.org/jira/browse/HDFS-15154
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>    Affects Versions: 3.0.0
>            Reporter: Bob Cauthen
>            Assignee: Siddharth Wagle
>            Priority: Major
>         Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch,
>                      HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch,
>                      HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch,
>                      HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch,
>                      HDFS-15154.12.patch, HDFS-15154.13.patch
>
>
> Please provide a way to limit only HDFS superusers the ability to assign HDFS
> Storage Policies to HDFS directories.
> Currently, and based on Jira HDFS-7093, all storage policies can be disabled
> cluster wide by setting the following:
> dfs.storage.policy.enabled to false
> But we need a way to allow only HDFS superusers the ability to assign an HDFS
> Storage Policy to an HDFS directory.
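The gate requested in this issue boils down to a small predicate. The sketch below is hypothetical (the patch's actual configuration key and method names may well differ); it only shows the intended decision logic of combining the existing enable switch with an assumed superuser-only switch.

```java
// Hypothetical decision logic for HDFS-15154 (names invented): storage
// policies may be disabled entirely, or restricted to HDFS superusers via
// an assumed boolean "superuser-only" setting.
public class StoragePolicyGateSketch {

    static boolean canSetStoragePolicy(boolean policiesEnabled,
                                       boolean callerIsSuperUser,
                                       boolean superUserOnly) {
        if (!policiesEnabled) {
            return false; // dfs.storage.policy.enabled=false: nobody may set policies
        }
        // Otherwise, only gate non-superusers when the restriction is on.
        return !superUserOnly || callerIsSuperUser;
    }
}
```

This keeps the existing cluster-wide kill switch intact while adding the finer-grained restriction the reporter asked for.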
[jira] [Commented] (HDFS-15200) Delete Corrupt Replica Immediately Irrespective of Replicas On Stale Storage
    [ https://issues.apache.org/jira/browse/HDFS-15200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061827#comment-17061827 ]

Ayush Saxena commented on HDFS-15200:
-------------------------------------

Thanx everyone. If there are no further comments, I will push this by tomorrow EOD.

> Delete Corrupt Replica Immediately Irrespective of Replicas On Stale Storage
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-15200
>                 URL: https://issues.apache.org/jira/browse/HDFS-15200
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ayush Saxena
>            Assignee: Ayush Saxena
>            Priority: Critical
>         Attachments: HDFS-15200-01.patch, HDFS-15200-02.patch,
>                      HDFS-15200-03.patch, HDFS-15200-04.patch, HDFS-15200-05.patch
>
>
> Presently {{invalidateBlock(..)}}, before adding a replica into invalidates,
> checks whether any block replica is on stale storage; if so, it postpones
> deletion of the replica.
> Here:
> {code:java}
>    // Check how many copies we have of the block
>    if (nr.replicasOnStaleNodes() > 0) {
>      blockLog.debug("BLOCK* invalidateBlocks: postponing " +
>          "invalidation of {} on {} because {} replica(s) are located on " +
>          "nodes with potentially out-of-date block reports", b, dn,
>          nr.replicasOnStaleNodes());
>      postponeBlock(b.getCorrupted());
>      return false;
> {code}
> In the case of a corrupt replica, we can skip this logic and delete the
> corrupt replica immediately, as a corrupt replica can't get corrected.
> One outcome of this behavior presently is namenodes showing different block
> states post failover:
> If a replica is marked corrupt, the Active NN will mark it as corrupt, mark
> it for deletion, and remove it from corruptReplicas and excessRedundancyMap.
> If failover happens before the deletion of the replica, the Standby NameNode
> will mark all the storages as stale.
> Then it will start processing IBRs. Since the replicas would be on stale
> storage, it will skip deletion and removal from corruptReplicas.
> Hence both namenodes will show different numbers and different corrupt
> replicas.
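Condensed to its decision, the change proposed above amounts to the following. This is a hypothetical sketch, not the BlockManager code; only the two inputs of the decision are modeled.

```java
// Hypothetical condensed form of the HDFS-15200 change: a corrupt replica
// can never become valid again, so it is safe to delete it immediately even
// while some replicas sit on storages marked stale after a failover.
public class InvalidateCorruptSketch {

    static boolean shouldPostponeDeletion(int replicasOnStaleNodes,
                                          boolean replicaIsCorrupt) {
        if (replicaIsCorrupt) {
            return false;                    // delete right away: corruption cannot heal
        }
        return replicasOnStaleNodes > 0;     // otherwise wait for fresh block reports
    }
}
```

The stale-storage postpone exists to protect replicas whose state may simply be out of date; a replica already known to be corrupt needs no such protection, which is why skipping the check keeps Active and Standby views consistent.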
[jira] [Commented] (HDFS-15223) FSCK fails if one namenode is not available
    [ https://issues.apache.org/jira/browse/HDFS-15223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061826#comment-17061826 ]

Ayush Saxena commented on HDFS-15223:
-------------------------------------

[~elgoiri] can you help review v2..
[jira] [Commented] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
    [ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061821#comment-17061821 ]

Íñigo Goiri commented on HDFS-15196:
------------------------------------

The failed test in TestRouterRpc looks suspicious.
[jira] [Updated] (HDFS-14919) Provide Non DFS Used per DataNode in DataNode UI
    [ https://issues.apache.org/jira/browse/HDFS-14919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena updated HDFS-14919:
--------------------------------
    Fix Version/s: 3.3.0
     Hadoop Flags: Reviewed
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Committed to trunk.
Thanx [~leosun08] for the contribution and [~elgoiri] for the review!!!
[jira] [Commented] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
    [ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061820#comment-17061820 ]

Íñigo Goiri commented on HDFS-15221:
------------------------------------

Sorry for asking for so many changes but I think we are almost there.
The essence of the text is correct; a few minor grammar/text changes:
{code}
Sometimes, users can setup the DataNode data directory to point to multiple
volumes with different storage types. It is important to check if the volume
is mounted correctly before initializing the storage locations. The user has
the option to enforce the filesystem for a storage key with the following key:
{code}
Regarding the key itself, I think it is confusing to use the lower case; I think it should be something like "dfs.datanode.storagetype.ARCHIVE.filesystem". Otherwise it can be tough to follow.
[jira] [Commented] (HDFS-15208) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr in hdfs
    [ https://issues.apache.org/jira/browse/HDFS-15208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061738#comment-17061738 ]

Hudson commented on HDFS-15208:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18062 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18062/])
HDFS-15208. Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS (github: rev 096533c2dc0afd51367030725d797480a22ba7e2)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties
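For reference, a HADOOP-15686-style suppression in a log4j.properties file would look roughly like the fragment below. The exact logger (package) name is an assumption here and should be taken from the committed patch; only the `log4j.logger.<class>=OFF` mechanism itself is standard log4j configuration.

```properties
# Assumed logger name (verify against the committed patch): silences the
# bogus WADL-generator errors that Jersey prints on stderr.
log4j.logger.com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator=OFF
```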
[jira] [Updated] (HDFS-15208) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr in hdfs
    [ https://issues.apache.org/jira/browse/HDFS-15208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HDFS-15208:
----------------------------------
    Affects Version/s: 3.0.0
[jira] [Updated] (HDFS-15208) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr in hdfs
    [ https://issues.apache.org/jira/browse/HDFS-15208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HDFS-15208:
----------------------------------
    Summary: Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr in hdfs  (was: Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr in hdfs)
[jira] [Updated] (HDFS-15208) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr in hdfs
    [ https://issues.apache.org/jira/browse/HDFS-15208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HDFS-15208:
----------------------------------
    Fix Version/s: 3.3.0
[jira] [Created] (HDFS-15229) Truncate info should be logged at INFO level
Ravuri Sushma sree created HDFS-15229:
-----------------------------------------

             Summary: Truncate info should be logged at INFO level
                 Key: HDFS-15229
                 URL: https://issues.apache.org/jira/browse/HDFS-15229
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Ravuri Sushma sree
            Assignee: Ravuri Sushma sree


In NN log and audit log, we can't find the truncate size.
Logs related to Truncate are captured at DEBUG Level and it is important that
NN should log the newLength of truncate.
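The change being requested is essentially a log-level move plus including newLength in the message. A self-contained sketch follows; java.util.logging is used purely to keep it runnable (the NameNode itself uses SLF4J), and the class, method, and message format are invented for the illustration.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical before/after of the HDFS-15229 proposal: log the truncate
// target length at INFO so it is visible in default NameNode logs.
public class TruncateLogSketch {

    private static final Logger LOG = Logger.getLogger("NameNode");

    static String truncateMessage(String src, long newLength) {
        // Before: the equivalent of LOG.debug(...) hid the size at default level.
        // After: log at INFO so the NN log always records the new length.
        String msg = "truncate: src=" + src + " newLength=" + newLength;
        LOG.log(Level.INFO, msg);
        return msg;
    }
}
```

The same newLength value would also be appended to the audit-log entry for the truncate operation, which is the other gap the description points out.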
[jira] [Commented] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
[ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061674#comment-17061674 ] Hadoop QA commented on HDFS-15221: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 17s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 52s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 7s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}146m 33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 10s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}281m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestFixKerberosTicketOrder | | | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.TestDecommission | | | hadoop.hdfs.TestMultiThreadedHflush | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.mover.TestMover | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | | | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | | | hadoop.hdfs.server.datanode.TestDataNodeMetrics | | | hadoop.hdfs.TestPipelines | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | HDFS-15221 | | JIRA Patch URL |
[jira] [Commented] (HDFS-15228) Cannot rename file with space in name
[ https://issues.apache.org/jira/browse/HDFS-15228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061582#comment-17061582 ] Dylan Werner-Meier commented on HDFS-15228: --- Hello [~hemanthboyina], After some more tests, I've attached a Maven pom.xml that reproduces the bug exactly. For some reason, I see the bug in Hadoop HDFS client 3.1.2 and 3.2.0, but it is not present in 3.2.1. I'll continue investigating, but if the bug is already fixed, I guess this issue can be closed. > Cannot rename file with space in name > - > > Key: HDFS-15228 > URL: https://issues.apache.org/jira/browse/HDFS-15228 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.2.0 > Environment: Oracle jdk1.8.161 >Reporter: Dylan Werner-Meier >Priority: Major > Attachments: TestWithStrangeFilenames.java, pom.xml > > > Hello, > While using webhdfs, I encountered a strange bug: I just cannot rename a > file if it has a space in the filename. > It seems strange to me; is there anything I am missing? > > Edit: After some debugging, it seems to be linked to the way spaces are > encoded in the webhdfs URL: the JDK's URLEncoder uses '+' to encode spaces, > whereas a curl command where the filename is encoded with '%20' for spaces > works just fine.
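The encoding mismatch described in the comment above can be reproduced with a few lines of plain Java (a minimal sketch; the class name is made up, and this only demonstrates the JDK's encoding behavior, not the WebHDFS client itself). URLEncoder implements application/x-www-form-urlencoded, where a space becomes '+', while percent-encoding the same string as a URI path component yields '%20', matching what curl sends.

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URLEncoder;
import java.io.UnsupportedEncodingException;

public class SpaceEncodingDemo {
    public static void main(String[] args)
            throws UnsupportedEncodingException, URISyntaxException {
        String name = "file with space.txt";

        // URLEncoder targets HTML form encoding: spaces become '+',
        // which a server may not decode when it appears in a URL path.
        String formEncoded = URLEncoder.encode(name, "UTF-8");
        System.out.println(formEncoded);   // file+with+space.txt

        // Percent-encoding the same string as a URI path component
        // turns spaces into '%20'.
        String pathEncoded = new URI(null, null, name, null).getRawPath();
        System.out.println(pathEncoded);   // file%20with%20space.txt
    }
}
```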
[jira] [Updated] (HDFS-15228) Cannot rename file with space in name
[ https://issues.apache.org/jira/browse/HDFS-15228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dylan Werner-Meier updated HDFS-15228: -- Attachment: pom.xml > Cannot rename file with space in name > - > > Key: HDFS-15228 > URL: https://issues.apache.org/jira/browse/HDFS-15228 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.2.0 > Environment: Oracle jdk1.8.161 >Reporter: Dylan Werner-Meier >Priority: Major > Attachments: TestWithStrangeFilenames.java, pom.xml > > > Hello, > While using webhdfs, I encountered a strange bug: I just cannot rename a > file if it has a space in the filename. > It seems strange to me; is there anything I am missing? > > Edit: After some debugging, it seems to be linked to the way spaces are > encoded in the webhdfs URL: the JDK's URLEncoder uses '+' to encode spaces, > whereas a curl command where the filename is encoded with '%20' for spaces > works just fine.
[jira] [Updated] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
[ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15221: Attachment: HDFS-15221-008.patch Status: Patch Available (was: Open) > Add checking of effective filesystem during initializing storage locations > -- > > Key: HDFS-15221 > URL: https://issues.apache.org/jira/browse/HDFS-15221 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15221-002.patch, HDFS-15221-003.patch, > HDFS-15221-004.patch, HDFS-15221-005.patch, HDFS-15221-006.patch, > HDFS-15221-007.patch, HDFS-15221-008.patch, HDFS-15221.patch > > > We sometimes mount different disks for different storage types as the storage > location. It's important to check that the volume is mounted correctly before > initializing storage locations.
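The check this issue describes, verifying which filesystem actually backs a configured storage location before using it, can be sketched with the JDK's FileStore API. This is a minimal, hypothetical example; the actual implementation in the attached patches may take a different approach, and the class and method names here are made up.

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class EffectiveFilesystemCheck {
    // Hypothetical check: report the filesystem type backing a location,
    // so a misconfigured or unmounted directory can be detected early,
    // before it is initialized as a storage location.
    static String effectiveFilesystem(Path location) throws IOException {
        FileStore store = Files.getFileStore(location);
        return store.type();  // e.g. "ext4", "xfs", or "tmpfs"
    }

    public static void main(String[] args) throws IOException {
        Path location = Paths.get(System.getProperty("java.io.tmpdir"));
        System.out.println(location + " is backed by "
                + effectiveFilesystem(location));
    }
}
```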
[jira] [Updated] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
[ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15221: Attachment: (was: HDFS-15221-008.patch) > Add checking of effective filesystem during initializing storage locations > -- > > Key: HDFS-15221 > URL: https://issues.apache.org/jira/browse/HDFS-15221 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15221-002.patch, HDFS-15221-003.patch, > HDFS-15221-004.patch, HDFS-15221-005.patch, HDFS-15221-006.patch, > HDFS-15221-007.patch, HDFS-15221.patch > > > We sometimes mount different disks for different storage types as the storage > location. It's important to check that the volume is mounted correctly before > initializing storage locations.
[jira] [Updated] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
[ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15221: Status: Open (was: Patch Available) > Add checking of effective filesystem during initializing storage locations > -- > > Key: HDFS-15221 > URL: https://issues.apache.org/jira/browse/HDFS-15221 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15221-002.patch, HDFS-15221-003.patch, > HDFS-15221-004.patch, HDFS-15221-005.patch, HDFS-15221-006.patch, > HDFS-15221-007.patch, HDFS-15221-008.patch, HDFS-15221.patch > > > We sometimes mount different disks for different storage types as the storage > location. It's important to check that the volume is mounted correctly before > initializing storage locations.
[jira] [Commented] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
[ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061470#comment-17061470 ] Hadoop QA commented on HDFS-15221: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 26s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 9s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 49s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}228m 58s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHDFSAcl | | | hadoop.hdfs.server.namenode.TestReencryption | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.mover.TestMover | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.tools.TestDFSZKFailoverController | | | hadoop.hdfs.TestMiniDFSCluster | | | hadoop.hdfs.TestDecommissionWithBackoffMonitor | | | hadoop.hdfs.server.namenode.TestNestedEncryptionZones | | | hadoop.hdfs.TestAbandonBlock | | | hadoop.hdfs.tools.TestViewFSStoragePolicyCommands | | | hadoop.hdfs.server.namenode.TestFSEditLogLoader | | | hadoop.hdfs.server.namenode.TestRefreshBlockPlacementPolicy | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.TestFileCreation | | | hadoop.hdfs.TestErasureCodingMultipleRacks | | |
[jira] [Commented] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
[ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061467#comment-17061467 ] Hadoop QA commented on HDFS-15221: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 39s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 6s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 12s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 4m 27s{color} | {color:red} The patch generated 1 ASF License warning. {color} | | {color:black}{color} | {color:black} {color} | {color:black}226m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStorageStateRecovery | | | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock | | | hadoop.hdfs.TestBlockMissingException | | | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | | | hadoop.hdfs.TestPread | | | hadoop.hdfs.TestFileChecksumCompositeCrc | | | hadoop.hdfs.server.namenode.ha.TestHAAppend | | | hadoop.hdfs.TestErasureCodingExerciseAPIs | | | hadoop.hdfs.TestParallelUnixDomainRead | | | hadoop.hdfs.TestErasureCodingPolicies | | | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.server.datanode.TestDataNodeLifeline | | | hadoop.hdfs.TestDecommissionWithStriped | | | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy | | | hadoop.hdfs.TestMiniDFSCluster | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | | |
[jira] [Commented] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061448#comment-17061448 ] Hadoop QA commented on HDFS-15196: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 6s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 6s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 41s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | HDFS-15196 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12996983/HDFS-15196.008.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 70cf956912e4 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8d63734 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/28975/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/28975/testReport/ | | Max. process+thread count | 2552 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/28975/console | | Powered by | Apache Yetus 0.8.0
[jira] [Commented] (HDFS-15221) Add checking of effective filesystem during initializing storage locations
[ https://issues.apache.org/jira/browse/HDFS-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061437#comment-17061437 ] Hadoop QA commented on HDFS-15221: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 42s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 22m 14s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 37s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 14s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}248m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.tools.TestHdfsConfigFields | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | HDFS-15221 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12996972/HDFS-15221-007.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux e2a792f9151a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |