[jira] [Commented] (HDFS-15214) WebHDFS: Add snapshot counts to Content Summary
[ https://issues.apache.org/jira/browse/HDFS-15214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063771#comment-17063771 ]

Takanobu Asanuma commented on HDFS-15214:
-----------------------------------------

+1 on [^HDFS-15214.005.patch]. [~elgoiri] Could you review it again?

> WebHDFS: Add snapshot counts to Content Summary
> -----------------------------------------------
>
>                 Key: HDFS-15214
>                 URL: https://issues.apache.org/jira/browse/HDFS-15214
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: hemanthboyina
>            Assignee: hemanthboyina
>            Priority: Major
>         Attachments: HDFS-15214.001.patch, HDFS-15214.002.patch, HDFS-15214.003.patch, HDFS-15214.004.patch, HDFS-15214.005.patch
>

--
This message was sent by Atlassian Jira (v8.3.4#803005)
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063636#comment-17063636 ]

Fengnan Li commented on HDFS-15196:
-----------------------------------

Thanks for the review [~ayushtkn] [~elgoiri]. I have addressed the comments.

As for {{remainingEntries}}: since it indicates entries remaining in the downstream clusters rather than in the routers, adding router entries to it would confuse clients. For example, when all listings from the subclusters are done and the routers append some entries to the total result, making this counter greater than 0, clients would see the non-zero counter and issue an extra listing request to the routers. That request is redundant, since all of the results (from both the namenodes and the routers) have already been returned to the clients.

> RBF: RouterRpcServer getListing cannot list large dirs correctly
> ----------------------------------------------------------------
>
>                 Key: HDFS-15196
>                 URL: https://issues.apache.org/jira/browse/HDFS-15196
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Fengnan Li
>            Assignee: Fengnan Li
>            Priority: Critical
>         Attachments: HDFS-15196.001.patch, HDFS-15196.002.patch, HDFS-15196.003.patch, HDFS-15196.003.patch, HDFS-15196.004.patch, HDFS-15196.005.patch, HDFS-15196.006.patch, HDFS-15196.007.patch, HDFS-15196.008.patch, HDFS-15196.009.patch, HDFS-15196.010.patch
>
> In RouterRpcServer, the getListing function is handled in two parts:
> # Union all partial listings from the destination ns + paths
> # Append mount points for the dir being listed
> For a large dir bigger than DFSConfigKeys.DFS_LIST_LIMIT (default value 1k), batch listing is used and startAfter defines the boundary of each batch. However, step 2 adds existing mount points, which messes up the boundary of the batch and makes the next batch's startAfter wrong.
> The fix is to append the mount points only when no more batch queries are necessary.
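The fix discussed in this issue (append mount points only once the subclusters have no further batches) can be sketched as follows. This is a minimal illustration of the idea, not the actual RouterRpcServer patch; the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified sketch of merging one batch of subcluster listing results
 * with mount-point entries in a federated router. Illustrative only.
 */
public class BatchedListingSketch {

  /**
   * Merge one batch of entries from the subclusters with the mount points.
   * Appending mount points while remainingEntries > 0 would shift the
   * boundary the client uses to compute the next startAfter key, so they
   * are only added once the subclusters report no entries left.
   */
  static List<String> listBatch(List<String> subclusterBatch,
                                List<String> mountPoints,
                                int remainingEntries) {
    List<String> result = new ArrayList<>(subclusterBatch);
    if (remainingEntries == 0) {
      for (String mp : mountPoints) {
        if (!result.contains(mp)) {
          result.add(mp);
        }
      }
    }
    // Listings are returned in sorted order so startAfter stays consistent.
    result.sort(null);
    return result;
  }
}
```

With this shape, intermediate batches contain only subcluster entries, and the mount points appear exactly once, in the final batch.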
[jira] [Updated] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fengnan Li updated HDFS-15196:
------------------------------
    Attachment: HDFS-15196.010.patch
[jira] [Commented] (HDFS-15214) WebHDFS: Add snapshot counts to Content Summary
[ https://issues.apache.org/jira/browse/HDFS-15214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063588#comment-17063588 ]

hemanthboyina commented on HDFS-15214:
--------------------------------------

Thanks for the review, [~elgoiri] [~tasanuma].
{quote}The last two should be "m.get("snapshotDirectoryCount") != null" and "m.get("snapshotSpaceConsumed") != null"?
{quote}
You are correct, my bad. I have corrected it and uploaded a new patch; please review.
[jira] [Updated] (HDFS-15214) WebHDFS: Add snapshot counts to Content Summary
[ https://issues.apache.org/jira/browse/HDFS-15214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

hemanthboyina updated HDFS-15214:
---------------------------------
    Attachment: HDFS-15214.005.patch
[jira] [Commented] (HDFS-15230) Sanity check should not assume key base name can be derived from version name
[ https://issues.apache.org/jira/browse/HDFS-15230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063543#comment-17063543 ]

Wei-Chiu Chuang commented on HDFS-15230:
----------------------------------------

[~msingh] fyi

> Sanity check should not assume key base name can be derived from version name
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-15230
>                 URL: https://issues.apache.org/jira/browse/HDFS-15230
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>
> HDFS-14884 checks whether the encryption info of a file matches the encryption zone key:
> {code}
> if (!KeyProviderCryptoExtension.
>     getBaseName(keyVersionName).equals(zoneKeyName)) {
>   throw new IllegalArgumentException(String.format(
>       "KeyVersion '%s' does not belong to the key '%s'",
>       keyVersionName, zoneKeyName));
> }
> {code}
> This assumes the "base name" can be derived from the key version name, and that the base name is the same as the zone key. However, there is no published definition of what a key version name should be. While the code works for the built-in JKS key provider, it may not work for other kinds of key providers. (Specifically, it breaks Cloudera's KeyTrustee KMS KeyProvider.)
[jira] [Created] (HDFS-15230) Sanity check should not assume key base name can be derived from version name
Wei-Chiu Chuang created HDFS-15230:
--------------------------------------

             Summary: Sanity check should not assume key base name can be derived from version name
                 Key: HDFS-15230
                 URL: https://issues.apache.org/jira/browse/HDFS-15230
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Wei-Chiu Chuang
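The fragility described in this issue can be made concrete with a small sketch. The built-in provider composes version names as "name@version", so a parse like the hypothetical one below can recover the base name; a provider that uses any other scheme (e.g. an opaque version id) defeats it. This is an illustration of the assumption, not Hadoop's actual KeyProvider code.

```java
/**
 * Sketch of the assumption behind the sanity check: that a key version
 * name embeds the base key name as "<baseName>@<version>".
 * Hypothetical simplification, not the real implementation.
 */
public class KeyNameSketch {
  static String getBaseName(String versionName) {
    int i = versionName.lastIndexOf('@');
    if (i < 0) {
      // A provider whose version names do not follow the "name@version"
      // convention breaks this derivation entirely, which is the
      // problem the issue describes.
      throw new IllegalArgumentException(
          "No version delimiter in " + versionName);
    }
    return versionName.substring(0, i);
  }
}
```

Since the format of version names is provider-defined, a check built on this parse is only as portable as the convention itself.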
[jira] [Comment Edited] (HDFS-15214) WebHDFS: Add snapshot counts to Content Summary
[ https://issues.apache.org/jira/browse/HDFS-15214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063511#comment-17063511 ]

Takanobu Asanuma edited comment on HDFS-15214 at 3/20/20, 5:00 PM:
-------------------------------------------------------------------

In [^HDFS-15214.004.patch], there are three "if (m.get("snapshotFileCount") != null)"s. The last two should be "m.get("snapshotDirectoryCount") != null" and "m.get("snapshotSpaceConsumed") != null"?

was (Author: tasanuma0829):
In [^HDFS-15214.004.patch], there are multiple "if (m.get("snapshotFileCount") != null)".
[jira] [Commented] (HDFS-15214) WebHDFS: Add snapshot counts to Content Summary
[ https://issues.apache.org/jira/browse/HDFS-15214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063511#comment-17063511 ]

Takanobu Asanuma commented on HDFS-15214:
-----------------------------------------

In [^HDFS-15214.004.patch], there are multiple "if (m.get("snapshotFileCount") != null)".
[jira] [Commented] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063494#comment-17063494 ]

Ayush Saxena commented on HDFS-15196:
-------------------------------------

Thanks everyone. Seems fine. A minor doubt: do we need to increment the {{remainingEntries}} count too, with the number of mount entries remaining?

{code:java}
child.compareTo(lastName) < 0) ||
{code}

Should this be <=, since the mount entry is supposed to overwrite the entry from the Namenodes?
[jira] [Comment Edited] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063473#comment-17063473 ]

Íñigo Goiri edited comment on HDFS-15196 at 3/20/20, 3:56 PM:
--------------------------------------------------------------

Not sure what's wrong with Yetus... [^HDFS-15196.009.patch] LGTM. A minor comment would be to do this in MockResolver:
{code}
// a simplified version of the MountTableResolver implementation
for (String key : this.locations.keySet()) {
  if (key.startsWith(path)) {
    String child = key.substring(path.length());
    if (child.length() > 0) {
      // only take children, so remove the parent path and "/"
      mounts.add(key.substring(path.length() + 1));
    }
  }
}
if (mounts.isEmpty()) {
  mounts = null;
}
{code}
Which preserves part of the original code.

[~ayushtkn] anything else on your side?
[jira] [Commented] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063473#comment-17063473 ]

Íñigo Goiri commented on HDFS-15196:
------------------------------------

Not sure what's wrong with Yetus... [^HDFS-15196.009.patch] LGTM. A minor comment would be to do this in MockResolver:
{code}
// a simplified version of the MountTableResolver implementation
for (String key : this.locations.keySet()) {
  if (key.startsWith(path)) {
    String child = key.substring(path.length());
    if (child.length() > 0) {
      // only take children, so remove the parent path and "/"
      mounts.add(key.substring(path.length() + 1));
    }
  }
}
if (mounts.isEmpty()) {
  mounts = null;
}
{code}
Which preserves part of the original code.
[jira] [Commented] (HDFS-15214) WebHDFS: Add snapshot counts to Content Summary
[ https://issues.apache.org/jira/browse/HDFS-15214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063469#comment-17063469 ]

Íñigo Goiri commented on HDFS-15214:
------------------------------------

[~tasanuma], isn't that what's in [^HDFS-15214.004.patch]?
{code}
if (m.get("snapshotLength") != null) {
  long snapshotLength = ((Number) m.get("snapshotLength")).longValue();
  builder.snapshotLength(snapshotLength);
}
if (m.get("snapshotFileCount") != null) {
  long snapshotFileCount =
      ((Number) m.get("snapshotFileCount")).longValue();
  builder.snapshotFileCount(snapshotFileCount);
}
if (m.get("snapshotFileCount") != null) {
  long snapshotDirectoryCount =
      ((Number) m.get("snapshotDirectoryCount")).longValue();
  builder.snapshotDirectoryCount(snapshotDirectoryCount);
}
if (m.get("snapshotFileCount") != null) {
  long snapshotSpaceConsumed =
      ((Number) m.get("snapshotSpaceConsumed")).longValue();
  builder.snapshotSpaceConsumed(snapshotSpaceConsumed);
}
{code}
[jira] [Assigned] (HDFS-15224) Add snapshot counts to content summary in https
[ https://issues.apache.org/jira/browse/HDFS-15224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri reassigned HDFS-15224:
----------------------------------
    Assignee: Quan Li

> Add snapshot counts to content summary in https
> -----------------------------------------------
>
>                 Key: HDFS-15224
>                 URL: https://issues.apache.org/jira/browse/HDFS-15224
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Quan Li
>            Assignee: Quan Li
>            Priority: Major
>
[jira] [Commented] (HDFS-15224) Add snapshot counts to content summary in https
[ https://issues.apache.org/jira/browse/HDFS-15224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063413#comment-17063413 ]

Quan Li commented on HDFS-15224:
--------------------------------

Yes, it only does it for WebHDFS; REST calls can go through HttpFS as well as WebHDFS.
[jira] [Commented] (HDFS-15180) DataNode FsDatasetImpl Fine-Grained Locking via BlockPool.
[ https://issues.apache.org/jira/browse/HDFS-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063403#comment-17063403 ]

Xiaoqiao He commented on HDFS-15180:
------------------------------------

Trying to trigger Jenkins.

> DataNode FsDatasetImpl Fine-Grained Locking via BlockPool.
> ----------------------------------------------------------
>
>                 Key: HDFS-15180
>                 URL: https://issues.apache.org/jira/browse/HDFS-15180
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 3.2.0
>            Reporter: zhuqi
>            Assignee: Aiphago
>            Priority: Major
>         Attachments: HDFS-15180.001.patch, HDFS-15180.002.patch, image-2020-03-10-17-22-57-391.png, image-2020-03-10-17-31-58-830.png, image-2020-03-10-17-34-26-368.png
>
> The FsDatasetImpl datasetLock is heavy when there are many namespaces in a big cluster. We could split the FsDatasetImpl datasetLock per block pool.
[jira] [Updated] (HDFS-15180) DataNode FsDatasetImpl Fine-Grained Locking via BlockPool.
[ https://issues.apache.org/jira/browse/HDFS-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoqiao He updated HDFS-15180:
-------------------------------
    Status: Patch Available  (was: Open)
[jira] [Commented] (HDFS-15214) WebHDFS: Add snapshot counts to Content Summary
[ https://issues.apache.org/jira/browse/HDFS-15214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063400#comment-17063400 ]

Takanobu Asanuma commented on HDFS-15214:
-----------------------------------------

Thanks for the patch, [~hemanthboyina]. Should the if-clauses be fixed as follows?
{code:java}
if (m.get("snapshotLength") != null) {
  ...
}
if (m.get("snapshotFileCount") != null) {
  ...
}
if (m.get("snapshotDirectoryCount") != null) {
  ...
}
if (m.get("snapshotSpaceConsumed") != null) {
  ...
}
{code}
[jira] [Commented] (HDFS-15180) DataNode FsDatasetImpl Fine-Grained Locking via BlockPool.
[ https://issues.apache.org/jira/browse/HDFS-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063204#comment-17063204 ]

Aiphago commented on HDFS-15180:
--------------------------------

Hi [~zhuqi], thanks for the valuable suggestions. I changed the lock style to use try() without finally{}, and changed transferReplicaForPipelineRecovery to use a read lock. Waiting for the UT result. [^HDFS-15180.002.patch]
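The "try() without finally{}" style mentioned above refers to try-with-resources over an AutoCloseable lock wrapper. A minimal sketch of per-block-pool locking along those lines follows; the class and method names are hypothetical, not the actual HDFS-15180 patch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Sketch of fine-grained per-block-pool locking; illustrative only. */
public class BlockPoolLockManager {

  /** AutoCloseable wrapper so callers can use try-with-resources. */
  public static class AutoLock implements AutoCloseable {
    private final Lock lock;
    AutoLock(Lock lock) {
      this.lock = lock;
      lock.lock();
    }
    @Override
    public void close() {
      lock.unlock(); // released when the try block exits, no finally needed
    }
  }

  // One read-write lock per block pool instead of a single dataset lock.
  private final Map<String, ReentrantReadWriteLock> locks =
      new ConcurrentHashMap<>();

  private ReentrantReadWriteLock lockFor(String bpid) {
    return locks.computeIfAbsent(bpid, k -> new ReentrantReadWriteLock());
  }

  public AutoLock readLock(String bpid) {
    return new AutoLock(lockFor(bpid).readLock());
  }

  public AutoLock writeLock(String bpid) {
    return new AutoLock(lockFor(bpid).writeLock());
  }
}
```

A caller would then write `try (AutoLock l = mgr.readLock(bpid)) { ... }`, so operations on different block pools no longer contend on one global lock, and read-mostly paths can proceed concurrently under the read lock.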
[jira] [Updated] (HDFS-15180) DataNode FsDatasetImpl Fine-Grained Locking via BlockPool.
[ https://issues.apache.org/jira/browse/HDFS-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aiphago updated HDFS-15180:
---------------------------
    Attachment: HDFS-15180.002.patch