[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17066889#comment-17066889 ] Siddharth Wagle commented on HDFS-15154:

Thanks [~ayushtkn] for actually going through every iteration of this, much appreciated. Can you commit this for me? Thanks.

> Allow only hdfs superusers the ability to assign HDFS storage policies
> --
>
> Key: HDFS-15154
> URL: https://issues.apache.org/jira/browse/HDFS-15154
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs
> Affects Versions: 3.0.0
> Reporter: Bob Cauthen
> Assignee: Siddharth Wagle
> Priority: Major
> Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, HDFS-15154.12.patch, HDFS-15154.13.patch, HDFS-15154.14.patch, HDFS-15154.15.patch
>
> Please provide a way to limit only HDFS superusers the ability to assign HDFS Storage Policies to HDFS directories.
> Currently, and based on Jira HDFS-7093, all storage policies can be disabled cluster wide by setting the following: dfs.storage.policy.enabled to false
> But we need a way to allow only HDFS superusers the ability to assign an HDFS Storage Policy to an HDFS directory.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17066391#comment-17066391 ] Siddharth Wagle commented on HDFS-15154:

Thanks [~arp] for your feedback. [~ayushtkn], I have updated to version 15 with a minor change: most of the CaseUtils calls are replaced with string literals for the operation names.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:

Attachment: HDFS-15154.15.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064058#comment-17064058 ] Siddharth Wagle commented on HDFS-15154:

Thanks [~ayushtkn] for the review. Patch 14 incorporates all of the above comments from [~ayushtkn]. One additional change: I used CaseUtils.toCamelCase(); unfortunately, there is no reverse of it, which would have been ideal. I think it is more readable, but I can change this back to defining string literals if reviewers feel otherwise.
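For reference, a toy stand-in for what CaseUtils.toCamelCase (from Apache Commons Text) does to a space-delimited operation name; the helper and the sample phrase here are illustrative, not the actual patch code:

```java
// Toy stand-in for org.apache.commons.text.CaseUtils.toCamelCase, showing
// how a space-delimited operation name becomes a camelCase identifier.
// As noted in the comment above, there is no built-in reverse of this.
final class OperationNames {

    public static String toCamelCase(String phrase) {
        StringBuilder out = new StringBuilder();
        boolean upperNext = false;
        for (char c : phrase.toCharArray()) {
            if (c == ' ') {
                upperNext = true;                      // word boundary
            } else if (out.length() == 0) {
                out.append(Character.toLowerCase(c));  // first word stays lower
                upperNext = false;
            } else {
                out.append(upperNext ? Character.toUpperCase(c)
                                     : Character.toLowerCase(c));
                upperNext = false;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // e.g. an operation name as it might appear in an audit log
        System.out.println(toCamelCase("set storage policy")); // setStoragePolicy
    }
}
```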
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:

Attachment: HDFS-15154.14.patch
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060685#comment-17060685 ] Siddharth Wagle edited comment on HDFS-15154 at 3/17/20, 7:36 AM:

Thanks [~ayushtkn] and [~hanishakoneru] for reviewing this patch. I was expecting this to be a straightforward change, but I ended up trying to figure out how to handle deprecation properly with a config type change, and I think there is no real need for the type change versus adding a simple flag. In version 13, I went back several versions in my patch and re-did it to add a flag and simply keep all of the existing tests the same. A few points:
- _checkSuperuserPrivilege_ already logs the audit event, so I did not move the check into the try { } block
- _checkStoragePolicyEnabled_ gets title case rather than camel case; otherwise I would have to modify existing tests to accept camel case in the exception text, which looks ugly anyway
- moved the check to FSNamesystem as before and removed the late check from FSDirAttrOp

Hopefully this is close to the final version.
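The gating described in the comment above can be sketched as follows. Class, method, and message names are illustrative, not the actual FSNamesystem code, and the superuser-only flag is the one the patch adds:

```java
// Sketch of the two checks discussed above: storage policies must be
// enabled cluster-wide, and when the superuser-only flag is set, only a
// superuser may assign a policy. In FSNamesystem the second check would be
// checkSuperuserPrivilege(), which already records the audit event.
final class StoragePolicyGate {

    public static void checkCanSetStoragePolicy(boolean policyEnabled,
                                                boolean superuserOnly,
                                                boolean callerIsSuperuser) {
        if (!policyEnabled) {
            // existing behavior when dfs.storage.policy.enabled is false
            throw new IllegalStateException(
                "Failed to set storage policy: storage policies are disabled.");
        }
        if (superuserOnly && !callerIsSuperuser) {
            throw new SecurityException(
                "Access denied: only a superuser may set storage policies.");
        }
    }

    public static void main(String[] args) {
        checkCanSetStoragePolicy(true, true, true);   // superuser: allowed
        try {
            checkCanSetStoragePolicy(true, true, false);
        } catch (SecurityException expected) {
            System.out.println("non-superuser rejected");
        }
    }
}
```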
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:

Attachment: (was: HDFS-15154.13.patch)
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:

Attachment: HDFS-15154.13.patch
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060600#comment-17060600 ] Siddharth Wagle edited comment on HDFS-15154 at 3/17/20, 3:50 AM:

We could fall back to not having the deprecation and instead add a new boolean-valued config that indicates whether superuser-only is enforced. Any comment, [~arp], since the change to an enum was a suggestion from you?
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17059775#comment-17059775 ] Siddharth Wagle commented on HDFS-15154:

So the old configuration will work if the new configuration is not present. When both the old and the new config are present, the new one takes its default value and the old one is ignored. In a few tests that explicitly disable the policy, this results in failure unless we delete the new config, as we do in TestStoragePolicyPermissionSettings#testStoragePolicyConfigDeprecation.
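The precedence just described can be modeled with a tiny resolver. This is a toy stand-in for Hadoop's Configuration lookup in the earlier (pre-v13) design where a new key replaced the deprecated one; the new key's name here is assumed, and the resolver itself is illustrative:

```java
import java.util.Map;

// Toy model of the lookup order described above: the new key wins whenever
// it is present (which it effectively always is once hdfs-default.xml
// defines it), and the deprecated key is consulted only when the new key
// is truly absent.
final class StoragePolicyConf {
    public static final String DEPRECATED_KEY = "dfs.storage.policy.enabled";
    public static final String NEW_KEY =
        "dfs.storage.policy.superuser-only";  // hypothetical name

    public static String resolve(Map<String, String> conf) {
        String v = conf.get(NEW_KEY);
        if (v != null) {
            return v;                     // new key present: deprecated key ignored
        }
        return conf.get(DEPRECATED_KEY);  // fall back to the deprecated key
    }
}
```

This is why the test deletes the new config first: only then does the deprecated key take effect.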
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17059764#comment-17059764 ] Siddharth Wagle edited comment on HDFS-15154 at 3/15/20, 6:49 PM:

[~ayushtkn] The problem with the tests is that hdfs-default has both the deprecated config and the new one, so the new one is respected if nothing is modified for those tests, and the default for the new config trumps the deprecated one. That is why I explicitly added a test that verifies the deprecated config works when the new one is not present. Note: We cannot use the deprecated-key context here, which would replace the new config with the old value, because of the type change; otherwise things would have worked without any change. Agree with the other refactor suggestion; will update the patch accordingly.
> Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
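[Editorial note] The type-change constraint mentioned in the comment above can be sketched outside Hadoop: the deprecated key (`dfs.storage.policy.enabled`) held a boolean, while its replacement holds a different type, so automatic key deprecation (which substitutes the old *value* under the new key) would yield a value of the wrong type. A minimal hand-rolled fallback, using a plain map instead of Hadoop's `Configuration` class and a hypothetical new key name, might look like:

```java
import java.util.Map;

public class PolicyConfigFallback {
    // The deprecated key is real; the new key name below is illustrative only.
    static final String OLD_KEY = "dfs.storage.policy.enabled";
    static final String NEW_KEY = "dfs.storage.policy.mode"; // hypothetical string-valued replacement

    /**
     * Resolve whether storage policies are effectively enabled.
     * The new key wins when present; otherwise fall back to the deprecated
     * boolean. Automatic deprecation handling cannot be used because the
     * old value's type differs from the new key's expected type.
     */
    static boolean storagePolicyEnabled(Map<String, String> conf) {
        String newVal = conf.get(NEW_KEY);
        if (newVal != null) {
            // New config present: any mode other than "disabled" keeps policies on.
            return !"disabled".equals(newVal);
        }
        String oldVal = conf.get(OLD_KEY);
        // Default in hdfs-default.xml is true.
        return oldVal == null || Boolean.parseBoolean(oldVal);
    }
}
```

The fallback only consults the deprecated key when the new one is absent, which is exactly the case the explicit test covers.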
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17059764#comment-17059764 ] Siddharth Wagle commented on HDFS-15154:
[~ayushtkn] The problem with tests is that hdfs-default has both deprecated config and new one, so the new one is respected if nothing is modified for those tests and the default for the new config trumps the deprecated one. That is why I explicitly added a test that tests the deprecated config works when the new one is not present. Agree with other refactor suggestion, will update the patch accordingly.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17059478#comment-17059478 ] Siddharth Wagle commented on HDFS-15154:
Rebased and re-uploaded v12
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:
---
Attachment: (was: HDFS-15154.12.patch)
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:
---
Attachment: HDFS-15154.12.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17059443#comment-17059443 ] Siddharth Wagle commented on HDFS-15154:
Thanks [~ayushtkn] for the suggestion. Changes in version 12:
- Moved all checks to FSNamesystem before the writeLock is taken for set, unset and satisfy
- Removed the config check from FSDirectory so the config is loaded only once
- Removed the check from the package-private method(s) in FSDirAttrOp since FSNamesystem is the only caller
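[Editorial note] The reordering described in the comment above (configuration and authorization checks hoisted out of the namesystem lock) follows a common NameNode pattern: fail cheap preconditions before taking the exclusive lock. A minimal sketch, with a stand-in `ReentrantReadWriteLock` and a boolean caller flag instead of the real FSNamesystem/FSPermissionChecker APIs, could be:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SetStoragePolicySketch {
    private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
    private final boolean storagePolicyEnabled;
    private final boolean superUserRequired;

    public SetStoragePolicySketch(boolean storagePolicyEnabled, boolean superUserRequired) {
        this.storagePolicyEnabled = storagePolicyEnabled;
        this.superUserRequired = superUserRequired;
    }

    /** Exception types are illustrative; HDFS itself throws IOException subclasses. */
    public void setStoragePolicy(String src, String policy, boolean callerIsSuperUser) {
        // Cheap checks first, before the expensive exclusive lock:
        if (!storagePolicyEnabled) {
            throw new UnsupportedOperationException(
                "Failed to set storage policy as storage policies are disabled.");
        }
        if (superUserRequired && !callerIsSuperUser) {
            throw new SecurityException("Only superusers may assign storage policies.");
        }
        fsLock.writeLock().lock();
        try {
            // ... mutate the namespace: resolve src, attach the policy id to the inode ...
        } finally {
            fsLock.writeLock().unlock();
        }
    }
}
```

Rejecting the call before locking keeps misconfigured or unauthorized requests from ever contending on the namesystem write lock.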
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058203#comment-17058203 ] Siddharth Wagle commented on HDFS-15154:
Test failures are unrelated.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058073#comment-17058073 ] Siddharth Wagle commented on HDFS-15154:
11 => 10 + fixed a checkstyle warning and a UT failing due to the exception text.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:
---
Attachment: HDFS-15154.11.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17057597#comment-17057597 ] Siddharth Wagle commented on HDFS-15154:
10 => Updated the patch with the changes suggested by [~hanishakoneru]; changed the exception message to be simpler since we already print a deprecation warning, and updated hdfs-default.xml.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:
---
Attachment: HDFS-15154.10.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17057501#comment-17057501 ] Siddharth Wagle commented on HDFS-15154:
Since we are already logging the deprecation, can we just change the warning to this:
{noformat}
Failed to change storage policy satisfier as storage policies have been disabled.
{noformat}
rather than the cryptic message?
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17057494#comment-17057494 ] Siddharth Wagle commented on HDFS-15154:
Thanks for the review [~hanishakoneru], I will make those changes
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17047016#comment-17047016 ] Siddharth Wagle commented on HDFS-15154:
09 => rebased 08.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:
---
Attachment: HDFS-15154.09.patch
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:
---
Attachment: HDFS-15154.08.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17047005#comment-17047005 ] Siddharth Wagle commented on HDFS-15154:
08 => Explicitly verified that the deprecated config still takes effect in the absence of the new config.
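[Editorial note] The pitfall driving this explicit verification (both keys carrying defaults in hdfs-default.xml, so the new key's default can shadow a user-set deprecated key unless the resolver checks the user layer first) can be reproduced with a toy two-layer config. The old key name is real; the new key name and the resolver itself are hypothetical:

```java
import java.util.Map;

public class DeprecatedKeyPrecedence {
    static final String OLD_KEY = "dfs.storage.policy.enabled";
    static final String NEW_KEY = "dfs.storage.policy.mode"; // illustrative replacement key

    /**
     * User-set values shadow defaults; within each layer the new key wins.
     * Checking the user layer for BOTH keys before consulting defaults is
     * what lets a user-set deprecated key beat the new key's default.
     */
    static String resolve(Map<String, String> defaults, Map<String, String> userSet) {
        if (userSet.containsKey(NEW_KEY)) return userSet.get(NEW_KEY);
        if (userSet.containsKey(OLD_KEY)) return userSet.get(OLD_KEY);
        if (defaults.containsKey(NEW_KEY)) return defaults.get(NEW_KEY);
        return defaults.get(OLD_KEY);
    }
}
```

With no user overrides the new key's default applies; with only the deprecated key set by the user, its value is honored, which is the behavior the added test pins down.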
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17045964#comment-17045964 ] Siddharth Wagle commented on HDFS-15154:
Hi [~ayushtkn]/[~arp], what do you think about changes in the latest patch? Any concerns?
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042340#comment-17042340 ] Siddharth Wagle commented on HDFS-15154:
07 => 06 + checkstyle fixed.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154:
---
Attachment: HDFS-15154.07.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042192#comment-17042192 ] Siddharth Wagle commented on HDFS-15154:
06 => Instead of using a DeprecationContext, handled the deprecation in the DFSUtil call that reads the setting.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.06.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042178#comment-17042178 ] Siddharth Wagle commented on HDFS-15154: Actually I will make the change to make sure we don't break compat. Updating patch shortly.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042158#comment-17042158 ] Siddharth Wagle edited comment on HDFS-15154 at 2/21/20 9:10 PM: - I did not find other examples of config *type* changing, the deprecation handling does work cleanly only if the type remains the same. Do you think we should go back to 2 booleans? [~arp] comment? Else just handle both the configs everywhere and respect new if exists over old? Ugly but no better option. was (Author: swagle): I did not find other examples of config *type* changing, the depreciation handling does work cleanly only if the type remains the same. Do you think we should go back to 2 booleans? [~arp] comment? Else just handle both the configs everywhere and respect new if exists over old? Ugly but no better option.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042158#comment-17042158 ] Siddharth Wagle edited comment on HDFS-15154 at 2/21/20 8:37 PM: - I did not find other examples of config *type* changing, the deprecation handling does work cleanly only if the type remains the same. Do you think we should go back to 2 booleans? [~arp] comment? Else just handle both the configs everywhere and respect new if exists over old? Ugly but no better option. was (Author: swagle): I did not find other examples of config *type* changing, the depreciation handling does work cleanly only if the type remains the same. Do you think we should go back to 2 booleans? [~arp] comment?
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042158#comment-17042158 ] Siddharth Wagle commented on HDFS-15154: I did not find other examples of config *type* changing, the deprecation handling does work cleanly only if the type remains the same. Do you think we should go back to 2 booleans? [~arp] comment?
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042127#comment-17042127 ] Siddharth Wagle edited comment on HDFS-15154 at 2/21/20 7:33 PM: - Hi [~ayushtkn], thanks for the suggestion, I was just about to upload a new version :-) So, Configuration has a DeprecationContext and it handles deprecation by making sure that the new config key gets the deprecated config's value after the Deprecation is added to the context, and the new key is not in the configs. HdfsConfiguration does this by statically loading up a bunch of keys into the DeprecationContext; we also get a log warning, etc. as a result of this. I wanted to make use of this in the new patch. What I would have liked to do is actually override valueOf in the _enum_ to make this even cleaner and more readable, but unfortunately Java does not allow it. In the new patch I am verifying that the deprecated key works correctly as well, do let me know what you think; I wanted to avoid special handling at the call site for the deprecation. was (Author: swagle): Hi [~ayushtkn], thanks for the suggestion, I was just about to upload a new version :-) So, Configuration has a DeprecationContext and it handles deprecation by making sure that new config key gets the deprecated config's value after the Deprecation is added to the context, HdfsConfiguration does this by statically loading up a bunch of keys into the DeprecationContext, we also get a log warning, etc as a result of this. I wanted to make use of this in the new patch. What I would have liked to do is actually override valueOf in the _enum_ to make this even more cleaner and readable but unfortunately Java does not allow it. In the new patch I am verifying that deprecated key work correctly as well, do let me know what you think, I wanted to avoid special handling at the call site for the deprecation.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042127#comment-17042127 ] Siddharth Wagle commented on HDFS-15154: Hi [~ayushtkn], thanks for the suggestion, I was just about to upload a new version :-) So, Configuration has a DeprecationContext and it handles deprecation by making sure that the new config key gets the deprecated config's value after the Deprecation is added to the context; HdfsConfiguration does this by statically loading up a bunch of keys into the DeprecationContext, and we also get a log warning, etc. as a result of this. I wanted to make use of this in the new patch. What I would have liked to do is actually override valueOf in the _enum_ to make this even cleaner and more readable, but unfortunately Java does not allow it. In the new patch I am verifying that the deprecated key works correctly as well, do let me know what you think; I wanted to avoid special handling at the call site for the deprecation.
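The DeprecationContext mechanism described in the comment above boils down to: the new key wins if present; otherwise the deprecated key's value is used, with a warning logged. The sketch below illustrates only that fallback semantics, using plain `java.util.Properties` rather than Hadoop's `Configuration`; the new key name `dfs.storage.policy.mode` is a placeholder for illustration, not necessarily the key introduced by this patch (only `dfs.storage.policy.enabled` is named in the issue).

```java
import java.util.Properties;

public class DeprecatedKeyFallback {
  // Real key from HDFS-7093; the replacement name is hypothetical.
  static final String OLD_KEY = "dfs.storage.policy.enabled";
  static final String NEW_KEY = "dfs.storage.policy.mode";

  /** New key wins; else fall back to the deprecated key; else the default. */
  static String getWithFallback(Properties conf, String newKey, String oldKey,
                                String defaultValue) {
    String v = conf.getProperty(newKey);
    if (v != null) {
      return v; // new key is set: the deprecated key is ignored
    }
    v = conf.getProperty(oldKey);
    if (v != null) {
      // Mirrors the log warning Configuration emits for deprecated keys.
      System.err.println("WARN: " + oldKey + " is deprecated; use " + newKey);
      return v; // deprecated key honored when the new key is absent
    }
    return defaultValue;
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    conf.setProperty(OLD_KEY, "false");
    // Only the deprecated key is set, so its value is used.
    System.out.println(getWithFallback(conf, NEW_KEY, OLD_KEY, "true")); // prints "false"
  }
}
```

This is the "respect new if exists over old" behavior debated in the thread, just pulled out of Hadoop so the fallback order is explicit.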
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.05.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040323#comment-17040323 ] Siddharth Wagle commented on HDFS-15154: I realized that the deprecated key is now ignored, trying to figure out a clean way to handle deprecation.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040323#comment-17040323 ] Siddharth Wagle edited comment on HDFS-15154 at 2/19/20 6:38 PM: - I realized that the deprecated key is now ignored, trying to figure out a clean way to handle deprecation. was (Author: swagle): I realized that the deprecated key is now ignored, trying to figure out a clean way to hand deprecation.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17039720#comment-17039720 ] Siddharth Wagle edited comment on HDFS-15154 at 2/19/20 8:29 AM: - Looking into the test failures, most of them seem to fail due to OOM, retriggering. 04 => checkstyle fixes. was (Author: swagle): Looking into the test failures, most of them seem to fail due to OOM, retriggering.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.04.patch
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17039720#comment-17039720 ] Siddharth Wagle edited comment on HDFS-15154 at 2/19/20 6:03 AM: - Looking into the test failures, most of them seem to fail due to OOM, retriggering. was (Author: swagle): Looking into the test failures, will update the patch shortly.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17039720#comment-17039720 ] Siddharth Wagle commented on HDFS-15154: Looking into the test failure; will update the patch shortly.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17039720#comment-17039720 ] Siddharth Wagle edited comment on HDFS-15154 at 2/19/20 5:59 AM: - Looking into the test failures, will update the patch shortly. was (Author: swagle): Looking into the test failure will update the patch shortly.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17039645#comment-17039645 ] Siddharth Wagle edited comment on HDFS-15154 at 2/19/20 2:15 AM: - Patch version 03: Addressed review comments. Added a new test instead of modifying existing ones and verified all scenarios without starting a MiniDFS for each test case. was (Author: swagle): Patch version 03: Addressed review comments. Added a new test case instead of modifying existing ones and verified all scenarios without starting a MiniDFS for each test case.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17039645#comment-17039645 ] Siddharth Wagle commented on HDFS-15154: Patch version 03: Addressed review comments. Added a new test case instead of modifying existing ones and verified all scenarios without starting a MiniDFS for each test case.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.03.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17033377#comment-17033377 ] Siddharth Wagle commented on HDFS-15154: Thanks for the reviews [~ayushtkn] and [~arp], I will make the suggested changes, just two points to clarify: - If not superuser, we do the regular permission check; hence I made the UT changes in the setup instead of method-local, so that the default path is verified by all the tests in TestHdfsAdmin. - Regarding [~arp]'s point, is "supergroup" synonymous with ADMINISTRATORS?
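The gating discussed above can be summarized as: when policy assignment is restricted to superusers, only an HDFS superuser may set a storage policy; otherwise the regular permission check applies (and when the feature is disabled, nobody may). A minimal illustrative sketch of that decision, assuming a hypothetical `Mode` enum per the enum-vs-two-booleans discussion in this thread; this is not the actual FSDirAttrOp/FSPermissionChecker code:

```java
public class StoragePolicyGate {

  // Hypothetical mode enum; the thread debates an enum vs. two booleans.
  enum Mode { DISABLED, ENABLED, SUPERUSER_ONLY }

  /** Decide whether a caller may assign a storage policy. */
  static boolean maySetStoragePolicy(Mode mode, boolean isSuperUser,
                                     boolean hasWritePermission) {
    switch (mode) {
      case DISABLED:
        return false;            // feature switched off cluster-wide (HDFS-7093)
      case SUPERUSER_ONLY:
        return isSuperUser;      // only hdfs superusers may assign policies
      default:                   // ENABLED: the regular permission check applies
        return isSuperUser || hasWritePermission;
    }
  }

  public static void main(String[] args) {
    // A non-superuser is rejected in SUPERUSER_ONLY mode even with write access.
    System.out.println(maySetStoragePolicy(Mode.SUPERUSER_ONLY, false, true)); // prints "false"
  }
}
```

The point of the sketch is that the superuser check short-circuits before any path permission is consulted, which is why the UT setup (rather than each test method) configures the default path.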
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032757#comment-17032757 ] Siddharth Wagle edited comment on HDFS-15154 at 2/8/20 12:03 AM: - Configs UT and Whitespace fix in 02. was (Author: swagle): Whitespace fix in 02.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032757#comment-17032757 ] Siddharth Wagle commented on HDFS-15154: Whitespace fix in 02.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.02.patch
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Status: Patch Available (was: Open) Allow superuser only based on config setting. cc:[~arp] for review.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.01.patch
[jira] [Assigned] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDFS-15154: -- Assignee: Siddharth Wagle
[jira] [Comment Edited] (HDFS-14668) Support Fuse with Users from multiple Security Realms
[ https://issues.apache.org/jira/browse/HDFS-14668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979471#comment-16979471 ] Siddharth Wagle edited comment on HDFS-14668 at 11/25/19 5:39 PM: -- + [~arpaga] and [~xyao] for comments. was (Author: swagle): + [~arpaga] for comments. > Support Fuse with Users from multiple Security Realms > - > > Key: HDFS-14668 > URL: https://issues.apache.org/jira/browse/HDFS-14668 > Project: Hadoop HDFS > Issue Type: Improvement > Components: fuse-dfs >Reporter: Sailesh Patel >Assignee: Istvan Fajth >Priority: Minor > > Users from non-default krb5 domain can't use hadoop-fuse. > There are 2 Realms with kdc. > -one realm is for human users (USERS.COM.US) > -the other is for service principals. (SERVICE.COM.US) > Cross realm trust is setup. > In krb5.conf the default domain is set to SERVICE.COM.US > Users within USERS.COM.US Realm are not able to put any files to Fuse mounted > location > The client shows: > cp: cannot create regular file ‘/hdfs_mount/tmp/hello_from_fuse.txt’: > Input/output error
[jira] [Commented] (HDFS-14668) Support Fuse with Users from multiple Security Realms
[ https://issues.apache.org/jira/browse/HDFS-14668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979471#comment-16979471 ] Siddharth Wagle commented on HDFS-14668: + [~arpaga] for comments.
[jira] [Assigned] (HDDS-2590) Integration tests for Recon with Ozone Manager.
[ https://issues.apache.org/jira/browse/HDDS-2590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2590: - Assignee: Shweta > Integration tests for Recon with Ozone Manager. > --- > > Key: HDDS-2590 > URL: https://issues.apache.org/jira/browse/HDDS-2590 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Recon >Reporter: Aravindan Vijayan >Assignee: Shweta >Priority: Major > Fix For: 0.5.0 > > > Currently, Recon has only unit tests. We need to add the following > integration tests to make sure there are no regressions or contract breakage > with Ozone Manager. > The first step would be to add Recon as a new component to Mini Ozone cluster. > * *Test 1* - *Verify Recon can get full snapshot and subsequent delta updates > from Ozone Manager on startup.* > > Start up a Mini Ozone cluster (with Recon) with a few keys in OM. > > Verify Recon gets full DB snapshot from OM. > > Add 100 keys to OM > > Verify Recon picks up the new keys using the delta updates mechanism. > > Verify OM DB seq number == Recon's OM DB snapshot's seq number > * *Test 2* - *Verify Recon restart does not cause issues with the OM DB > syncing.* >> Startup Mini Ozone cluster (with Recon). >> Add 100 keys to OM >> Verify Recon picks up the new keys. >> Stop Recon Server >> Add 5 keys to OM. >> Start Recon Server >> Verify that Recon Server does not request full snapshot from OM (since > only a small >number of keys have been added, and hence Recon should be able to get > the >updates alone) >> Verify OM DB seq number == Recon's OM DB snapshot's seq number > *Note* : This exercise might expose a few bugs in Recon-OM integration which > is perfectly normal and is the exact reason why we want these tests to be > written. Please file JIRAs for any major issues encountered and link them > here. Minor issues can hopefully be fixed as part of this effort. 
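The HDDS-2590 message above specifies Test 2: after a short Recon outage, Recon should apply delta updates from OM rather than requesting a full snapshot, and sequence numbers must converge. A self-contained sketch of that decision, with hypothetical names (Recon's real sync code and thresholds are not shown here), could look like:

```java
// Illustrative sketch of the sync decision Test 2 above verifies. The class,
// method, and threshold names are assumptions for illustration only, not
// Recon's actual API: Recon compares its last-applied OM DB sequence number
// against OM's current one and requests delta updates when the gap is small.
public class ReconSyncSketch {

    /** Sync strategies Recon can choose between. */
    public enum Mode { DELTA, FULL_SNAPSHOT }

    /**
     * @param omSeq       current OM DB sequence number
     * @param reconSeq    sequence number of Recon's last OM DB snapshot
     * @param maxDeltaLag largest gap still servable as delta updates (assumed knob)
     */
    public static Mode decideSync(long omSeq, long reconSeq, long maxDeltaLag) {
        long lag = omSeq - reconSeq;
        if (lag < 0) {
            // Recon ahead of OM should never happen; resync defensively.
            return Mode.FULL_SNAPSHOT;
        }
        return lag <= maxDeltaLag ? Mode.DELTA : Mode.FULL_SNAPSHOT;
    }

    public static void main(String[] args) {
        // Test 2 scenario: 5 keys added while Recon was down -> delta suffices.
        System.out.println(decideSync(105, 100, 1000)); // DELTA
        // Recon far behind the servable window -> full snapshot.
        System.out.println(decideSync(5000, 100, 1000)); // FULL_SNAPSHOT
    }
}
```

An integration test along the lines described above would assert that after restart the chosen mode is DELTA and that OM's sequence number equals Recon's once the sync completes.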
[jira] [Comment Edited] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands
[ https://issues.apache.org/jira/browse/HDDS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977681#comment-16977681 ] Siddharth Wagle edited comment on HDDS-2034 at 11/19/19 6:19 PM: - Hi [~Sammi], I believe HDDS-2497 is related to the change since we enabled pipeline check for safemode exit. For 1 node RATIS, the HealthyPipelineSafeModeRule.process does not check for 1 node pipeline. It should be an easy fix to go along with this patch, assigning that Jira to you. Thanks. was (Author: swagle): Hi [~Sammi], I believe HDDS-2497 is related to the change since once we enable pipeline check for safemode exit. For 1 node RATIS, the HealthyPipelineSafeModeRule.process does not check for 1 node pipeline. It should be an easy fix to go along with this patch, assigning that Jira to you. Thanks. > Async RATIS pipeline creation and destroy through heartbeat commands > > > Key: HDDS-2034 > URL: https://issues.apache.org/jira/browse/HDDS-2034 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Sammi Chen >Assignee: Sammi Chen >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 13.5h > Remaining Estimate: 0h > > Currently, pipeline creation and destroy are synchronous operations. SCM > directly connects to each datanode of the pipeline through a gRPC channel to > create or destroy the pipeline. > This task is to remove the gRPC channel and send pipeline creation and destroy > actions through heartbeat commands to each datanode.
[jira] [Comment Edited] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands
[ https://issues.apache.org/jira/browse/HDDS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977681#comment-16977681 ] Siddharth Wagle edited comment on HDDS-2034 at 11/19/19 6:18 PM: - Hi [~Sammi], I believe HDDS-2497 is related to the change since once we enable pipeline check for safemode exit. For 1 node RATIS, the HealthyPipelineSafeModeRule.process does not check for 1 node pipeline. It should be an easy fix to go along with this patch, assigning that Jira to you. Thanks. was (Author: swagle): Hi [~Sammi], I believe HDDS-2497 is related to the change since, for 1 node RATIS, the HealthyPipelineSafeModeRule.process does not check for 1 node pipeline. It should be an easy fix to go along with this patch, assigning that Jira to you. Thanks.
[jira] [Commented] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands
[ https://issues.apache.org/jira/browse/HDDS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977681#comment-16977681 ] Siddharth Wagle commented on HDDS-2034: --- Hi [~Sammi], I believe HDDS-2497 is related to the change since, for 1 node RATIS, the HealthyPipelineSafeModeRule.process does not check for 1 node pipeline. It should be an easy fix to go along with this patch, assigning that Jira to you. Thanks.
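The comments above describe the HDDS-2497 symptom: HealthyPipelineSafeModeRule.process does not count factor-ONE pipelines, so a single-datanode cluster can never satisfy the healthy-pipeline threshold and SCM stays in safe mode. A minimal sketch of that counting bug and its fix — plain Java with assumed names, not the actual SCM rule code:

```java
import java.util.List;

// Hypothetical sketch of the safemode-rule issue discussed above (not the
// real HealthyPipelineSafeModeRule): if only factor-THREE pipelines are
// counted as healthy, a lone factor-ONE pipeline is ignored and the rule
// never passes on a single-datanode cluster.
public class SafeModeRuleSketch {

    /** Replication factors a pipeline can have in this sketch. */
    public enum Factor { ONE, THREE }

    /** Counts open pipelines, optionally including factor-ONE pipelines. */
    public static long healthyCount(List<Factor> openPipelines, boolean includeFactorOne) {
        return openPipelines.stream()
            .filter(f -> f == Factor.THREE || (includeFactorOne && f == Factor.ONE))
            .count();
    }

    public static void main(String[] args) {
        List<Factor> singleNodeCluster = List.of(Factor.ONE);
        // Buggy behavior: the only pipeline is ignored, count stays 0,
        // so allocateBlock fails with SCM_IN_SAFE_MODE.
        System.out.println(healthyCount(singleNodeCluster, false)); // 0
        // Fixed behavior: the factor-ONE pipeline counts toward the threshold.
        System.out.println(healthyCount(singleNodeCluster, true));  // 1
    }
}
```

This matches the HDDS-2497 repro quoted later in this thread, where key creation on a factor-1 docker-compose cluster fails the SafeModePrecheck.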
[jira] [Assigned] (HDDS-2497) SafeMode check should allow key creation on single node pipeline when replication factor is 1
[ https://issues.apache.org/jira/browse/HDDS-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2497: - Assignee: Sammi Chen (was: Siddharth Wagle) > SafeMode check should allow key creation on single node pipeline when > replication factor is 1 > - > > Key: HDDS-2497 > URL: https://issues.apache.org/jira/browse/HDDS-2497 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Sammi Chen >Priority: Major > > Start a single datanode ozone docker-compose with replication factor of 1. > {code:java} > OZONE-SITE.XML_ozone.replication=1{code} > The key creation failed with Safemode exception below. > {code:java} > >$ docker-compose exec om bash > bash-4.2$ ozone sh vol create /vol1 > bash-4.2$ ozone sh bucket create /vol1/bucket1 > ozone sh kbash-4.2$ ozone sh key put /vol1/bucket1/key1 README.md > SCM_IN_SAFE_MODE SafeModePrecheck failed for allocateBlock{code}
[jira] [Created] (HDDS-2539) Sonar: Fix synchronization issues in SCMContainerManager class
Siddharth Wagle created HDDS-2539: - Summary: Sonar: Fix synchronization issues in SCMContainerManager class Key: HDDS-2539 URL: https://issues.apache.org/jira/browse/HDDS-2539 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Affects Versions: 0.5.0 Reporter: Siddharth Wagle Assignee: Siddharth Wagle Fix For: 0.5.0 https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-sEKcVY8lQ4ZsDQ=false=BUG
[jira] [Updated] (HDDS-2538) Sonar: Fix issues found in DatabaseHelper in ozone audit parser package
[ https://issues.apache.org/jira/browse/HDDS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-2538: -- Labels: pull-request-available sonar (was: pull-request-available) > Sonar: Fix issues found in DatabaseHelper in ozone audit parser package > --- > > Key: HDDS-2538 > URL: https://issues.apache.org/jira/browse/HDDS-2538 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Tools >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Labels: pull-request-available, sonar > Fix For: 0.5.0 > > Time Spent: 10m > Remaining Estimate: 0h > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-dWKcVY8lQ4Zr39=false=BLOCKER=BUG
[jira] [Updated] (HDDS-2538) Sonar: Fix issues found in DatabaseHelper in ozone audit parser package
[ https://issues.apache.org/jira/browse/HDDS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-2538: -- Summary: Sonar: Fix issues found in DatabaseHelper in ozone audit parser package (was: Sonar: Fix issues found in DatabaseHelper in ozone audit parser)
[jira] [Created] (HDDS-2538) Sonar: Fix issues found in DatabaseHelper in ozone audit parser
Siddharth Wagle created HDDS-2538: - Summary: Sonar: Fix issues found in DatabaseHelper in ozone audit parser Key: HDDS-2538 URL: https://issues.apache.org/jira/browse/HDDS-2538 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Tools Affects Versions: 0.5.0 Reporter: Siddharth Wagle Assignee: Siddharth Wagle Fix For: 0.5.0 https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-dWKcVY8lQ4Zr39=false=BLOCKER=BUG
[jira] [Assigned] (HDDS-2502) Close ScmClient in RatisInsight
[ https://issues.apache.org/jira/browse/HDDS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2502: - Assignee: Siddharth Wagle > Close ScmClient in RatisInsight > --- > > Key: HDDS-2502 > URL: https://issues.apache.org/jira/browse/HDDS-2502 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Attila Doroszlai >Assignee: Siddharth Wagle >Priority: Major > Labels: sonar > > {{ScmClient}} in {{RatisInsight}} should be closed after use. > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mYKcVY8lQ4Zr_s=AW5md-mYKcVY8lQ4Zr_s > Also two other minor issues reported in the same file: > https://sonarcloud.io/project/issues?fileUuids=AW5md-HeKcVY8lQ4ZrXL=hadoop-ozone=false
[jira] [Updated] (HDDS-2501) Sonar: Fix issues found in the ObjectEndpoint class
[ https://issues.apache.org/jira/browse/HDDS-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-2501: -- Summary: Sonar: Fix issues found in the ObjectEndpoint class (was: Ensure stream is closed in ObjectEndpoint) > Sonar: Fix issues found in the ObjectEndpoint class > --- > > Key: HDDS-2501 > URL: https://issues.apache.org/jira/browse/HDDS-2501 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: S3 >Reporter: Attila Doroszlai >Assignee: Siddharth Wagle >Priority: Major > Labels: sonar > > Ensure {{ObjectOutputStream}} is closed in {{ObjectEndpoint}}: > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-j-KcVY8lQ4Zr96=AW5md-j-KcVY8lQ4Zr96 > And fix other issues in the same file: > https://sonarcloud.io/project/issues?fileUuids=AW5md-HdKcVY8lQ4ZrVc=hadoop-ozone=false
[jira] [Assigned] (HDDS-2501) Ensure stream is closed in ObjectEndpoint
[ https://issues.apache.org/jira/browse/HDDS-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2501: - Assignee: Siddharth Wagle
[jira] [Resolved] (HDFS-14980) diskbalancer query command always tries to contact to port 9867
[ https://issues.apache.org/jira/browse/HDFS-14980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle resolved HDFS-14980. Resolution: Not A Problem This is an issue specific to an HDFS deployment using Cloudera Manager. The client-side configuration in /etc/hadoop/conf/ (hdfs-site.xml) excludes all daemon configs, so DiskBalancerCli cannot resolve {{dfs.datanode.ipc.address}}. If you add this property to the configuration file, the query command works as expected. > diskbalancer query command always tries to contact to port 9867 > --- > > Key: HDFS-14980 > URL: https://issues.apache.org/jira/browse/HDFS-14980 > Project: Hadoop HDFS > Issue Type: Bug > Components: diskbalancer >Reporter: Nilotpal Nandi >Assignee: Siddharth Wagle >Priority: Major > > The diskbalancer query command always tries to connect to port 9867 even when > the datanode IPC port is different. > In this setup, the datanode IPC port is set to 20001. > > The diskbalancer report command works fine and connects to IPC port 20001 > > {noformat} > hdfs diskbalancer -report -node 172.27.131.193 > 19/11/12 08:58:55 INFO command.Command: Processing report command > 19/11/12 08:58:57 INFO balancer.KeyManager: Block token params received from > NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec > 19/11/12 08:58:57 INFO block.BlockTokenSecretManager: Setting block keys > 19/11/12 08:58:57 INFO balancer.KeyManager: Update block keys every 2hrs, > 30mins, 0sec > 19/11/12 08:58:58 INFO command.Command: Reporting volume information for > DataNode(s). These DataNode(s) are parsed from '172.27.131.193'. > Processing report command > Reporting volume information for DataNode(s). These DataNode(s) are parsed > from '172.27.131.193'. > [172.27.131.193:20001] - : 3 > volumes with node data density 0.05. 
> [DISK: volume-/dataroot/ycloud/dfs/NEW_DISK1/] - 0.15 used: > 39343871181/259692498944, 0.85 free: 220348627763/259692498944, isFailed: > False, isReadOnly: False, isSkip: False, isTransient: False. > [DISK: volume-/dataroot/ycloud/dfs/NEW_DISK2/] - 0.15 used: > 39371179986/259692498944, 0.85 free: 220321318958/259692498944, isFailed: > False, isReadOnly: False, isSkip: False, isTransient: False. > [DISK: volume-/dataroot/ycloud/dfs/dn/] - 0.19 used: > 49934903670/259692498944, 0.81 free: 209757595274/259692498944, isFailed: > False, isReadOnly: False, isSkip: False, isTransient: False. > > {noformat} > > But diskbalancer query command fails and tries to connect to port 9867 > (default port). > > {noformat} > hdfs diskbalancer -query 172.27.131.193 > 19/11/12 06:37:15 INFO command.Command: Executing "query plan" command. > 19/11/12 06:37:16 INFO ipc.Client: Retrying connect to server: > /172.27.131.193:9867. Already tried 0 time(s); retry policy is > RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 > MILLISECONDS) > 19/11/12 06:37:17 INFO ipc.Client: Retrying connect to server: > /172.27.131.193:9867. Already tried 1 time(s); retry policy is > RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 > MILLISECONDS) > .. > .. > .. > 19/11/12 06:37:25 ERROR tools.DiskBalancerCLI: Exception thrown while running > DiskBalancerCLI. > {noformat} > > > Expectation : > diskbalancer query command should work fine without explicitly mentioning > datanode IPC port address
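The resolution above says the fix is a client-side configuration addition. A sketch of the hdfs-site.xml fragment implied by that resolution follows; the port value 20001 is taken from the setup described in the ticket, so substitute the IPC port your datanodes actually bind.

```xml
<!-- Client-side /etc/hadoop/conf/hdfs-site.xml addition suggested by the
     resolution above. The host/port value is illustrative: use the IPC
     address your datanodes are configured with. -->
<property>
  <name>dfs.datanode.ipc.address</name>
  <value>0.0.0.0:20001</value>
</property>
```

With this present, DiskBalancerCli can resolve the datanode IPC port instead of falling back to the 9867 default shown in the failing query logs.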
[jira] [Assigned] (HDDS-2497) SafeMode check should allow key creation on single node pipeline when replication factor is 1
[ https://issues.apache.org/jira/browse/HDDS-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2497: - Assignee: Siddharth Wagle
[jira] [Created] (HDDS-2498) Sonar: Fix issues found in StorageContainerManager class
Siddharth Wagle created HDDS-2498: - Summary: Sonar: Fix issues found in StorageContainerManager class Key: HDDS-2498 URL: https://issues.apache.org/jira/browse/HDDS-2498 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Affects Versions: 0.5.0 Reporter: Siddharth Wagle Assignee: Siddharth Wagle Fix For: 0.5.0 https://sonarcloud.io/project/issues?fileUuids=AW5md-HfKcVY8lQ4ZrcG=hadoop-ozone=AW5md-tIKcVY8lQ4ZsEr=false
[jira] [Created] (HDDS-2493) Sonar: Locking on a parameter in NetUtils.removeOutscope
Siddharth Wagle created HDDS-2493: - Summary: Sonar: Locking on a parameter in NetUtils.removeOutscope Key: HDDS-2493 URL: https://issues.apache.org/jira/browse/HDDS-2493 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Affects Versions: 0.5.0 Reporter: Siddharth Wagle Assignee: Siddharth Wagle Fix For: 0.5.0 https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2hKcVY8lQ4ZsNd=false=BUG
[jira] [Created] (HDDS-2489) Sonar: Anonymous class based initialization in HddsClientUtils
Siddharth Wagle created HDDS-2489: - Summary: Sonar: Anonymous class based initialization in HddsClientUtils Key: HDDS-2489 URL: https://issues.apache.org/jira/browse/HDDS-2489 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Client Affects Versions: 0.5.0 Reporter: Siddharth Wagle Assignee: Siddharth Wagle Fix For: 0.5.0 https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWN=false=BUG
[jira] [Assigned] (HDFS-14980) diskbalancer query command always tries to contact to port 9867
[ https://issues.apache.org/jira/browse/HDFS-14980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDFS-14980: -- Assignee: Siddharth Wagle > diskbalancer query command always tries to contact to port 9867 > --- > > Key: HDFS-14980 > URL: https://issues.apache.org/jira/browse/HDFS-14980 > Project: Hadoop HDFS > Issue Type: Bug > Components: diskbalancer >Reporter: Nilotpal Nandi >Assignee: Siddharth Wagle >Priority: Major > > disbalancer query commands always tries to connect to port 9867 even when > datanode IPC port is different. > In this setup , datanode IPC port is set to 20001. > > diskbalancer report command works fine and connects to IPC port 20001 > > {noformat} > hdfs diskbalancer -report -node 172.27.131.193 > 19/11/12 08:58:55 INFO command.Command: Processing report command > 19/11/12 08:58:57 INFO balancer.KeyManager: Block token params received from > NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec > 19/11/12 08:58:57 INFO block.BlockTokenSecretManager: Setting block keys > 19/11/12 08:58:57 INFO balancer.KeyManager: Update block keys every 2hrs, > 30mins, 0sec > 19/11/12 08:58:58 INFO command.Command: Reporting volume information for > DataNode(s). These DataNode(s) are parsed from '172.27.131.193'. > Processing report command > Reporting volume information for DataNode(s). These DataNode(s) are parsed > from '172.27.131.193'. > [172.27.131.193:20001] - : 3 > volumes with node data density 0.05. > [DISK: volume-/dataroot/ycloud/dfs/NEW_DISK1/] - 0.15 used: > 39343871181/259692498944, 0.85 free: 220348627763/259692498944, isFailed: > False, isReadOnly: False, isSkip: False, isTransient: False. > [DISK: volume-/dataroot/ycloud/dfs/NEW_DISK2/] - 0.15 used: > 39371179986/259692498944, 0.85 free: 220321318958/259692498944, isFailed: > False, isReadOnly: False, isSkip: False, isTransient: False. 
> [DISK: volume-/dataroot/ycloud/dfs/dn/] - 0.19 used: > 49934903670/259692498944, 0.81 free: 209757595274/259692498944, isFailed: > False, isReadOnly: False, isSkip: False, isTransient: False. > > {noformat} > > But the diskbalancer query command fails and tries to connect to port 9867 > (the default port). > > {noformat} > hdfs diskbalancer -query 172.27.131.193 > 19/11/12 06:37:15 INFO command.Command: Executing "query plan" command. > 19/11/12 06:37:16 INFO ipc.Client: Retrying connect to server: > /172.27.131.193:9867. Already tried 0 time(s); retry policy is > RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 > MILLISECONDS) > 19/11/12 06:37:17 INFO ipc.Client: Retrying connect to server: > /172.27.131.193:9867. Already tried 1 time(s); retry policy is > RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 > MILLISECONDS) > .. > .. > .. > 19/11/12 06:37:25 ERROR tools.DiskBalancerCLI: Exception thrown while running > DiskBalancerCLI. > {noformat} > > > Expectation: > The diskbalancer query command should work fine without explicitly mentioning > the datanode IPC port address -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-2382) Consider reducing number of file::exists() calls during write operation
[ https://issues.apache.org/jira/browse/HDDS-2382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2382: - Assignee: Aravindan Vijayan (was: Siddharth Wagle) We do need to verify the behavior if the chunksPath is deleted. One way is to fail late and make sure the behavior is consistent. > Consider reducing number of file::exists() calls during write operation > --- > > Key: HDDS-2382 > URL: https://issues.apache.org/jira/browse/HDDS-2382 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Rajesh Balamohan >Assignee: Aravindan Vijayan >Priority: Major > Labels: performance > > When writing 100-200 MB files with multiple threads, observed lots of > {{file::exists()}} checks. > For every 16 MB chunk, it ends up checking whether the {{chunksLoc}} directory > exists or not. (ref: > [https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java#L239]) > Also, this check ({{ChunkUtils.getChunkFile}}) happens from 2 places: > 1. org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk > 2. org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$applyTransaction > Note that these are folders and not actual chunk filenames. It would be > helpful to reduce this check if we track create/delete of these folders. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
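The folder-tracking suggested above could look like the following sketch (names like ChunkDirCache and ensureDirectory are hypothetical, not the actual ChunkUtils code): a concurrent set remembers directories already created, so the per-chunk existence check hits the filesystem only once per directory, with an invalidation hook for the "fail late" concern about out-of-band deletion.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ChunkDirCache {
    // Directories known to exist; a ConcurrentHashMap-backed set is safe
    // for multiple concurrent writer threads.
    private final Set<Path> knownDirs = ConcurrentHashMap.newKeySet();
    final AtomicInteger fsTouches = new AtomicInteger(); // illustration only

    /** Ensure the chunk directory exists, touching the filesystem at most
     *  once per directory instead of once per 16 MB chunk. */
    public void ensureDirectory(Path dir) throws IOException {
        if (knownDirs.contains(dir)) {
            return; // fast path: no file::exists() call at all
        }
        fsTouches.incrementAndGet();
        Files.createDirectories(dir); // idempotent: no-op if already present
        knownDirs.add(dir);
    }

    /** If the directory is deleted out of band, the cache entry must be
     *  dropped (e.g. on a write failure) so the next write re-creates it. */
    public void invalidate(Path dir) {
        knownDirs.remove(dir);
    }

    public static void main(String[] args) throws IOException {
        ChunkDirCache cache = new ChunkDirCache();
        Path dir = Files.createTempDirectory("chunks").resolve("container-1");
        for (int i = 0; i < 16; i++) {   // e.g. 16 chunks of one key
            cache.ensureDirectory(dir);
        }
        System.out.println(cache.fsTouches.get()); // 1 filesystem touch, not 16
    }
}
```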
[jira] [Assigned] (HDDS-2382) Consider reducing number of file::exists() calls during write operation
[ https://issues.apache.org/jira/browse/HDDS-2382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2382: - Assignee: Siddharth Wagle > Consider reducing number of file::exists() calls during write operation > --- > > Key: HDDS-2382 > URL: https://issues.apache.org/jira/browse/HDDS-2382 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Rajesh Balamohan >Assignee: Siddharth Wagle >Priority: Major > Labels: performance > > When writing 100-200 MB files with multiple threads, observed lots of > {{file::exists()}} checks. > For every 16 MB chunk, it ends up checking whether the {{chunksLoc}} directory > exists or not. (ref: > [https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java#L239]) > Also, this check ({{ChunkUtils.getChunkFile}}) happens from 2 places: > 1. org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk > 2. org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$applyTransaction > Note that these are folders and not actual chunk filenames. It would be > helpful to reduce this check if we track create/delete of these folders. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-2366) Remove ozone.enabled flag
[ https://issues.apache.org/jira/browse/HDDS-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2366: - Assignee: Siddharth Wagle > Remove ozone.enabled flag > - > > Key: HDDS-2366 > URL: https://issues.apache.org/jira/browse/HDDS-2366 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Bharat Viswanadham >Assignee: Siddharth Wagle >Priority: Major > Labels: newbie > > Currently, when ozone is started, the start-ozone.sh/stop-ozone.sh scripts check > whether this property is enabled before starting ozone services. This > property and this check can now be removed. > > This was needed when ozone was part of Hadoop and we didn't want to start > ozone services by default. There is no such requirement anymore. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDDS-2350) NullPointerException seen in datanode log while writing data
[ https://issues.apache.org/jira/browse/HDDS-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957978#comment-16957978 ] Siddharth Wagle edited comment on HDDS-2350 at 10/23/19 3:45 PM: - Hi [~nilotpalnandi], thanks for reporting this; however, this was already fixed with https://issues.apache.org/jira/browse/RATIS-717. This was included in HDDS-2340; can you verify if your git hash for this test includes HDDS-2340? was (Author: swagle): Hi [~nilotpalnandi], thanks for reporting this; however, this was already fixed with https://issues.apache.org/jira/browse/RATIS-717. This was included in HDDS-2340; can you verify if your git hash includes HDDS-2340? > NullPointerException seen in datanode log while writing data > > > Key: HDDS-2350 > URL: https://issues.apache.org/jira/browse/HDDS-2350 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Nilotpal Nandi >Priority: Major > > A NullPointerException is seen in the datanode log while writing 10GB of data. > There is one pipeline with factor 3 while writing data. 
> {noformat} > 2019-10-23 11:25:45,674 ERROR > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Error getting metrics > from source > ratis_core.ratis_leader.a23fb300-4c1e-420f-a21e-7e73d0c22cbe@group-4CA404C938C2 > java.lang.NullPointerException > at > org.apache.ratis.server.impl.RaftLeaderMetrics.lambda$null$2(RaftLeaderMetrics.java:86) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.snapshotAllMetrics(HadoopMetrics2Reporter.java:239) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.getMetrics(HadoopMetrics2Reporter.java:219) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368) > at java.util.TimerThread.mainLoop(Timer.java:555) > at java.util.TimerThread.run(Timer.java:505) > 2019-10-23 11:25:55,673 ERROR > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Error getting metrics > from source > ratis_core.ratis_leader.a23fb300-4c1e-420f-a21e-7e73d0c22cbe@group-4CA404C938C2 > java.lang.NullPointerException > at > org.apache.ratis.server.impl.RaftLeaderMetrics.lambda$null$2(RaftLeaderMetrics.java:86) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.snapshotAllMetrics(HadoopMetrics2Reporter.java:239) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.getMetrics(HadoopMetrics2Reporter.java:219) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419) > at > 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368) > at java.util.TimerThread.mainLoop(Timer.java:555) > at java.util.TimerThread.run(Timer.java:505) > 2019-10-23 11:26:05,674 ERROR > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Error getting metrics > from source > ratis_core.ratis_leader.a23fb300-4c1e-420f-a21e-7e73d0c22cbe@group-4CA404C938C2 > java.lang.NullPointerException > at > org.apache.ratis.server.impl.RaftLeaderMetrics.lambda$null$2(RaftLeaderMetrics.java:86) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.snapshotAllMetrics(HadoopMetrics2Reporter.java:239) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.getMetrics(HadoopMetrics2Reporter.java:219) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368) > at java.util.TimerThread.mainLoop(Timer.java:555) > at java.util.TimerThread.run(Timer.java:505){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HDDS-2350) NullPointerException seen in datanode log while writing data
[ https://issues.apache.org/jira/browse/HDDS-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957978#comment-16957978 ] Siddharth Wagle commented on HDDS-2350: --- Hi [~nilotpalnandi], thanks for reporting this; however, this was already fixed with https://issues.apache.org/jira/browse/RATIS-717. This was included in HDDS-2340; can you verify if your git hash includes HDDS-2340? > NullPointerException seen in datanode log while writing data > > > Key: HDDS-2350 > URL: https://issues.apache.org/jira/browse/HDDS-2350 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Nilotpal Nandi >Priority: Major > > A NullPointerException is seen in the datanode log while writing 10GB of data. > There is one pipeline with factor 3 while writing data. > {noformat} > 2019-10-23 11:25:45,674 ERROR > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Error getting metrics > from source > ratis_core.ratis_leader.a23fb300-4c1e-420f-a21e-7e73d0c22cbe@group-4CA404C938C2 > java.lang.NullPointerException > at > org.apache.ratis.server.impl.RaftLeaderMetrics.lambda$null$2(RaftLeaderMetrics.java:86) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.snapshotAllMetrics(HadoopMetrics2Reporter.java:239) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.getMetrics(HadoopMetrics2Reporter.java:219) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368) > at java.util.TimerThread.mainLoop(Timer.java:555) > at java.util.TimerThread.run(Timer.java:505) > 2019-10-23 
11:25:55,673 ERROR > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Error getting metrics > from source > ratis_core.ratis_leader.a23fb300-4c1e-420f-a21e-7e73d0c22cbe@group-4CA404C938C2 > java.lang.NullPointerException > at > org.apache.ratis.server.impl.RaftLeaderMetrics.lambda$null$2(RaftLeaderMetrics.java:86) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.snapshotAllMetrics(HadoopMetrics2Reporter.java:239) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.getMetrics(HadoopMetrics2Reporter.java:219) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368) > at java.util.TimerThread.mainLoop(Timer.java:555) > at java.util.TimerThread.run(Timer.java:505) > 2019-10-23 11:26:05,674 ERROR > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Error getting metrics > from source > ratis_core.ratis_leader.a23fb300-4c1e-420f-a21e-7e73d0c22cbe@group-4CA404C938C2 > java.lang.NullPointerException > at > org.apache.ratis.server.impl.RaftLeaderMetrics.lambda$null$2(RaftLeaderMetrics.java:86) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.snapshotAllMetrics(HadoopMetrics2Reporter.java:239) > at > com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.getMetrics(HadoopMetrics2Reporter.java:219) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419) > at > 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368) > at java.util.TimerThread.mainLoop(Timer.java:555) > at java.util.TimerThread.run(Timer.java:505){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
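The stack trace shows a gauge lambda inside RaftLeaderMetrics throwing while the metrics timer snapshots all sources; the actual fix landed upstream in RATIS-717. Independent of that fix, the general defensive pattern is worth noting. This is an illustrative sketch (not the Ratis code) of wrapping a Dropwizard-style Supplier gauge so transiently-null leader state yields a fallback instead of an NPE inside the reporter:

```java
import java.util.function.Supplier;

public class SafeGauge {
    /** Wrap a gauge supplier so that state torn down between metric
     *  snapshots (e.g. a leader stepping down) returns a fallback value
     *  instead of throwing NullPointerException in the metrics timer. */
    static Supplier<Long> nullSafe(Supplier<Long> raw, long fallback) {
        return () -> {
            try {
                Long v = raw.get();
                return v != null ? v : fallback;
            } catch (NullPointerException e) {
                return fallback; // underlying state vanished mid-snapshot
            }
        };
    }

    public static void main(String[] args) {
        Long[] leaderState = { null };              // not yet elected
        Supplier<Long> gauge = nullSafe(() -> leaderState[0], 0L);
        System.out.println(gauge.get());            // 0, no NPE
        leaderState[0] = 42L;                       // leader established
        System.out.println(gauge.get());            // 42
    }
}
```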
[jira] [Comment Edited] (HDDS-1574) Ensure same datanodes are not a part of multiple pipelines
[ https://issues.apache.org/jira/browse/HDDS-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16956364#comment-16956364 ] Siddharth Wagle edited comment on HDDS-1574 at 10/21/19 6:28 PM: - {noformat} Do you mean we don't want 2 pipelines to share the exactly same group of datanodes as members? {noformat} Yes, those were my initial thoughts on that matter. Given the constraint of a raft log per group on separate disks, and the fact that we randomly choose a rack-local node, this constraint should already be satisfied; however, it could be enforced as a part of this jira. Moreover, the same nodes participating in more than one group does not solve _pipeline availability_. The only reason to have such a setup is if Ozone cannot saturate disk bandwidth with one group for the datanodes (assuming traditionally we saturate disk before network). So, I think this constraint is still relevant; we could make enforcement configurable (on by default). We could of course punt this until the rest of the multi-raft stuff is implemented. was (Author: swagle): {noformat} Do you mean we don't want 2 pipelines to share the exactly same group of datanodes as members? {noformat} Yes, those were my initial thoughts on that matter. Given the constraint of a raft log per group on separate disks, and the fact that we randomly choose a rack-local node, this constraint should already be satisfied; however, it could be enforced as a part of this jira. Moreover, the same nodes participating in more than one group does not solve _pipeline availability_. The only reason to have such a setup is if Ozone cannot saturate disk bandwidth with one group (assuming traditionally we saturate disk before network). So, I think this constraint is still relevant; we could of course punt this until the rest of the multi-raft stuff is implemented. 
> Ensure same datanodes are not a part of multiple pipelines > -- > > Key: HDDS-1574 > URL: https://issues.apache.org/jira/browse/HDDS-1574 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Siddharth Wagle >Assignee: Li Cheng >Priority: Major > > Details in design doc. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1574) Ensure same datanodes are not a part of multiple pipelines
[ https://issues.apache.org/jira/browse/HDDS-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16956364#comment-16956364 ] Siddharth Wagle commented on HDDS-1574: --- {noformat} Do you mean we don't want 2 pipelines to share the exactly same group of datanodes as members? {noformat} Yes, those were my initial thoughts on that matter. Given the constraint of a raft log per group on separate disks, and the fact that we randomly choose a rack-local node, this constraint should already be satisfied; however, it could be enforced as a part of this jira. Moreover, the same nodes participating in more than one group does not solve _pipeline availability_. The only reason to have such a setup is if Ozone cannot saturate disk bandwidth with one group (assuming traditionally we saturate disk before network). So, I think this constraint is still relevant; we could of course punt this until the rest of the multi-raft stuff is implemented. > Ensure same datanodes are not a part of multiple pipelines > -- > > Key: HDDS-1574 > URL: https://issues.apache.org/jira/browse/HDDS-1574 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Siddharth Wagle >Assignee: Li Cheng >Priority: Major > > Details in design doc. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
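One way to enforce the constraint discussed above is an order-insensitive membership check at pipeline creation time. The sketch below is hypothetical (not the SCM placement code): it rejects a candidate pipeline whose datanode group exactly matches an existing pipeline's, while still allowing overlapping-but-different groups.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PipelineMembershipCheck {
    // Datanode-ID sets of pipelines already created.
    private final Set<Set<String>> existingMemberships = new HashSet<>();

    /** Returns false if some pipeline already has exactly these members.
     *  Comparing Sets makes the check order-insensitive, matching the
     *  "exactly same group of datanodes" semantics discussed above. */
    public boolean tryRegister(List<String> datanodes) {
        Set<String> members = new HashSet<>(datanodes);
        return existingMemberships.add(members);
    }

    public static void main(String[] args) {
        PipelineMembershipCheck check = new PipelineMembershipCheck();
        System.out.println(check.tryRegister(List.of("dn1", "dn2", "dn3"))); // true
        // Same three nodes in another order: rejected as a duplicate group.
        System.out.println(check.tryRegister(List.of("dn3", "dn1", "dn2"))); // false
        // Overlapping but not identical membership is still allowed.
        System.out.println(check.tryRegister(List.of("dn1", "dn2", "dn4"))); // true
    }
}
```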
[jira] [Created] (HDDS-2340) Update RATIS snapshot version
Siddharth Wagle created HDDS-2340: - Summary: Update RATIS snapshot version Key: HDDS-2340 URL: https://issues.apache.org/jira/browse/HDDS-2340 Project: Hadoop Distributed Data Store Issue Type: Improvement Components: build Affects Versions: 0.5.0 Reporter: Siddharth Wagle Assignee: Siddharth Wagle Fix For: 0.5.0 Update the RATIS version to incorporate the fix that went into RATIS-707, among others. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-2323) Mem allocation: Optimise AuditMessage::build()
[ https://issues.apache.org/jira/browse/HDDS-2323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955106#comment-16955106 ] Siddharth Wagle commented on HDDS-2323: --- Thanks for pointing that out [~elek], I will certainly do that. > Mem allocation: Optimise AuditMessage::build() > -- > > Key: HDDS-2323 > URL: https://issues.apache.org/jira/browse/HDDS-2323 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Manager >Reporter: Rajesh Balamohan >Assignee: Siddharth Wagle >Priority: Major > Labels: performance > Fix For: 0.5.0 > > Attachments: HDDS-2323.01.patch, Screenshot 2019-10-18 at 8.24.52 > AM.png > > > String format allocates/processes more than > OzoneAclUtil.fromProtobuf in the write benchmark. > Would be good to use + instead of format. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-2323) Mem allocation: Optimise AuditMessage::build()
[ https://issues.apache.org/jira/browse/HDDS-2323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954853#comment-16954853 ] Siddharth Wagle commented on HDDS-2323: --- Hey [~aengineer]/[~dineshchitlangia], since the change was trivial, I verified using a UT and attached a patch; I can create a PR as well if needed. > Mem allocation: Optimise AuditMessage::build() > -- > > Key: HDDS-2323 > URL: https://issues.apache.org/jira/browse/HDDS-2323 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Manager >Reporter: Rajesh Balamohan >Assignee: Siddharth Wagle >Priority: Major > Labels: performance > Attachments: HDDS-2323.01.patch, Screenshot 2019-10-18 at 8.24.52 > AM.png > > > String format allocates/processes more than > OzoneAclUtil.fromProtobuf in the write benchmark. > Would be good to use + instead of format. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2323) Mem allocation: Optimise AuditMessage::build()
[ https://issues.apache.org/jira/browse/HDDS-2323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-2323: -- Status: Patch Available (was: Open) > Mem allocation: Optimise AuditMessage::build() > -- > > Key: HDDS-2323 > URL: https://issues.apache.org/jira/browse/HDDS-2323 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Manager >Reporter: Rajesh Balamohan >Assignee: Siddharth Wagle >Priority: Major > Labels: performance > Attachments: HDDS-2323.01.patch, Screenshot 2019-10-18 at 8.24.52 > AM.png > > > String format allocates/processes more than > OzoneAclUtil.fromProtobuf in the write benchmark. > Would be good to use + instead of format. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2323) Mem allocation: Optimise AuditMessage::build()
[ https://issues.apache.org/jira/browse/HDDS-2323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-2323: -- Attachment: HDDS-2323.01.patch > Mem allocation: Optimise AuditMessage::build() > -- > > Key: HDDS-2323 > URL: https://issues.apache.org/jira/browse/HDDS-2323 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Manager >Reporter: Rajesh Balamohan >Priority: Major > Labels: performance > Attachments: HDDS-2323.01.patch, Screenshot 2019-10-18 at 8.24.52 > AM.png > > > String format allocates/processes more than > OzoneAclUtil.fromProtobuf in the write benchmark. > Would be good to use + instead of format. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-2323) Mem allocation: Optimise AuditMessage::build()
[ https://issues.apache.org/jira/browse/HDDS-2323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2323: - Assignee: Siddharth Wagle > Mem allocation: Optimise AuditMessage::build() > -- > > Key: HDDS-2323 > URL: https://issues.apache.org/jira/browse/HDDS-2323 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Manager >Reporter: Rajesh Balamohan >Assignee: Siddharth Wagle >Priority: Major > Labels: performance > Attachments: HDDS-2323.01.patch, Screenshot 2019-10-18 at 8.24.52 > AM.png > > > String format allocates/processes more than > OzoneAclUtil.fromProtobuf in the write benchmark. > Would be good to use + instead of format. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
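The "use + instead of format" suggestion in HDDS-2323 can be illustrated as follows. The message layout below is hypothetical (not the actual AuditMessage format): String.format parses the format string and boxes varargs on every call, whereas plain concatenation is lowered by javac to a single StringBuilder (or invokedynamic) chain with fewer allocations.

```java
public class AuditMessageExample {
    // Before: format-string version; allocates a Formatter and parses
    // the "%s" specifiers on every audit event.
    static String buildWithFormat(String user, String op, String status) {
        return String.format("user=%s | op=%s | ret=%s", user, op, status);
    }

    // After: plain concatenation; identical output, cheaper path on a
    // hot audit-logging call site.
    static String buildWithConcat(String user, String op, String status) {
        return "user=" + user + " | op=" + op + " | ret=" + status;
    }

    public static void main(String[] args) {
        String a = buildWithFormat("alice", "CREATE_VOLUME", "SUCCESS");
        String b = buildWithConcat("alice", "CREATE_VOLUME", "SUCCESS");
        System.out.println(a.equals(b)); // true: same string either way
    }
}
```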
[jira] [Comment Edited] (HDDS-2283) Container Creation on datanodes take around 300ms due to rocksdb creation
[ https://issues.apache.org/jira/browse/HDDS-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953232#comment-16953232 ] Siddharth Wagle edited comment on HDDS-2283 at 10/16/19 10:22 PM: -- [~aengineer] Yes, the follow-up Jira will not be taken up blindly without first figuring out which is better or worse: tens of RocksDBs sharing a disk vs. one RocksDB per disk with tens of tables. I took this up as low-hanging fruit, and I agree with the comment about not focusing on micro-benchmarks. This was just a curiosity/exploratory effort from me that took all of 20 mins including the fix, so I went ahead with the patch. was (Author: swagle): [~aengineer] Yes, the follow-up Jira will not be taken up blindly without first figuring out which is better or worse: tens of RocksDBs sharing a disk vs. one RocksDB per disk with tens of tables. I took this up as low-hanging fruit, and I agree with the comment about not focusing on micro-benchmarks. This was just a curiosity/exploratory effort from me that took all of 20 mins including the fix. > Container Creation on datanodes take around 300ms due to rocksdb creation > - > > Key: HDDS-2283 > URL: https://issues.apache.org/jira/browse/HDDS-2283 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Mukul Kumar Singh >Assignee: Siddharth Wagle >Priority: Major > Labels: pull-request-available > Attachments: HDDS-2283.00.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Container creation on datanodes takes around 300ms due to rocksdb creation. > Rocksdb creation is taking a considerable time and this needs to be optimized. > Creating a rocksdb per disk should be enough, and each container can be a table > inside the rocksdb. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-2283) Container Creation on datanodes take around 300ms due to rocksdb creation
[ https://issues.apache.org/jira/browse/HDDS-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953232#comment-16953232 ] Siddharth Wagle commented on HDDS-2283: --- [~aengineer] Yes, the follow-up Jira will not be taken up blindly without first figuring out which is better or worse: tens of RocksDBs sharing a disk vs. one RocksDB per disk with tens of tables. I took this up as low-hanging fruit, and I agree with the comment about not focusing on micro-benchmarks. This was just a curiosity/exploratory effort from me that took all of 20 mins including the fix. > Container Creation on datanodes take around 300ms due to rocksdb creation > - > > Key: HDDS-2283 > URL: https://issues.apache.org/jira/browse/HDDS-2283 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Mukul Kumar Singh >Assignee: Siddharth Wagle >Priority: Major > Labels: pull-request-available > Attachments: HDDS-2283.00.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Container creation on datanodes takes around 300ms due to rocksdb creation. > Rocksdb creation is taking a considerable time and this needs to be optimized. > Creating a rocksdb per disk should be enough, and each container can be a table > inside the rocksdb. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
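The proposal in HDDS-2283 (and the follow-up HDDS-2317 below) replaces one RocksDB instance per container with one long-lived store per disk, where each container becomes a table (in RocksDB terms, a column family). The following is a stdlib-only data-layout sketch, with plain java.util maps standing in for a RocksDB handle and its column families; none of this is the actual Ozone code. The point it illustrates: once the per-disk store is open, container creation is a cheap in-process table registration rather than a fresh ~300ms DB open.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PerDiskStoreLayout {
    // One store per disk (stand-in for a long-lived RocksDB handle).
    // Opening it once amortizes the expensive creation cost across all
    // containers placed on that disk.
    final Map<String, Map<String, Map<String, byte[]>>> diskStores =
        new ConcurrentHashMap<>();

    /** Creating a container is just adding a table to the disk's store
     *  (analogous to createColumnFamily on an already-open RocksDB). */
    public Map<String, byte[]> createContainerTable(String disk, long containerId) {
        Map<String, Map<String, byte[]>> store =
            diskStores.computeIfAbsent(disk, d -> new ConcurrentHashMap<>());
        return store.computeIfAbsent("container-" + containerId,
            c -> new ConcurrentHashMap<>());
    }

    public static void main(String[] args) {
        PerDiskStoreLayout layout = new PerDiskStoreLayout();
        layout.createContainerTable("/data1", 1).put("blockKey", new byte[]{1});
        layout.createContainerTable("/data1", 2).put("blockKey", new byte[]{2});
        // Both containers share one per-disk store; in RocksDB terms they
        // would be two column families in the same DB instance.
        System.out.println(layout.diskStores.get("/data1").size()); // 2
        System.out.println(layout.diskStores.size());               // 1
    }
}
```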
[jira] [Updated] (HDDS-2317) Change rocksDB per Container model to have table per container on RocksDb per disk
[ https://issues.apache.org/jira/browse/HDDS-2317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-2317: -- Fix Version/s: (was: 0.5.0) > Change rocksDB per Container model to have table per container on RocksDb per > disk > -- > > Key: HDDS-2317 > URL: https://issues.apache.org/jira/browse/HDDS-2317 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Datanode >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Priority: Major > > Idea proposed by [~msingh] in HDDS-2283. > Better utilize disk bandwidth by having a RocksDB per disk and putting containers > as tables inside. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-2317) Change rocksDB per Container model to have table per container on RocksDb per disk
[ https://issues.apache.org/jira/browse/HDDS-2317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2317: - Assignee: (was: Siddharth Wagle) > Change rocksDB per Container model to have table per container on RocksDb per > disk > -- > > Key: HDDS-2317 > URL: https://issues.apache.org/jira/browse/HDDS-2317 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Datanode >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Priority: Major > Fix For: 0.5.0 > > > Idea proposed by [~msingh] in HDDS-2283. > Better utilize disk bandwidth by having a RocksDB per disk and putting containers > as tables inside. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-2317) Change rocksDB per Container model to have table per container on RocksDb per disk
Siddharth Wagle created HDDS-2317: - Summary: Change rocksDB per Container model to have table per container on RocksDb per disk Key: HDDS-2317 URL: https://issues.apache.org/jira/browse/HDDS-2317 Project: Hadoop Distributed Data Store Issue Type: Improvement Components: Ozone Datanode Affects Versions: 0.5.0 Reporter: Siddharth Wagle Assignee: Siddharth Wagle Fix For: 0.5.0 Idea proposed by [~msingh] in HDDS-2283. Better utilize disk bandwidth by having Rocks DB per disk and put containers as tables inside. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org