[jira] [Comment Edited] (HDFS-15248) Make the maximum number of ACLs entries configurable

2020-03-29 Thread Wei-Chiu Chuang (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070251#comment-17070251 ]

Wei-Chiu Chuang edited comment on HDFS-15248 at 3/30/20, 1:56 AM:
--

Thanks for offering the patch!

I've had customers ask for raising the ACL entry limit before. I'm not sure 
why the limit is 32, but here are a few reasons why extending it further is 
probably not a good idea:

(1) Manageability. Once a file has more than a dozen ACL entries, they become 
hard to manage and error-prone.
(2) NameNode heap size. Each ACL entry makes an inode occupy more bytes of 
heap, and in a large cluster with hundreds of millions of files the memory 
pressure becomes even worse.
(3) Serialization cost. We currently serialize the files under a directory into 
a protobuf message, which is limited to 64 MB by default; as a result we cap 
the number of files per directory at 1 million. Allowing more ACL entries per 
file means more serialized bytes per file, so a large directory may hit the 
protobuf message limit well before 1 million files (see the rough arithmetic 
below).
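
To put (3) in rough numbers (the per-entry cost here is an illustrative 
assumption, not a measured figure): if each ACL entry added ~8 serialized 
bytes, a file carrying 256 entries would contribute roughly 2 KB of ACL data 
alone, so a directory would hit the 64 MB message cap after only about 32,000 
such files, far short of the 1-million-file limit.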

For these reasons I usually recommend that users delegate authorization to a 
separate entity through an external authorization provider such as Apache 
Sentry or Apache Ranger.
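
That said, for anyone who does want the limit configurable, a minimal sketch 
of what the check could look like; the class name, config key, and default 
below are assumptions for illustration, not the actual patch:
{code}
// Hedged sketch only: the class name and config key are hypothetical,
// not necessarily what HDFS-15248 ships.
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.hdfs.protocol.AclException;

public class AclEntryLimitChecker {
  // Hypothetical key; the real patch may use a different name.
  public static final String ACL_MAX_ENTRIES_KEY =
      "dfs.namenode.acl.max.entries";
  public static final int ACL_MAX_ENTRIES_DEFAULT = 32;

  private final int maxEntries;

  public AclEntryLimitChecker(Configuration conf) {
    this.maxEntries = conf.getInt(ACL_MAX_ENTRIES_KEY,
        ACL_MAX_ENTRIES_DEFAULT);
  }

  /** Rejects an ACL whose entry count exceeds the configured limit. */
  public void check(List<AclEntry> entries) throws AclException {
    if (entries.size() > maxEntries) {
      throw new AclException("Invalid ACL: ACL has " + entries.size()
          + " entries, which exceeds maximum of " + maxEntries + ".");
    }
  }
}
{code}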



> Make the maximum number of ACLs entries configurable
> 
>
> Key: HDFS-15248
> URL: https://issues.apache.org/jira/browse/HDFS-15248
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15248.001.patch, HDFS-15248.patch
>
>
> For a big cluster, the hardcoded maximum of 32 ACL entries is not enough; 
> make it configurable.






[jira] [Comment Edited] (HDFS-15248) Make the maximum number of ACLs entries configurable

2020-03-29 Thread Jira


[ https://issues.apache.org/jira/browse/HDFS-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070380#comment-17070380 ]

Íñigo Goiri edited comment on HDFS-15248 at 3/29/20, 3:14 PM:
--

Thanks [~weichiu] for chiming in.
That was the reason I was asking about the use case.
I'll let you decide whether this should be done.

Regarding the patch itself, the unit test should do something like:
{code}
// Expect an AclException when the entry limit is exceeded.
LambdaTestUtils.intercept(
  AclException.class,
  () -> filterDefaultAclEntries(existing));

// The three-argument overload also asserts that the exception
// message contains the given substring.
LambdaTestUtils.intercept(
  AclException.class,
  "which exceeds maximum of",
  () -> filterDefaultAclEntries(existing));
{code}
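
For completeness, a self-contained version of that assertion might look like 
the sketch below, exercising the hypothetical AclEntryLimitChecker from the 
earlier comment; the config key and the 33-entry setup are illustrative 
assumptions, not the patch's actual test:
{code}
// Hedged sketch: builds one more entry than the configured limit and
// expects the checker (hypothetical, sketched above) to reject it.
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.hdfs.protocol.AclException;
import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.Test;

public class TestAclEntryLimit {
  @Test
  public void testAclLimitExceeded() throws Exception {
    Configuration conf = new Configuration();
    conf.setInt("dfs.namenode.acl.max.entries", 32); // hypothetical key
    AclEntryLimitChecker checker = new AclEntryLimitChecker(conf);

    // 33 named-user entries: one more than the configured maximum.
    List<AclEntry> entries = new ArrayList<>();
    for (int i = 0; i < 33; i++) {
      entries.add(new AclEntry.Builder()
          .setScope(AclEntryScope.ACCESS)
          .setType(AclEntryType.USER)
          .setName("user" + i)
          .setPermission(FsAction.ALL)
          .build());
    }

    // Verify both the exception type and its message.
    LambdaTestUtils.intercept(
        AclException.class,
        "which exceeds maximum of",
        () -> checker.check(entries));
  }
}
{code}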



> Make the maximum number of ACLs entries configurable
> 
>
> Key: HDFS-15248
> URL: https://issues.apache.org/jira/browse/HDFS-15248
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15248.001.patch, HDFS-15248.patch
>
>
> For a big cluster, the hardcoded maximum of 32 ACL entries is not enough; 
> make it configurable.


