-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/74138/#review224744
-----------------------------------------------------------




security-admin/src/main/java/org/apache/ranger/common/db/BaseDao.java
Lines 46 (patched)
<https://reviews.apache.org/r/74138/#comment313522>

    Consider reading the batch size from configuration, like:

      private static final String PROP_BATCH_DELETE_BATCH_SIZE    = "ranger.admin.dao.batch.delete.batch.size";
      private static final int    DEFAULT_BATCH_DELETE_BATCH_SIZE = 1000;
      private static final int    BATCH_DELETE_BATCH_SIZE;

      static {
        BATCH_DELETE_BATCH_SIZE = RangerAdminConfig.getInstance().getInt(PROP_BATCH_DELETE_BATCH_SIZE, DEFAULT_BATCH_DELETE_BATCH_SIZE);

        LOG.info(PROP_BATCH_DELETE_BATCH_SIZE + "=" + BATCH_DELETE_BATCH_SIZE);
      }
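
    A batched delete built on such a configured size would partition the ID
    list before issuing each query, keeping every IN clause under the
    database limit. A minimal sketch of the partitioning alone (class and
    method names here are illustrative, not the actual Ranger DAO code):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPartition {
    // Split a list of IDs into sub-lists of at most batchSize elements,
    // so each generated IN clause stays under the database limit.
    static <T> List<List<T>> partition(List<T> ids, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += batchSize) {
            // subList is a view; each batch covers [i, i + batchSize)
            batches.add(ids.subList(i, Math.min(i + batchSize, ids.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            ids.add(i);
        }

        List<List<Integer>> batches = partition(ids, 1000);
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 500
    }
}
```

    The DAO would then run the named delete query once per batch instead of
    once for the whole list.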



security-admin/src/main/java/org/apache/ranger/common/db/BaseDao.java
Lines 120 (patched)
<https://reviews.apache.org/r/74138/#comment313523>

    I suggest providing an option to retain the existing behavior, for 
example when the configuration is set to "-1".
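
    One way to honor such a sentinel value (the names below are
    illustrative assumptions, not the actual Ranger configuration API):

```java
public class BatchToggle {
    // A configured value of -1 (or any non-positive value) is treated as
    // "batching disabled": fall back to the existing single-query delete.
    static boolean useBatching(int configuredBatchSize) {
        return configuredBatchSize > 0;
    }

    public static void main(String[] args) {
        System.out.println(useBatching(1000)); // true  -> batched deletes
        System.out.println(useBatching(-1));   // false -> legacy single query
    }
}
```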


- Madhan Neethiraj


On Sept. 26, 2022, 6:12 a.m., Ramachandran Krishnan wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/74138/
> -----------------------------------------------------------
> 
> (Updated Sept. 26, 2022, 6:12 a.m.)
> 
> 
> Review request for ranger, Abhay Kulkarni, Madhan Neethiraj, Nikhil P, 
> Pradeep Agrawal, Ramesh Mani, Selvamohan Neethiraj, Sailaja Polavarapu, and 
> Velmurugan Periasamy.
> 
> 
> Bugs: RANGER-3929
>     https://issues.apache.org/jira/browse/RANGER-3929
> 
> 
> Repository: ranger
> 
> 
> Description
> -------
> 
> As of now, we are not handling the maximum number of values that can be 
> passed into the IN clause of SQL queries.
> 
> Most databases support a maximum of 1000 values in the IN clause, e.g.:
> 
>   getEntityManager()
>     .createNamedQuery("XXPolicyRefUser.deleteByIds", tClass)
>     .setParameter("ids", ids)
>     .executeUpdate();
> 
> We will hit the issue only when the size grows beyond 1000; otherwise we 
> will not face any problem.
> 
> We need to fix this now to avoid a potential bug later.
> 
> 
> Diffs
> -----
> 
>   security-admin/src/main/java/org/apache/ranger/common/db/BaseDao.java 
> fc56ff88b 
>   
> security-admin/src/main/java/org/apache/ranger/db/XXPolicyRefAccessTypeDao.java
>  a8233e30c 
>   
> security-admin/src/main/java/org/apache/ranger/db/XXPolicyRefConditionDao.java
>  bc17fcdb0 
>   
> security-admin/src/main/java/org/apache/ranger/db/XXPolicyRefDataMaskTypeDao.java
>  df5f7cd94 
>   security-admin/src/main/java/org/apache/ranger/db/XXPolicyRefGroupDao.java 
> dc41aeadb 
>   
> security-admin/src/main/java/org/apache/ranger/db/XXPolicyRefResourceDao.java 
> 738c6ff49 
>   security-admin/src/main/java/org/apache/ranger/db/XXPolicyRefRoleDao.java 
> 35433c758 
>   security-admin/src/main/java/org/apache/ranger/db/XXPolicyRefUserDao.java 
> eced7b261 
> 
> 
> Diff: https://reviews.apache.org/r/74138/diff/1/
> 
> 
> Testing
> -------
> 
> Created more than 2k users and assigned them to a policy.
> Deleted the policy, which internally triggers the batch delete.
> 
> 
> Thanks,
> 
> Ramachandran Krishnan
> 
>
