On 7/28/20 12:30 PM, Winstanley, Anthony wrote:
We're running with 458 ACIs right now (verified the same number on all nodes), 
running on RHEL 7 with:
389-admin-1.1.46-1.el7.x86_64
389-admin-console-1.1.12-1.el7.noarch
389-admin-console-doc-1.1.12-1.el7.noarch
389-adminutil-1.1.22-2.el7.x86_64
389-adminutil-devel-1.1.22-2.el7.x86_64
389-console-1.1.19-6.el7.noarch
389-ds-1.2.2-6.el7.noarch
389-ds-base-1.3.10.1-9.el7_8.x86_64
389-ds-base-devel-1.3.10.1-9.el7_8.x86_64
389-ds-base-libs-1.3.10.1-9.el7_8.x86_64
389-ds-base-snmp-1.3.10.1-9.el7_8.x86_64
389-ds-console-1.2.16-1.el7.noarch
389-ds-console-doc-1.2.16-1.el7.noarch
389-dsgw-1.1.11-5.el7.x86_64

nsslapd-aclpb-max-selected-acls is set to 2000. (I must have set it long ago, 
which is why my memory of it is fuzzy; thanks for the reminder. I'll put a note 
somewhere I'll find it for next time...)

So really, we should not be experiencing any ACI-related issue, given that 
2000 > 458.

I'll note that daemon restarts of the affected nodes cleared up any issues.

I'm feeling better about this now... One final question:
What about ACL failures, where a single ACI fails (say, because what it targets 
has been removed)? Is there any chance that the failure of the ACL plugin to load 
one ACI would affect the loading of other, valid ACIs?
(I took the opportunity to fix that sort of issue reported in the logs by the 
ACL plugin and have no idea if that affected anything but the actual failed 
ACIs themselves. Again, a restart fixed things, but the restart was after my 
cleanup...)

First, there is an ACI cache that could be corrupted, which would explain why a restart fixes the issue.  Second, you say you "cleaned" some things up.  What exactly did you do?  And what were the exact error messages you saw that led you to the cleanup?

If the ACIs start failing again, then there is a good chance you have found a bug in the ACL cache.  What would be interesting to see is the error log with ACI logging enabled [1]: the output when it works, and then the output once it starts failing.  Then we would have something to compare.  With so many ACIs this is going to be tedious, but it might be the only option besides identifying a reproducible test case.

[1] https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/configuration_command_and_file_reference/core_server_configuration_reference#cnconfig-nsslapd_errorlog_level_Error_Log_Level
Use ldapmodify to set nsslapd-errorlog-level to "128". This will impact server performance, so only enable it for brief tests, then set it back to "0".
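For example, something along these lines (just a sketch: it assumes a local instance on the default port and a Directory Manager bind, so adjust the URL and credentials for your setup):

# Enable ACL processing logging (error log level 128); -W prompts for the bind password
ldapmodify -H ldap://localhost:389 -x -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 128
EOF

# ...reproduce the problem, then turn the logging back off:
ldapmodify -H ldap://localhost:389 -x -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 0
EOF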

Mark


Thanks again,
Anthony

-----Original Message-----
From: Ludwig Krispenz <[email protected]>
Sent: July 27, 2020 11:41 PM
To: [email protected]
Subject: [389-users] Re: Limitations with large numbers of ACIs?


On 28.07.20 03:57, William Brown wrote:
On 28 Jul 2020, at 08:11, Winstanley, Anthony <[email protected]> wrote:

Hello,
We’ve got a large 389ds installation and have run into issues with ACIs not always behaving as expected: an ACI that works on one node does nothing at all on a replicated node. Sometimes reducing the number of ACIs fixes the issue; sometimes restarting a node fixes it. I have not found anything in an error log that gives me any pointers as to what the problem(s) might be. So my questions:
Are there config attributes that control the working of ACIs? What are they and 
how should they be used?
Are there any limitations for the number and size of ACIs per 389ds instance or 
database?
No, there are no limits I am aware of.
There is a limit on selected ACIs: aclpb_max_selected_acls

It is using the default of

#define DEFAULT_ACLPB_MAX_SELECTED_ACLS 200

or the value from "nsslapd-aclpb-max-selected-acls"
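If you need to raise it, the change would look roughly like this (a sketch assuming a local instance and a Directory Manager bind; "<instance-name>" is a placeholder, and as far as I remember the server has to be restarted for the new value to take effect):

ldapmodify -H ldap://localhost:389 -x -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-aclpb-max-selected-acls
nsslapd-aclpb-max-selected-acls: 2000
EOF
# then restart the instance, e.g.:
systemctl restart dirsrv@<instance-name>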

Are there any best practices for troubleshooting ACI issues (like where some 
work on one server but not another)? Am I missing a log file somewhere?
Is there any documentation to consult specific to ACI operation? (Beyond 
syntax…) Source code even?
To really answer this and help you we need to know:

* What distro you are running
* What version of 389-ds (`rpm -qa | grep -i 389` for example)
* How many ACIs you have in your database (ldapsearch -H ldaps://... -x -b 
'your dn' -D 'cn=Directory Manager' -W '(aci=*)' aci; see the count sketch 
after this list). Please confirm this on all servers in the replication topology.
* An example of an ACI that fails on one server but works on the other, and 
sample entries showing what it is trying to access or achieve
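To get a per-server count, you could pipe that search through grep, something like the following (a sketch only; the hostname, base DN, and credentials are placeholders to adjust for your topology):

# Count ACI values on one server; repeat against each replica and compare the numbers
ldapsearch -H ldaps://server1.example.com -x -b 'your dn' \
    -D 'cn=Directory Manager' -W '(aci=*)' aci | grep -c '^aci:'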

Thanks,

Thanks,
Anthony Winstanley
The University of British Columbia
--
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs

--

389 Directory Server Development Team
_______________________________________________
389-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/[email protected]
