Something to note: I just triggered this bug on a replica I built today, so this appears to be a new bug.

I have 2 pretty much identical consumer-only replicas. The bug triggers 56 times on the new one and not at all on the old one. The replica is a consumer only, not a hub. I used the same LDIF file (for the db and suffix) to build about 9 of these boxes, and this healthcheck failure happens only on the new ones.

Old box version:

1.4.3.8-6 (389-ds-base, python3-lib389, 389-ds-base-libs)

New box version:

1.4.3.22

Tried to downgrade the box.. bah, can't start 389 after the downgrade, lol. Rebuilding.

Downgraded to 1.4.3.8-7 (I don't see any in-between versions from 1.4.3.8-7 to 1.4.3.22).. OK, and downgrading helped! dsctl healthcheck works!
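For reference, the reproduce-and-downgrade steps amounted to roughly the following. This is a sketch: the instance name is hypothetical, and the exact package NVRs will vary by distro/repo.

```shell
# Run the healthcheck on the consumer-only replica
# (replace "consumer1" with your actual instance name)
dsctl consumer1 healthcheck

# On the new box (1.4.3.22) this reported the error 56 times.
# Downgrading to the last earlier build I could find made it pass:
systemctl stop dirsrv@consumer1
dnf downgrade 389-ds-base-1.4.3.8-7 389-ds-base-libs-1.4.3.8-7 \
    python3-lib389-1.4.3.8-7
systemctl start dirsrv@consumer1

# Re-run the healthcheck to confirm
dsctl consumer1 healthcheck
```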

Hope this helps,

Gary



On 4/15/21 2:39 PM, Mark Reynolds wrote:

On 4/15/21 4:23 PM, Gary Waters wrote:

These entries look fine.  I'm assuming you are running this on a hub or consumer, is that correct? Does it work correctly on the supplier replica?  I think the "nsslapd-state=referral on update" might be tripping up the healthcheck.

Yes, I am using this as a hub. The same LDIF I use to create the suffix I also use for the suppliers and consumers, and they work fine (and dsctl healthcheck says they are OK). The nsslapd-state setting was set by the dsconf command I sent before. I checked a production hub I have (which this one will eventually replace), and that is the correct setting.

Perhaps this is an issue with dsctl's healthcheck then.

There is definitely a bug; I was just trying to narrow it down. I'll try to look into this tomorrow...


-Gary


_______________________________________________
389-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/[email protected]
Do not reply to spam on the list, report it: https://pagure.io/fedora-infrastructure
