[389-users] Re: Directory Administrators vs. Password Administrators

2024-03-18 Thread Thierry Bordaz

Hi,

I assume your question is about the privileges of 'Directory Manager' vs 
'Password Administrators'.


They are both allowed to bypass the password policy (global or local) 
and set any value they want. While 'Directory Manager' does not need a 
specific ACI, administrators belonging to the 'passwordAdminDN' group do 
need ACIs granting read/write access on password attributes [1]


[1] https://www.port389.org/docs/389ds/design/password-administrator.html
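
For illustration, an ACI of the kind the design page describes could look roughly like the following; the subtree, attribute and group DN are only examples to adapt, and [1] lists the exact attributes and rights:

dn: ou=people,dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr="userPassword")(version 3.0; acl "Password admins may write userPassword"; allow (write) groupdn="ldap:///cn=Passwd Admins,ou=groups,dc=example,dc=com";)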

best regards
thierry

On 3/16/24 00:04, tda...@arizona.edu wrote:

I see in the docs that you can make a Password Administrators group, like so:

dn: cn=config
changetype: modify
replace: passwordAdminDN
passwordAdminDN: cn=Passwd Admins,ou=groups,dc=example,dc=com

I'm curious though: what privileges does a Directory Administrator have over 
and above one of these Password Administrators?
--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Determining max CSN of running server

2024-03-01 Thread Thierry Bordaz


On 2/29/24 21:31, William Faulk wrote:

Thanks, Pierre and Thierry.

After quite some time of poring over these debug logs, I've found some 
anomalies and they seem like they're matching up with the idea that the 
affected replica isn't updating its own RUV correctly.

The logs show a change being made, and it lists the CSN of the change. The 
first anomalies are here, but they probably aren't terribly significant. The 
CSN includes a timestamp, and the timestamp on this CSN is 11 hours into the 
future from when the change was made and logged. Also, the next part of the CSN 
is supposed to be a serial number for when there are changes made during the 
same second of the timestamp. In the case I was looking at, that serial was 
0xb231. I'm certain that this replica didn't record another 45000 changes in 
that second.


Hi William,

Are you running DS on a VM, a container, or bare hardware?
The fact that the CSN timestamp is some time in the future is not 
frequent, but it can happen. Generated CSNs should always be increasing, so 
the CSN generator adjusts its timestamp based on the CSNs it receives.
What looks weird is the value of the sequence number. Do you have a full 
error log sample where we can see the sequence number moving to such a high 
value (0xb231)?
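
As far as I know (treat the layout as an assumption), a 389-ds CSN string is 8 hex characters of timestamp followed by 4 of sequence number, 4 of replica ID and 4 of sub-sequence number. A small sketch to decode one under that assumption:

#!/usr/bin/env python3
# Rough sketch: decode a 389-ds CSN string, assuming the usual
# timestamp(8) + seqnum(4) + rid(4) + subseq(4) hex layout.
import datetime, sys

csn = sys.argv[1]
ts = int(csn[0:8], 16)        # seconds since the epoch
seq = int(csn[8:12], 16)      # per-second sequence number
rid = int(csn[12:16], 16)     # replica id
subseq = int(csn[16:20], 16)  # sub-sequence number
print("time  :", datetime.datetime.utcfromtimestamp(ts).isoformat() + "Z")
print("seq   : %d (0x%x)" % (seq, seq))
print("rid   :", rid)
print("subseq:", subseq)

That would show both the future timestamp and the 0xb231 sequence number at a glance.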





Then it shows the server committing the change to the changelog. It shows it 
"processing data" for over 16000 other CSNs, and it takes about 25 seconds to 
complete.

It then starts a replication session with the peer and prints out the peer's 
(consumer's) RUV and then its own (supplier's) RUV. The RUV it prints out for 
itself shows the maxCSN for itself with a timestamp from almost 4 months ago. 
It is greater than the maxCSN for itself in the consumer's RUV, though, by a 
little. (The replicagenerations are equal, though.)
IIUC the consumer is currently catching up. Is the RUV on the consumer 
evolving?


It then claims to send 7 changes, all of which are skipped because "empty". It then 
claims that there are "No more updates to send" and releases the consumer and eventually 
closes the connection.
Do you have fractional replication? (i.e. some attributes are excluded from 
replication)


I like the idea that there's a list of pending operations that's blocking RUV updates. Is 
there any way for me to examine this list? That said, I do think it updated its own 
maxCSN in its own RUV by a few hours. The peer I'm looking at does seem to reflect the 
increased maxCSN for the bad replica in the RUV I can see in the "mapping 
tree". I've tried to reproduce this small update, but haven't been able to yet.
Difficult to say. The pending list likely has a different meaning, in my 
understanding.


I also have another replica that seems to be experiencing the same problem, and 
I've restarted it with no improvement in symptoms. It might be different, 
though. It doesn't look like it discarded its changelog.

I definitely don't relish reinitializing from this bad replica, though. I'd 
have to perform a rolling reinitialization throughout our whole environment, 
and it takes ages and a lot of effort.


--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Determining max CSN of running server

2024-03-01 Thread Thierry Bordaz
I think Pierre may be referring to 
http://www.port389.org/docs/389ds/design/csn-pending-lists-and-ruv-update.html


https://pagure.io/389-ds-base/issue/49287

On 2/29/24 23:21, William Faulk wrote:

FYI: There is a list of pending operations to ensure that the RUV is not
updated while an older operation is not yet completed. And I suspect that
you hit a bug about this list. I remember that we fixed something in that
area a few years ago ...

I think I found it, or something closely related.

https://github.com/389ds/389-ds-base/pull/4553
--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue



[389-users] Re: Determining max CSN of running server

2024-02-29 Thread Thierry Bordaz


On 2/29/24 05:12, William Faulk wrote:

Might be worth re-reading

Well, I still don't really know the details of the replication process.

I have deduced that changes originated on a replica seem to prompt that replica 
to start a replication process with its peers, but I don't really know what 
happens then.
Replication is driven by a replication agreement that is woken up when a new 
update gets into the changelog. The new update can be received 
directly from an LDAP client or from replication itself.

There's a comparison of the RUVs of the two replicas, but does the initiating 
system send its RUV to the receiver, or does it go the other way, or do both 
happen?
IIRC only the remote replica sends its RUV. Then the RA receiving the 
RUV compares it with its own RUV to detect the oldest update 
that the remote replica is missing.

Does the comparison prompt the comparing system to send the changes it thinks 
the other system needs, or does it cause the comparing system to request new 
changes from the other?

Yes, the RUV contains the latest received update for each replica.

Maybe none of this really makes much difference, but the lack of technical 
detail around this makes me just question everything.
It makes perfect sense, and it shows you already have a deep understanding of 
the replication process.



It doesn't send a single CSN, the replication compares the RUVs and determines 
the
range of CSNs that are missing from the consumer.

Sure, but notionally any changes that originated on that replica would be 
reflected in the max CSN for itself in the RUV that is used to compare. And at 
least one side is sending its RUV to the other during the replication process.
Yes, the remote replica (named the consumer, IIRC) sends back its RUV in 
response to the request sent by the RA.



It's also not immediate. Between the server accepting a change (add, mod etc), 
the
change is associated to a CSN. But then there may be a delay before the two 
nodes actually
communicate and exchange data.

Sure, but the changes originated on this replica haven't made it to other 
replicas in weeks. This isn't a mere delay in replication.
Usually replication occurs within a few seconds. If changes have not been 
replicated for weeks, then replication is broken and you need to identify 
the reason for the breakage in the replication debug logs on both sides 
(supplier/consumer).



Generally you'd need replication logging (errorloglevel 8192). But it's very 
noisy
and can be hard to read. What you need to see is the ranges that they agree to 
send.

Okay. I've done that and haven't had a chance to pore through them yet.
Quite difficult to read, especially if there are multiple RAs in play. 
You may look at the code in parallel to understand the purpose 
of those messages.
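
As a sketch, the level can be switched on (and back off afterwards; it is very verbose) with something like:

dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 8192

applied with, e.g., ldapmodify -D "cn=Directory Manager" -W -f repl-debug.ldif (the file name is just an example), and reverted by setting the value back to 0.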



Also remember CSN's are a monotonic lamport clock. This means they only ever 
advance
and can never step backwards. So they have some different properties to what 
you may
expect. If they ever go backwards I think the replication handler throws a 
pretty nasty
error.

I don't think it's going backwards. What I'm trying to rule out is that the 
replica is failing to advance its max CSN in the RUV being used to compare.
Comparison of RUVs: you need to dump the RUV on both servers 
(consumer/supplier) and then compare the maxCSN per replica. Replication 
will start from the CSN that is the smallest of the maxCSNs, so a maxCSN 
may not move until all the others are in sync.
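
As a sketch (the filter below is the one commonly used to read the RUV tombstone; adjust URL, suffix and credentials):

ldapsearch -o ldif-wrap=no -x -H ldap://supplier.example.com \
    -D "cn=Directory Manager" -W -b "dc=example,dc=com" \
    "(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))" nsds50ruv

Running the same search against the consumer and comparing the maxcsn per replica ID is the comparison described above.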



I *think* so. It's been a while since I had to look. The nsds50ruv shows the 
ruv of
the server, and I think the other replica entries are "what the peers ruv was 
last
time".

Well, it's at least nice to hear that my guess at least isn't asinine. :)


replication monitoring code in newer versions does this for you, so I'd probably
advise you attempt to upgrade your environment. 1.3 is really old at this point

I've been trying to get the current environment stable enough that I feel 
comfortable going through the relatively lengthy upgrade process. I think I'm 
going to have to adjust my comfort level.


I'm not sure if even RH or SUSE still support that version anymore).

RedHat does, as it's what's in RHEL7.9, which is supported for another, uh, 4 
months. They're working on this with me. I'm still just trying to understand 
the system better so that I can try to be productive while I'm waiting on them 
to come up with ideas.


The problem here is that to read the RUV's and then compare them, you need to 
read
each RUV from each server and then check if they are advancing (not that they 
are equal).

The problem is that the changes in my environment are few enough that all the 
replicas' RUVs _are_ equal the majority of the time. I'm not in front of that 
system as I respond right now, so my details might be wrong, but I'm asking 
about all of this because every RUV I see in all of the replicas is the same, 
and it shows a max CSN for this one replica that's much older than the CSNs I 
see it reference in the logs about changes 

[389-users] Re: 389 DS 2.3.6 on RHEL 9 replication over TLS

2024-01-26 Thread Thierry Bordaz
You may follow the documentation to configure TLS on both of your suppliers [1] and 
check the trusted CA on both sides [2]. On the troubleshooting side you may 
look at [3].


[1] 
https://access.redhat.com/documentation/en-us/red_hat_directory_server/12/html/securing_red_hat_directory_server/assembly_enabling-tls-encrypted-connections-to-directory-server_securing-rhds#doc-wrapper
[2] 
https://access.redhat.com/documentation/en-us/red_hat_directory_server/12/html/securing_red_hat_directory_server/assembly_changing-the-ca-trust-flagssecuring-rhds#doc-wrapper
[3] 
https://access.redhat.com/documentation/en-us/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_troubleshooting-replication-related-problems_configuring-and-managing-replication
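
As a quick sketch for [2], the trust flags in each instance's NSS database can be listed with certutil (adjust the instance path):

# certutil -L -d /etc/dirsrv/slapd-INSTANCE

The CA that signed the peer's server certificate should appear with "C" (or "CT") trust flags on both the supplier and the consumer.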



On 1/25/24 23:55, Ciara Gadsden wrote:

Idk how to do that lol

Sent from my Boost Samsung Galaxy A23 5G
Get Outlook for Android 

*From:* Simon Pichugin 
*Sent:* Thursday, January 25, 2024 5:44:57 PM
*To:* General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>

*Cc:* alexander_nazare...@harvard.edu 
*Subject:* [389-users] Re: 389 DS 2.3.6 on RHEL 9 replication over TLS
Hello Alex,
I think we need a bit more information here.

First of all, could you please run the "dsconf repl-agmt create" command 
(the LDAPS one) with the "-v" flag? It will give detailed verbose output.
Also, I recommend checking the server's error and access logs for more 
information about why it fails (additionally, you may enable at least 
16384+8192 (Default + Replication debugging) in nsslapd-errorlog-level).


As for possible issues and actions, at first glance, I recommend 
checking that the TLS certificates used are correctly installed and 
trusted on both the supplier and consumer instances. It's important 
that the instances trust each other; even though ldapsearch (OpenLDAP 
client) on the supplier already trusts the consumer machine, it's not 
enough.
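
For example (a sketch only, assuming the consumer's CA is available as a PEM file), the CA can be imported into the supplier instance's own NSS database so that the server itself, not just the OpenLDAP client, trusts it:

# certutil -A -d /etc/dirsrv/slapd-INSTANCE -n "consumer-CA" -t "CT,," -a -i /path/to/consumer-ca.pem
# certutil -L -d /etc/dirsrv/slapd-INSTANCE    (verify the trust flags, then restart the instance)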


Sincerely,
Simon



On Thu, Jan 25, 2024 at 1:07 PM Nazarenko, Alexander 
 wrote:


Hello colleagues,

Lately we started looking into 389 DS 2.3.6 on RHEL 9 platform.

We followed the instructions in "Configuring and managing replication" on the
Red Hat site to establish replication between two remote instances.

The instances were previously configured to support a TLS channel
on port 636 ("Enabling TLS-encrypted connections to Directory Server"),
and we made sure ldapsearch works with the LDAPS:// protocol with
certificate verification (TLS_REQCERT demand).

The following issue with the replication over TLS was observed:

After we ran the command below to configure secure replication:

dsconf -D "cn=Directory Manager" -w *** ldaps://server.example.edu repl-agmt create \
    --suffix "dc=example,dc=com" --host "consumer.example.edu" --port 636 \
    --conn-protocol=LDAPS --bind-dn "cn=replication manager,cn=config" \
    --bind-passwd "***" --bind-method=SIMPLE --init consumer.example.edu-RO

the error occurred:

Error (-1) Problem connecting to replica - LDAP error: Can't
contact LDAP server (connection error)

We double-checked that, after configuring clear-text replication
with the command:

dsconf -D "cn=Directory Manager" -w *** ldaps://server.example.edu repl-agmt create \
    --suffix "dc=example,dc=com" --host "consumer.example.edu" --port 389 \
    --conn-protocol=LDAP --bind-dn "cn=replication manager,cn=config" \
    --bind-passwd "***" --bind-method=SIMPLE --init 10.140.133.36-RO

no problem occurred, and the replication completed successfully.

My question is whether this means that replication over TLS
requires different config steps, and if so, what they are?

Thank you,

- Alex

--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives:

https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it:
https://pagure.io/fedora-infrastructure/new_issue



[389-users] Re: 389-ds-base name log pipe problems

2023-12-07 Thread Thierry Bordaz

Hi,

It would be helpful to have some details about how you configured the log pipe 
and ran the tests. I wonder if it could be related to 
https://github.com/389ds/389-ds-base/issues/198.
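
(For anyone following the thread: a typical named-pipe setup, as a rough sketch with placeholder paths, looks like the following; whether it matches the reporter's configuration is exactly the open question.)

# create the pipe:
mkfifo /var/log/dirsrv/slapd-INSTANCE/access.pipe

# point the access log at it (ldapmodify against cn=config):
dn: cn=config
changetype: modify
replace: nsslapd-accesslog
nsslapd-accesslog: /var/log/dirsrv/slapd-INSTANCE/access.pipe

Something must then keep reading the pipe (389-ds ships a ds-logpipe.py helper for that); if the reader stalls, writes to a FIFO can block, which is one way a log-to-pipe setup can end up stalling the server.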


regards
thierry

On 12/7/23 09:06, Nyquist wrote:

Hello

  


We are running 389-ds-base 1.3.10.2 on 18 servers.

Recently, one server (A) was set up to log to a pipe.

  


Afterwards, a phenomenon occurred where all server connections were occupied by an 
external system.

The external system then stopped.

  


After this incident, the servers returned to normal condition, but only 
server A had intermittent port 389 inaccessibility and delays.

After this, we determined that the problem on server A was due to logging to 
the pipe.

After changing the pipe to a file, I restarted the server.

After that the problem went away.

  


We want to know the exact cause of the delay and inaccessibility that occurred 
on server A.

How can logging to a pipe affect directory server port 389 access?

Could it be a file-descriptor-related issue? Why were the other 17 servers able 
to recover to normal?

  


Thanks
--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue



[389-users] Re: Documentation as to how replication works

2023-11-16 Thread Thierry Bordaz


On 11/16/23 02:50, John Apple II wrote:

Hi, William,

  I am working on figuring out how to do some basic monitoring of 
IdM replication with a non-Directory-Manager service account for some 
internal work I do where we use IdM, and I'm trying to figure out how to 
create a service account that will allow some basic monitoring of LDAP 
replication between the IdM nodes (hopefully similar to cipa?).
FYI, on the DS side we are prototyping a new monitoring mechanism, as 
monitoring replication is a long-pending need and the current mechanisms 
have some drawbacks (complexity, or false negatives/positives).

I've been looking for information all over the web (including this 
list) for about a month now. If you've made any progress on 
something similar, I'd be interested in collaborating. I've come up with 
a basic LDIF and some test Python code to validate the ACIs for the 
service account, but nothing else, as it took me 5 days just to figure 
out how to write ACIs.


In case it can help anyone in the future, my current LDIF follows - 
the goal is to individually pull each server's LDAP entries directly 
(as a start) and then compare them, but it allows the service-account 
to access the replication data in the directory as well as the 
sysaccounts directory itself.



SUFFIX="dc=domain,dc=example,dc=com"
ldif follows:

dn: uid=replmonitor,cn=sysaccounts,cn=etc,SUFFIX
changetype: add
objectclass: account
objectclass: simplesecurityobject
uid: replmonitor
userPassword: NOTAREALPASSWORD
passwordExpirationTime: 20381231235959Z
nsIdleTimeout: 0

dn: cn=sysaccounts,cn=etc,SUFFIX
changetype: modify
add: aci
aci: (targetattr != "userPassword || krbPrincipalKey || 
sambaLMPassword || sambaNTPassword || passwordHistory || krbMKey || 
krbPrincipalName || krbCanonicalName || krbPwdHistory || 
krbLastPwdChange || krbExtraData || krbLastSuccessfulAuth || 
krbLastFailedAuth || ipaUniqueId || memberOf || enrolledBy || 
ipaNTHash || ipaProtectedOperation || aci || member") (version 3.0; 
acl "allow (compare,read,search) of sysaccounts by replmonitor"; 
allow(search,read,compare) userdn = 
"ldap:///uid=replmonitor,cn=sysaccounts,cn=etc,SUFFIX";)


dn: cn=config
changetype: modify
add: aci
aci: (targetattr != "userPassword || krbPrincipalKey || 
sambaLMPassword || sambaNTPassword || passwordHistory || krbMKey || 
krbPrincipalName ||  krbCanonicalName || krbPwdHistory || 
krbLastPwdChange || krbExtraData || krbLastSuccessfulAuth || 
krbLastFailedAuth || ipaUniqueId || memberOf || enrolledBy || 
ipaNTHash || ipaProtectedOperation || aci || member") (version 3.0; 
acl "allow (compare,read,search) of cn=config by replmonitor"; 
allow(search,read,compare) userdn = 
"ldap:///uid=replmonitor,cn=sysaccounts,cn=etc,SUFFIX";)

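
(Not part of the original mail, but as a rough sketch of the "pull and compare" step, assuming python-ldap is installed and that the ACIs above give the service account read access to the replication data; servers, suffix and password are placeholders:)

#!/usr/bin/env python3
# Rough sketch: fetch nsds50ruv from several servers with the replmonitor
# account and print them side by side for comparison.
import ldap

SERVERS = ["ldaps://idm1.example.com", "ldaps://idm2.example.com"]
SUFFIX = "dc=domain,dc=example,dc=com"
BIND_DN = "uid=replmonitor,cn=sysaccounts,cn=etc," + SUFFIX
BIND_PW = "NOTAREALPASSWORD"
RUV_FILTER = ("(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)"
              "(objectclass=nstombstone))")

def fetch_ruv(uri):
    conn = ldap.initialize(uri)
    conn.simple_bind_s(BIND_DN, BIND_PW)
    try:
        res = conn.search_s(SUFFIX, ldap.SCOPE_SUBTREE, RUV_FILTER, ["nsds50ruv"])
    finally:
        conn.unbind_s()
    return [v.decode() for _dn, attrs in res for v in attrs.get("nsds50ruv", [])]

for uri in SERVERS:
    print("==", uri)
    for line in fetch_ruv(uri):
        print("   ", line)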



John Apple II

On 16/11/23 03:59, William Faulk wrote:
I am running a RedHat IdM environment and am having regular problems 
with missed replications. I want to understand how it's supposed to 
work better so that I can make reasonable hypotheses to test, but I 
cannot seem to find any in-depth documentation for it. Every time I 
think I start to piece together an understanding, experimentation 
makes it fall apart. Can someone either point me to some 
documentation or help me understand how it works?


In particular, IdM implements multimaster replication, and I'm 
initially trying to understand how changes are replicated in that 
environment. What I think I understand is that changes beget CSNs, 
which are comprised of a timestamp and a replica ID, and some sort of 
comparison is made between the most recent CSNs in order to determine 
what changes need to be sent to the remote side. Does each replica 
keep a list of CSNs that have been sent to each other replica? Just 
the replicas that it peers with? Can I see this data? (I thought it 
might be in the nsds5replicationagreement entries, but the nsds50ruv 
values there don't seem to change.) But it feels like it doesn't keep 
that data, because then what would be the point of comparing the CSN 
values be? Anyway, these are the types of questions I'm looking to 
understand. Can anyone help, please?



___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Documentation as to how replication works

2023-11-15 Thread Thierry Bordaz

Hi,

The explanation below looks excellent to me. You may also have a look at 
https://access.redhat.com/documentation/en-us/red_hat_directory_server/11/html/deployment_guide/designing_the_replication_process#doc-wrapper


Regarding the initial concern, "having regular problems with missed 
replications": a key element is that replication is not synchronous; an 
update is not synced immediately to all replicas. An LDAP client requests an 
update on one replica (the original replica), which will propagate the 
update to the other replicas (and they in turn can propagate it to the next 
replica, in "hops"). So there may be a delay (replication lag) between the 
original update and the time the last replica receives it. Usually the delay 
is a few seconds, but it depends on many factors.


As you noticed, updates are identified by CSNs that are logged in the 
access log. If you suspect that an update is missing, you need to check 
whether the related CSN is present in the remote replicas' access log files. 
Note that access logs are buffered.
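
As a sketch (instance name and CSNVALUE are placeholders), something like the following on each replica confirms whether a given CSN made it into its access log:

grep 'csn=CSNVALUE' /var/log/dirsrv/slapd-INSTANCE/access*

If the CSN is absent everywhere except the originating replica, the change did not replicate.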


best regards
thierry

On 11/15/23 18:12, David Boreham wrote:
I'm not sure about doc, but the basic idea iirc is that a vector 
clock[1] (called replica update vector) is constructed from the 
sequence numbers from each node. Therefore it isn't necessary to keep 
track of a list of CSNs, only compare them to determine if another 
node is caught up with, or behind the state for the sending node. 
Using this scheme, each node connects to each other and by asking the 
other node for its current ruv can determine which if any of the 
changes it has need to be propagated to the peer. These are sent as 
(almost) regular LDAP operations: add, modify, delete. The consumer 
server then decides how to process each operation such that 
consistency is preserved (all nodes converge to the same state). e.g. 
it might skip an update because the current state for the entry is 
ahead of the update. It's what nowadays would be called a CRDT scheme, 
but that term didn't exist when the DS was developed.


[1] https://en.wikipedia.org/wiki/Vector_clock

On Wed, Nov 15, 2023, at 9:59 AM, William Faulk wrote:
I am running a RedHat IdM environment and am having regular problems 
with missed replications. I want to understand how it's supposed to 
work better so that I can make reasonable hypotheses to test, but I 
cannot seem to find any in-depth documentation for it. Every time I 
think I start to piece together an understanding, experimentation 
makes it fall apart. Can someone either point me to some 
documentation or help me understand how it works?


In particular, IdM implements multimaster replication, and I'm 
initially trying to understand how changes are replicated in that 
environment. What I think I understand is that changes beget CSNs, 
which are comprised of a timestamp and a replica ID, and some sort of 
comparison is made between the most recent CSNs in order to determine 
what changes need to be sent to the remote side. Does each replica 
keep a list of CSNs that have been sent to each other replica? Just 
the replicas that it peers with? Can I see this data? (I thought it 
might be in the nsds5replicationagreement entries, but the nsds50ruv 
values there don't seem to change.) But it feels like it doesn't keep 
that data, because then what would be the point of comparing the CSN 
values be? Anyway, these are the types of questions I'm looking to 
understand. Can anyone help, please?


--
William Faulk
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue





___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue

[389-users] Re: Allow User to Change Expired Password

2023-11-10 Thread Thierry Bordaz


On 11/8/23 15:55, Aaron Enders wrote:

Hello,

Question: Is there a way to allow users to change their password if the 
password has already expired?

I've been fighting this issue for months now and haven't found a resolution. My 
users are able to change their password if it is not expired; however, once it 
has expired, even in the grace login period, they are unable to change it 
because anonymous binds are not allowed. Is there an ACI that would apply here? 
My problem is that I use a VPN solution which only alerts the users that the 
password is expiring; it does not give them a way to change it.


Hi

I am surprised, because during the grace period the user should be able to 
successfully bind and change his password. Being authenticated, the ACI 
should allow the user to change his own password.
In the access logs you may check whether the BIND is successful or not. If it 
is not, then I would suspect the grace period is over, or a bug.
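
For completeness, the classic self-write ACI looks roughly like this (a sketch only; the subtree is an example):

dn: ou=people,dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr="userPassword")(version 3.0; acl "Allow users to change their own password"; allow (write) userdn="ldap:///self";)

Note it only applies once the BIND has succeeded, which is what the access log check above should confirm.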


best regards
Thierry



Thanks
Aaron
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue



[389-users] Re: err=19 in a BIND operation

2023-10-05 Thread Thierry Bordaz


On 10/5/23 14:58, Ciber Center wrote:

Hi team,

I'm getting a result of err=19 in a BIND operation. Does anyone know why this can 
happen?

this is the connection trace

conn=2894185 fd=205 slot=205 connection from client_ip to server_ip
conn=2894185 op=0 BIND dn="uid=user1,o=applications,o=school,c=es" method=128 
version=3
conn=2894185 op=0 RESULT err=19 tag=97 nentries=0 etime=0.000494384
conn=2894185 op=1 UNBIND
conn=2894185 op=1 fd=205 closed - U1

I understood that error code 19 occurs only in MOD operations, is that correct?


I agree, err=19 (LDAP_CONSTRAINT_VIOLATION) is likely the consequence of 
an internal MOD during a BIND. I would guess password policy or account 
policy.


You may enable internal operation logging (core and plugins) with

**

dn: cn=config
changetype: modify
replace: nsslapd-plugin-logging
nsslapd-plugin-logging: on
-
replace: nsslapd-accesslog-level
nsslapd-accesslog-level: 260
-
replace: nsslapd-auditlog-logging-enabled
nsslapd-auditlog-logging-enabled: on
-
replace: nsslapd-auditfaillog-logging-enabled
nsslapd-auditfaillog-logging-enabled: on

**
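
(Applied, as a sketch, with something like "ldapmodify -D 'cn=Directory Manager' -W -f internal-op-logging.ldif"; the file name is just an example. Remember to revert nsslapd-accesslog-level afterwards, as internal operation logging is verbose.)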



Thanks in advance.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Setting "lock" time of an account in the future

2023-10-03 Thread Thierry Bordaz


On 10/3/23 09:34, Cenk Y. wrote:

Thanks Mark, Thierry,

I've looked quite a bit into the account policy plugin. It allows locking an 
account after an inactivity limit, but from my understanding, it 
doesn't offer a way to lock it at a pre-configured future time without 
inactivity.



Not only inactivity but also account expiration (based on createTimestamp). 
https://access.redhat.com/documentation/en-us/red_hat_directory_server/11/html/administration_guide/account-policy-plugin#account-policy-plugin-config
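
As a rough sketch of the configuration described there (attribute names as in the linked documentation; double-check them for your version), the plugin's shared config entry would carry something like:

dn: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config
changetype: modify
replace: alwaysrecordlogin
alwaysrecordlogin: yes
-
replace: stateattrname
stateattrname: lastLoginTime
-
replace: altstateattrname
altstateattrname: createTimestamp
-
replace: specattrname
specattrname: acctPolicySubentry
-
replace: limitattrname
limitattrname: accountInactivityLimit

with accountInactivityLimit (in seconds) set in a policy subentry, so accounts that never log in expire that many seconds after creation.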


regards
thierry



I think this would be a useful feature. I may open an RFE.

Cheers
Cenk

On Tue, Oct 3, 2023 at 8:55 AM Thierry Bordaz  wrote:


On 10/3/23 01:11, Mark Reynolds wrote:



On 10/2/23 4:13 AM, Cenk Y. wrote:

Hi Mark, thanks for the response.

We already use password lockout plugin, but what I need is the
opposite.

I want to
* Create an account, activate it
* Set an expiration date, so that after that date account is locked.



Hi Cenk,

I agree with Mark: password-based expiration is likely not what you
are looking for (because of reset).

Before opening an RFE, you may check whether the account policy plugin
matches your need

https://access.redhat.com/documentation/en-us/red_hat_directory_server/11/html/administration_guide/account-policy-plugin

best regards
thierry


Yeah, there is no way to "lock" an account that way. You can set
the password to expire, but it's not the same thing, and a password
reset will bump that expiration time anyway.

Please file an RFE for this feature, but it could take some time
until it's implemented.

https://github.com/389ds/389-ds-base/issues/new

Thanks,
Mark



Cheers
Cenk

On Fri, Sep 29, 2023 at 9:50 PM Mark Reynolds
 wrote:

Actually, I was wrong there is more you need to do.

You need to enable account lockout and set a max failure count:

# dsconf slapd-INSTANCE config set passwordLockout=on
passwordMaxFailure=3

Then set in each user entry:

 passwordRetryCount: 3  --> number equal to
passwordMaxFailure

 retryCountResetTime: 20230929193912Z   --> you must
calculate this
value (and use it for these two attributes)

 accountUnlockTime: 20230929193912Z


That works for me.

HTH,

Mark


On 9/29/23 11:40 AM, Cenk Y. wrote:
> Hello,
>
> We are running 389-ds-base.2.2.7 .
>
> While creating accounts, sometimes we know until when they
need to be
> active. Is there a way to manually set a "expiration date"
for the
> account, so after that date nsAccount is set to true?
>
> Having gone through rhds and 389-ds pages, it seems it's
only possible
> to create a policy to deactivate accounts after an
inactivity limit.
>
> I can always create a mechanism myself (such as adding a
new attribute
> and checking it by a cron job ...) , but I want to see if
there is a
> native way to do this?
>
> Thanks
> Cenk
>
> ___
> 389-users mailing list -- 389-users@lists.fedoraproject.org
> To unsubscribe send an email to
389-users-le...@lists.fedoraproject.org
> Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives:

https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
> Do not reply to spam, report it:
https://pagure.io/fedora-infrastructure/new_issue

-- 
Directory Server Development Team


-- 
Directory Server Development Team


___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue

[389-users] Re: Setting "lock" time of an account in the future

2023-10-03 Thread Thierry Bordaz


On 10/3/23 01:11, Mark Reynolds wrote:



On 10/2/23 4:13 AM, Cenk Y. wrote:

Hi Mark, thanks for the response.

We already use password lockout plugin, but what I need is the opposite.

I want to
* Create an account, activate it
* Set an expiration date, so that after that date account is locked.



Hi Cenk,

I agree with Mark: password-based expiration is likely not what you are 
looking for (because of reset).


Before opening an RFE, you may check whether the account policy plugin 
matches your need 
https://access.redhat.com/documentation/en-us/red_hat_directory_server/11/html/administration_guide/account-policy-plugin


best regards
thierry

Yeah, there is no way to "lock" an account that way. You can set the 
password to expire, but it's not the same thing, and a password reset 
will bump that expiration time anyway.


Please file an RFE for this feature, but it could take some time until 
it's implemented.


https://github.com/389ds/389-ds-base/issues/new

Thanks,
Mark



Cheers
Cenk

On Fri, Sep 29, 2023 at 9:50 PM Mark Reynolds  
wrote:


Actually, I was wrong there is more you need to do.

You need to enable account lockout and set a max failure count:

# dsconf slapd-INSTANCE config set passwordLockout=on
passwordMaxFailure=3

Then set in each user entry:

 passwordRetryCount: 3  --> number equal to passwordMaxFailure

 retryCountResetTime: 20230929193912Z   --> you must
calculate this
value (and use it for these two attributes)

 accountUnlockTime: 20230929193912Z


That works for me.
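
(As a sketch, the per-user part could be an LDIF like the following, with a placeholder user DN and the timestamps computed for the desired lock time:)

dn: uid=someuser,ou=people,dc=example,dc=com
changetype: modify
replace: passwordRetryCount
passwordRetryCount: 3
-
replace: retryCountResetTime
retryCountResetTime: 20230929193912Z
-
replace: accountUnlockTime
accountUnlockTime: 20230929193912Z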

HTH,

Mark


On 9/29/23 11:40 AM, Cenk Y. wrote:
> Hello,
>
> We are running 389-ds-base.2.2.7 .
>
> While creating accounts, sometimes we know until when they need
to be
> active. Is there a way to manually set a "expiration date" for the
> account, so after that date nsAccount is set to true?
>
> Having gone through rhds and 389-ds pages, it seems it's only
possible
> to create a policy to deactivate accounts after an inactivity
limit.
>
> I can always create a mechanism myself (such as adding a new
attribute
> and checking it by a cron job ...) , but I want to see if there
is a
> native way to do this?
>
> Thanks
> Cenk
>
> ___
> 389-users mailing list -- 389-users@lists.fedoraproject.org
> To unsubscribe send an email to
389-users-le...@lists.fedoraproject.org
> Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives:

https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
> Do not reply to spam, report it:
https://pagure.io/fedora-infrastructure/new_issue

-- 
Directory Server Development Team



--
Directory Server Development Team

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Migration: importing an OU to a new instance

2023-09-15 Thread Thierry Bordaz


On 9/13/23 19:57, tda...@arizona.edu wrote:

Thanks for the quick reply. My issue is this:

Server A has two OUs, call them ou=A and ou=B. Server B has two OUs, ou=A 
(empty) and ou=C. I want to copy the data from ou=A on server A to ou=A on 
server B. There are no ou=B entries in the export file from server A and for 
the import task I add to server B, I set this attribute:
nsExcludeSuffix: ou=B

When this task runs, it populates ou=A on server B but also completely deletes 
ou=C. Is there any way around this?
Apart from Mark's suggestion, the only option I can think of would be to 
create a dedicated backend/suffix for ou=A, ou=B and ou=C.


If you plan to do frequent exports/imports, or even to set up replication, 
that could be an option.


regards
thierry



Thanks for the additional info on the other question, I guess my problem is 
that I don't understand the significance of entry USNs at all in 389 server, so 
I'm not sure how to deal with them in general and especially when it comes to 
instance migration.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue



[389-users] Re: Migration: importing an OU to a new instance

2023-09-13 Thread Thierry Bordaz


On 9/13/23 18:44, tda...@arizona.edu wrote:

I've read this doc:
https://access.redhat.com/documentation/en-us/red_hat_directory_server/12/html/importing_and_exporting_data/importing-data-to-directory-server_importing-and-exporting-data

The export from server A to an LDIF file works and I've done some testing but 
it seems like the import feature always deletes existing OUs on server B that 
aren't in the exported LDIF file. Am I missing something? I'd like to simply 
get an LDIF of all the entries in Server A and populate only that OU in server 
B.
The export works at the suffix (or backend) level: it exports (from db to 
ldif) all the entries of that suffix. The import works in the opposite 
way: it imports all the entries from the ldif. If you want only a subset 
of the entries, you need to edit the LDIF and remove the entries you do not want.
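
Not from the original thread, but as a rough sketch of that editing step (pure Python, treating the LDIF as blank-line-separated records; base64-encoded DNs and other edge cases are not handled):

#!/usr/bin/env python3
# Rough sketch: keep only the entries under a given subtree from an exported LDIF.
import sys

KEEP_SUFFIX = "ou=a,dc=example,dc=com"   # placeholder: subtree to keep (lowercase)

def records(fh):
    buf = []
    for line in fh:
        if line.strip() == "":
            if buf:
                yield buf
                buf = []
        else:
            buf.append(line)
    if buf:
        yield buf

with open(sys.argv[1]) as src:
    for rec in records(src):
        first = rec[0]
        dn = first.split(":", 1)[1].strip().lower() if first.lower().startswith("dn:") else ""
        if dn == KEEP_SUFFIX or dn.endswith("," + KEEP_SUFFIX):
            sys.stdout.write("".join(rec) + "\n")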


Related, this bit is bewildering
Optional: By default, Directory Server sets the entry update sequence numbers 
(USNs) of all imported entries to 0. To set an alternative initial USN value, 
set the nsslapd-entryusn-import-initval parameter. For example, to set USN for 
all imported values to 12345, enter:


You are correct [1]. If the USN plugin is enabled, all the imported entries 
will get the 'entryusn' attribute set to the configured initial value (instead of '0').


[1] 
https://access.redhat.com/documentation/en-us/red_hat_directory_server/11/html/administration_guide/populating_directory_databases#entryusn-initval


best regards
thierry



I don't understand what this means or the consequences of taking the default or 
not. Server B is already in multi-supplier replication with other servers, so I 
worry about screwing that up with any import choices I might make.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue



[389-users] Re: 389-ds freezes with deadlock

2023-09-13 Thread Thierry Bordaz


On 9/13/23 09:57, Julian Kippels wrote:

Hi Thierry,

> First you may install debuginfo it would help to get a better
> understanding what happens.

I will try to do that the next time it breaks. Unfortunately this is a 
production machine and I can't always take the time to do forensics. 
Sometimes I just have to quickly get it up and running again and just 
restart the service completely. I have not yet found a way to trigger 
this in my lab environment.


> Do you know if it recovers after that high CPU peak ?

So far it has never recovered. I have seen the high CPU peak 7 or 8 
times now and it is always like this:

1. CPU usage peaks on 2 threads
ATM I assume it would be a MOD eating CPU while writing back to the 
changelog, plus trickling. You may get pstacks + top -H to confirm this.
2. Admin from external server tells me that his system cannot do LDAP 
operations anymore.
The stack trace shows many updates waiting for the above MOD to complete. In 
extreme cases, the pending MODs may exhaust the worker threads and make the 
server unresponsive.
3. I try to do some ldapmodify operations, which succeed and get 
replicated correctly.
This is surprising; I would expect a MOD on the server where a MOD is busy on 
the changelog to hang as well. Are you doing your update on the same backend?

4. At this point there are 2 options:
  a. Both the admin from the external server and I restart our 
services which temporarily fixes the issue
  b. I don't restart my system and after a few hours (where the CPU 
peak does not go away) dirsrv completely freezes up and does not 
accept any connections anymore.



You may look in the access log for the MOD that started and check which one 
was hanging. Then compare its etime (using the csn) on the other servers.
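
As a sketch (instance name, conn/op numbers and CSNVALUE are placeholders):

# on the busy server: find the hanging MOD, then its RESULT line (csn= and etime= are there)
grep ' MOD dn=' /var/log/dirsrv/slapd-INSTANCE/access | tail
grep 'conn=1234 op=56 RESULT' /var/log/dirsrv/slapd-INSTANCE/access
# on the other servers: look up the same csn and compare the etime
grep 'csn=CSNVALUE' /var/log/dirsrv/slapd-INSTANCE/access*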


Does it always occur on the same server?

regards
thierry



> Regarding the unindexed search, you may check if 'changeNumber' is
> indexed (equality). It looks related to a sync_repl search with no
> cookie or old cookie. The search is on a different backend than Thread
> 62, so there is no conflict between the sync_repl unindexed search and
> update on thread62.

The equality index is set for changeNumber. I will assume that this is 
a different "problem" that has nothing to do with the high CPU load and 
freezes, and not look further into it for the time being.


Kind regards
Julian

On 12.09.23 at 14:21, Thierry Bordaz wrote:

Hi Julian,

Difficult to say. I do not recall specific issue but I know we fixed 
several bugs in sync_repl.


First you may install debuginfo it would help to get a better 
understanding what happens.


The two threads are likely Thread 62 and trickle thread (2 to 6) 
because of intensive db page update.

Do you know if it recovers after that high CPU peak ?
A possibility would be a large update to write back to the changelog. 
You may retrieve the problematic csn in access log (during high cpu) 
and dump the update from the changelog with dbscan (-k).


Regarding the unindexed search, you may check if 'changeNumber' is 
indexed (equality). It looks related to a sync_repl search with no 
cookie or old cookie. The search is on a different backend than 
Thread 62, so there is no conflict between the sync_repl unindexed 
search and update on thread62.


best regards
thierry

On 9/12/23 13:52, Julian Kippels wrote:

Hi,

there are two threads that are at 100% CPU utilisation. I did not 
start any admin task myself, maybe it is some built-in task that is 
doing this? Or could an unindexed search on the changelog be causing 
this?


I have noticed this message:
NOTICE - ldbm_back_search - Unindexed search: search 
base="cn=changelog" scope=1 filter="(changeNumber>=1)" conn=35871 op=1


There is an external server that is reading the changelog and 
syncing some stuff depending on that. I don't know why they are 
starting at changeNumber>=1, they probably should start way higher. 
If it is possible that this is the cause I will kick them to stop 
that ;)


I am running version 2.3.1 on Debian 12, installed from the Debian 
repositories.


Kind regards
Julian

On 08.09.23 at 13:23, Thierry Bordaz wrote:

Hi Julian,

It looks that an update (Thread 62) is either eating CPU either is 
blocked while update the changelog.
When it occurs could you run 'top -H -p ' to see if some 
thread are eating CPU.
Else (no cpu consumption), you may take a pstack and dump DB lock 
info (db_stat -N -C A -h /var/lib/dirsrv/db)


Did you run admin task (import/export/index...) before it occurred ?
What version are you running ?

best regards
Thierry

On 9/8/23 09:28, Julian Kippels wrote:

Hi,

it happened again and now I ran the gdb-command like Mark 
suggested. The Stacktrace is attached. Again I got this error 
message:


[07/Sep/2023:15:22:43.410333038 +0200] - ERR - ldbm_back_seq - 
deadlock retry BAD 1601, err=0 Unexpected dbimpl error code


and the remote program that called also stopped working at that time.

Thanks
Julian Kippels

Am 28

[389-users] Re: 389-ds freezes with deadlock

2023-09-12 Thread Thierry Bordaz

Hi Julian,

Difficult to say. I do not recall a specific issue, but I know we fixed 
several bugs in sync_repl.


First, you may install debuginfo; it would help to get a better 
understanding of what happens.


The two threads are likely Thread 62 and a trickle thread (2 to 6), because 
of intensive db page updates.

Do you know if it recovers after that high CPU peak?
A possibility would be a large update being written back to the changelog. 
You may retrieve the problematic csn in the access log (during the high CPU) and 
dump the update from the changelog with dbscan (-k).


Regarding the unindexed search, you may check that 'changeNumber' is 
indexed (equality). It looks related to a sync_repl search with no 
cookie or an old cookie. The search is on a different backend than Thread 
62, so there is no conflict between the sync_repl unindexed search and the 
update on thread 62.


best regards
thierry

On 9/12/23 13:52, Julian Kippels wrote:

Hi,

there are two threads that are at 100% CPU utilisation. I did not 
start any admin task myself, maybe it is some built-in task that is 
doing this? Or could an unindexed search on the changelog be causing 
this?


I have noticed this message:
NOTICE - ldbm_back_search - Unindexed search: search 
base="cn=changelog" scope=1 filter="(changeNumber>=1)" conn=35871 op=1


There is an external server that is reading the changelog and syncing 
some stuff depending on that. I don't know why they are starting at 
changeNumber>=1, they probably should start way higher. If it is 
possible that this is the cause I will kick them to stop that ;)


I am running version 2.3.1 on Debian 12, installed from the Debian 
repositories.


Kind regards
Julian

On 08.09.23 at 13:23, Thierry Bordaz wrote:

Hi Julian,

It looks that an update (Thread 62) is either eating CPU either is 
blocked while update the changelog.
When it occurs could you run 'top -H -p ' to see if some thread 
are eating CPU.
Else (no cpu consumption), you may take a pstack and dump DB lock 
info (db_stat -N -C A -h /var/lib/dirsrv/db)


Did you run admin task (import/export/index...) before it occurred ?
What version are you running ?

best regards
Thierry

On 9/8/23 09:28, Julian Kippels wrote:

Hi,

it happened again and now I ran the gdb-command like Mark suggested. 
The Stacktrace is attached. Again I got this error message:


[07/Sep/2023:15:22:43.410333038 +0200] - ERR - ldbm_back_seq - 
deadlock retry BAD 1601, err=0 Unexpected dbimpl error code


and the remote program that called also stopped working at that time.

Thanks
Julian Kippels

On 28.08.23 at 14:28, Thierry Bordaz wrote:

Hi Julian,

I agree with Mark's suggestion. If new connections are failing, a 
pstack plus the logged error message would be helpful.


Regarding the error logged: the LDAP server relies on a database that, 
under pressure from multiple threads, may end up in a db_lock 
deadlock. In such a situation the DB selects one deadlocking thread and 
returns a DB_DEADLOCK error to that thread, while the other threads 
continue to proceed. This is a very normal error that is caught by 
the server, which simply retries the DB access. If the same thread 
fails too many times, it stops retrying and returns a fatal error to the 
request.


In your case it reports code 1601, which is a transient deadlock with 
retry. So the impacted request just retried and likely succeeded.


best regards
thierry

On 8/24/23 14:46, Mark Reynolds wrote:

Hi Julian,

It would be helpful to get a pstack/stacktrace so we can see where 
DS is stuck:


https://www.port389.org/docs/389ds/FAQ/faq.html#sts=Debugging%C2%A0Hangs 



Thanks,
Mark

On 8/24/23 4:13 AM, Julian Kippels wrote:

Hi,

I am using 389-ds Version 2.3.1 and have encountered the same 
error twice in three days now. There are some MOD operations and 
then I get a line like this in the errors-log:


[23/Aug/2023:13:27:17.971884067 +0200] - ERR - ldbm_back_seq - 
deadlock retry BAD 1601, err=0 Unexpected dbimpl error code


After this the server keeps running, systemctl status says 
everything is fine, but new incoming connections are failing with 
timeouts.


Any advice would be welcome.

Thanks in advance
Julian Kippels

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: 
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue




[389-users] Re: 389-ds freezes with deadlock

2023-09-08 Thread Thierry Bordaz

Hi Julian,

It looks like an update (Thread 62) is either eating CPU or is 
blocked while updating the changelog.
When it occurs, could you run 'top -H -p <pid>' to see if some threads are 
eating CPU?
Otherwise (no CPU consumption), you may take a pstack and dump the DB lock info 
(db_stat -N -C A -h /var/lib/dirsrv/db)


Did you run an admin task (import/export/index...) before it occurred?
What version are you running?

best regards
Thierry

On 9/8/23 09:28, Julian Kippels wrote:

Hi,

it happened again and now I ran the gdb-command like Mark suggested. 
The Stacktrace is attached. Again I got this error message:


[07/Sep/2023:15:22:43.410333038 +0200] - ERR - ldbm_back_seq - 
deadlock retry BAD 1601, err=0 Unexpected dbimpl error code


and the remote program that called also stopped working at that time.

Thanks
Julian Kippels

On 28.08.23 at 14:28, Thierry Bordaz wrote:

Hi Julian,

I agree with Mark's suggestion. If new connections are failing, a pstack 
plus the logged error message would be helpful.


Regarding the logged error: the LDAP server relies on a database that, 
under pressure from multiple threads, may end up in a db_lock deadlock. 
In such a situation the DB selects one deadlocking thread and returns a 
DB_DEADLOCK error to that thread while the other threads continue to 
proceed. This is a normal error that is caught by the server, which 
simply retries the DB access. If the same thread fails too many 
times, it stops retrying and returns a fatal error to the request.


In your case it reports code 1601, which is a transient deadlock with 
retry. So the impacted request just retried and likely succeeded.


best regards
thierry

On 8/24/23 14:46, Mark Reynolds wrote:

Hi Julian,

It would be helpful to get a pstack/stacktrace so we can see where 
DS is stuck:


https://www.port389.org/docs/389ds/FAQ/faq.html#sts=Debugging%C2%A0Hangs 



Thanks,
Mark

On 8/24/23 4:13 AM, Julian Kippels wrote:

Hi,

I am using 389-ds Version 2.3.1 and have encountered the same error 
twice in three days now. There are some MOD operations and then I 
get a line like this in the errors-log:


[23/Aug/2023:13:27:17.971884067 +0200] - ERR - ldbm_back_seq - 
deadlock retry BAD 1601, err=0 Unexpected dbimpl error code


After this the server keeps running, systemctl status says 
everything is fine, but new incoming connections are failing with 
timeouts.


Any advice would be welcome.

Thanks in advance
Julian Kippels


___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Crash with SEGV after compacting

2023-09-08 Thread Thierry Bordaz

Hi,

The crash is already fixed in 1.4.4 with 
https://github.com/389ds/389-ds-base/issues/4778
The fix was about the scheduling of compaction, but it revisited this part of 
the code and actually fixed this crash.


I fully agree with Mark's suggestion to move to 2.x, as this branch is not 
maintained except for a few fixes like this one.


best regards
thierry

On 7/12/23 08:31, Mathieu Baudier wrote:

Hello,

please find a fresh backtrace below.
Is it more usable?
I can also send it directly to you per email, if more practical.

GNU gdb (Debian 10.1-1.7) 10.1.90.20210103-git
Copyright (C) 2021 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
 .

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/ns-slapd...
Reading symbols from 
/usr/lib/debug/.build-id/0a/598ea0bea9a19f0b1b8a501f3a274de07ebe32.debug...

warning: Can't open file /dev/shm/t4xNpb (deleted) during file-backed mapping 
note processing
[New LWP 300787]
[New LWP 300801]
[New LWP 300790]
[New LWP 300785]
[New LWP 300793]
[New LWP 300791]
[New LWP 300796]
[New LWP 300792]
[New LWP 300802]
[New LWP 300803]
[New LWP 300794]
[New LWP 300795]
[New LWP 300811]
[New LWP 300798]
[New LWP 300805]
[New LWP 300809]
[New LWP 300800]
[New LWP 300807]
[New LWP 300806]
[New LWP 300788]
[New LWP 300804]
[New LWP 300808]
[New LWP 300799]
[New LWP 300786]
[New LWP 300810]
[New LWP 300797]
[New LWP 300789]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/sbin/ns-slapd -D /etc/dirsrv/slapd-argeo -i 
/run/dirsrv/slapd-argeo.pid'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x7f25f88f1c9c in bdb_db_compact_one_db (db=0x0, inst=0x7f25eec65ac4, 
inst@entry=0x55a07ecf9fc0)
 at ../ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c:2485
2485../ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c: No such file or 
directory.
[Current thread is 1 (Thread 0x7f25eec67700 (LWP 300787))]

Thread 27 (Thread 0x7f25edc65700 (LWP 300789)):
#0  0x7f25fbf5fe23 in __GI___select (nfds=nfds@entry=0, 
readfds=readfds@entry=0x0, writefds=writefds@entry=0x0, 
exceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7f25edc64bc0) at 
../sysdeps/unix/sysv/linux/select.c:41
 resultvar = 18446744073709551102
 sc_cancel_oldtype = 0
#1  0x7f25fc4aea60 in DS_Sleep (ticks=) at 
../ldap/servers/slapd/util.c:1045
 mSecs = 
 tm = {tv_sec = 0, tv_usec = 664565}
#2  0x7f25f88e66c4 in perfctrs_wait (milliseconds=milliseconds@entry=1000, 
priv=, db_env=) at 
../ldap/servers/slapd/back-ldbm/perfctrs.c:80
 interval = 
#3  0x7f25f88f23cc in perf_threadmain (param=0x55a07ecaf3b0) at 
../ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c:2969
 li = 0x55a07ecaf3b0
 priv = 
 pEnv = 0x55a07ec0
#4  0x7f25fc161941 in _pt_root (arg=0x55a07ee123d0) at ptthread.c:201
 rv = 
 thred = 0x55a07ee123d0
 detached = 1
 tid = 300789
#5  0x7f25fc100ea7 in start_thread (arg=) at 
pthread_create.c:477
 ret = 
 pd = 
 unwind_buf = {cancel_jmp_buf = {{jmp_buf = {139800879716096, 
3890383068704389134, 140731200599582, 140731200599583, 139800879713536, 
8396800, -3767629126545852402, -3767662504817477618}, mask_was_saved = 0}}, 
priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, 
canceltype = 0}}}
 not_first_call = 0
#6  0x7f25fbf69a2f in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 26 (Thread 0x7f25e57fa700 (LWP 300797)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55a07db1f94c 
) at ../sysdeps/nptl/futex-internal.h:186
 __ret = -512
 oldtype = 0
 err = 
 spin = 0
 buffer = {__routine = 0x7f25fc107540 <__condvar_cleanup_waiting>, 
__arg = 0x7f25e57f9a90, __canceltype = 2125198832, __prev = 0x0}
 cbuffer = {wseq = 35, cond = 0x55a07db1f920 , mutex = 
0x55a07db1f960 , private = 0}
 err = 
 g = 3850345072
 flags = 
 g1_start = 
 signals = 
 wseq = 35
 seq = 17
 private = 0
#1  __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55a07db1f960 
, cond=0x55a07db1f920 ) at pthread_cond_wait.c:508
 spin = 0
 buffer = {__routine = 0x7f25fc107540 <__condvar_cleanup_waiting>, 
__arg = 

[389-users] Re: 389-ds freezes with deadlock

2023-08-28 Thread Thierry Bordaz

Hi Julian,

I agree with Mark's suggestion. If new connections are failing, a pstack 
plus the logged error messages would be helpful.


Regarding the logged error: the LDAP server relies on a database that, under 
pressure from multiple threads, may end up in a db_lock deadlock. In such 
a situation the DB selects one deadlocking thread and returns a DB_DEADLOCK 
error to that thread while the other threads continue to proceed. This 
is a normal error that is caught by the server, which simply retries 
the DB access. If the same thread fails too many times, it stops retrying and 
returns a fatal error to the request.


In your case it reports code 1601, which is a transient deadlock with retry. 
So the impacted request just retried and likely succeeded.


best regards
thierry

On 8/24/23 14:46, Mark Reynolds wrote:

Hi Julian,

It would be helpful to get a pstack/stacktrace so we can see where DS 
is stuck:


https://www.port389.org/docs/389ds/FAQ/faq.html#sts=Debugging%C2%A0Hangs

Thanks,
Mark

On 8/24/23 4:13 AM, Julian Kippels wrote:

Hi,

I am using 389-ds Version 2.3.1 and have encountered the same error 
twice in three days now. There are some MOD operations and then I get 
a line like this in the errors-log:


[23/Aug/2023:13:27:17.971884067 +0200] - ERR - ldbm_back_seq - 
deadlock retry BAD 1601, err=0 Unexpected dbimpl error code


After this the server keeps running, systemctl status says everything 
is fine, but new incoming connections are failing with timeouts.


Any advice would be welcome.

Thanks in advance
Julian Kippels




___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: nsslapd-referral remove issues

2023-07-31 Thread Thierry Bordaz

Hi,


My understanding is that you are trying to remove a value (looking like a 
referral) from a replication configuration entry (under 
cn=config in dse.ldif). Could you cut/paste a sample of the entry you want 
to update and the value you want to get rid of?
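
As a starting point, a sketch of how to list the referral-related attributes 
per suffix (the bind DN is an assumption, adjust to your setup):

    ldapsearch -x -D "cn=Directory Manager" -W -b "cn=mapping tree,cn=config" \
        "(objectClass=*)" nsslapd-state nsslapd-referral

That shows, for each suffix, whether it is in 'backend' or 'referral' state and 
which referral values are set, which is usually the value people are trying to 
get rid of.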


If you are trying to remove a server from the topology, you may have a 
look at the doc [1]



[1]https://access.redhat.com/documentation/en-us/red_hat_directory_server/11/html/administration_guide/removing_a_directory_server_instance_from_the_replication_topology#removing_a_consumer_or_hub_from_the_replication_topology


regards
thierry

On 7/28/23 21:52, Ghiurea, Isabella wrote:


Hi List

we are running the following  389-DS version :

389-ds-base-libs-1.3.10.2-16.el7_9.x86_64.
I need to remove a referral entry from dse.ldif. There are 2 servers 
configured in master-to-master replication, with two slaves per master.
I tried removing the referrals from one of the masters, but after an 
ldap server restart the referral was added back, and the same happened for the 
entries in the replication agreement.
Do I need to disable the replication plugin with DS online to be 
able to remove the referrals and also the old entries from the replication 
agreement?


Thank  you
Isabella



389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Crash with SEGV after compacting

2023-07-12 Thread Thierry Bordaz


On 7/12/23 10:14, Mathieu Baudier wrote:

Hello,

Many thanks for the quick analysis!


The crash is already fixed in 1.4.4 with
https://github.com/389ds/389-ds-base/issues/4778
The fix was about the scheduling of compaction, but it revisited this part of
the code and actually fixed this crash.

I will raise the issue with Debian and see whether they would be willing to 
update the package in Debian 11 / bullseye.
(https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1029040 is indirectly 
related, which means that the OP and us are not the only ones affected)

My understanding is that updating 389-ds-base to v1.4.4.19 (the latest tag I 
could find on github, 
https://github.com/389ds/389-ds-base/tree/389-ds-base-1.4.4.19) would contain 
this fix, right ?


Yes it is fixed in 389-ds-base-1.4.4.16





I fully agree with Mark's suggestion to move to 2.x, as this branch is not
maintained except for a few fixes like this one.

Yes, definitely. Debian 12 / bookworm is providing v2.3.1 and has recently 
become the new Debian stable. But upgrading obviously impacts other packages 
and we are still on RHEL 8 (and therefore 389-ds v1.4.3.*) for the IPA servers. 
While we have already successfully tested Debian 12 + RHEL 9 with our 
application, it did raise some unrelated issues with the IPA client (which are 
now solved in Debian 12 and on their way to RHEL 8 and 9).

So, we still have to live with the current setup for a few months, and if we 
don't get this fixed in Debian 11, I am thinking of what kind of workaround we 
could put in place, and I would be grateful for a quick feedback from your side 
:

- Is it reasonable to configure a very long delay for the compacting?
- Or should we rather rather restart dirsrv periodically in order to reset the 
next delay?

I appreciate that neither is ideal, but this are small instances which can be 
down from time to time (typically at night) and a restart of the LDAP server is 
transparent for the application, as long as no one is trying to log in while it 
is down.



If you need a temporary workaround, I think promoting the server 
to be a supplier could work. The drawback is that it will store 
changes in the changelog, but if there are not a lot of changes it has no disk 
impact nor a significant response-time impact.




Thanks again for your help,

Mathieu

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Crash with SEGV after compacting

2023-07-12 Thread Thierry Bordaz

Hi,

The crash is already fixed in 1.4.4 with 
https://github.com/389ds/389-ds-base/issues/4778
The fix was about the scheduling of compaction, but it revisited this part of 
the code and actually fixed this crash.


I fully agree with Mark's suggestion to move to 2.x, as this branch is not 
maintained except for a few fixes like this one.


best regards
thierry

On 7/12/23 08:31, Mathieu Baudier wrote:

Hello,

please find a fresh backtrace below.
Is it more usable?
I can also send it directly to you per email, if more practical.

GNU gdb (Debian 10.1-1.7) 10.1.90.20210103-git
Copyright (C) 2021 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
 .

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/ns-slapd...
Reading symbols from 
/usr/lib/debug/.build-id/0a/598ea0bea9a19f0b1b8a501f3a274de07ebe32.debug...

warning: Can't open file /dev/shm/t4xNpb (deleted) during file-backed mapping 
note processing
[New LWP 300787]
[New LWP 300801]
[New LWP 300790]
[New LWP 300785]
[New LWP 300793]
[New LWP 300791]
[New LWP 300796]
[New LWP 300792]
[New LWP 300802]
[New LWP 300803]
[New LWP 300794]
[New LWP 300795]
[New LWP 300811]
[New LWP 300798]
[New LWP 300805]
[New LWP 300809]
[New LWP 300800]
[New LWP 300807]
[New LWP 300806]
[New LWP 300788]
[New LWP 300804]
[New LWP 300808]
[New LWP 300799]
[New LWP 300786]
[New LWP 300810]
[New LWP 300797]
[New LWP 300789]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/sbin/ns-slapd -D /etc/dirsrv/slapd-argeo -i 
/run/dirsrv/slapd-argeo.pid'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x7f25f88f1c9c in bdb_db_compact_one_db (db=0x0, inst=0x7f25eec65ac4, 
inst@entry=0x55a07ecf9fc0)
 at ../ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c:2485
2485../ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c: No such file or 
directory.
[Current thread is 1 (Thread 0x7f25eec67700 (LWP 300787))]

...


Thread 1 (Thread 0x7f25eec67700 (LWP 300787)):
#0  0x7f25f88f1c9c in bdb_db_compact_one_db (db=0x0, inst=0x7f25eec65ac4, 
inst@entry=0x55a07ecf9fc0) at 
../ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c:2485
 type = 21920
 rc = 0
 txn = {back_txn_txn = 0x7f25fc15bc48 }
 c_data = {compact_fillpercent = 0, compact_timeout = 0, compact_pages 
= 0, compact_empty_buckets = 0, compact_pages_free = 0, compact_pages_examine = 
0, compact_levels = 0, compact_deadlock = 0, compact_pages_truncated = 0, 
compact_truncate = 0}
 compact_flags = 
#1  0x7f25f88f44c4 in checkpoint_threadmain (param=0x55a07ecaf3b0) at 
../ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c:3825
 rc = 
 inst_obj = 0x55a07ecfafb0
 inst = 0x55a07ecf9fc0
 db = 0x0
 interval = 2500
 rval = 
 li = 0x55a07ecaf3b0
 debug_checkpointing = 0
 home_dir = 
 list = 0x0
 listp = 
 penv = 0x55a07ec0
 checkpoint_expire = {tv_sec = 3547533, tv_nsec = 657553560}
 compactdb_expire = {tv_sec = 6139350, tv_nsec = 997512267}
 compactdb_interval_update = 300
 checkpoint_interval_update = 
 compactdb_interval = 2592000
 checkpoint_interval = 60
 priv = 0x55a07ee01dc0
 pEnv = 0x55a07ec0
#2  0x7f25fc161941 in _pt_root (arg=0x55a07ee0d430) at ptthread.c:201
 rv = 
 thred = 0x55a07ee0d430
 detached = 1
 tid = 300787
#3  0x7f25fc100ea7 in start_thread (arg=) at 
pthread_create.c:477
 ret = 
 pd = 
 unwind_buf = {cancel_jmp_buf = {{jmp_buf = {139800896501504, 
3890383068704389134, 140731200599582, 140731200599583, 139800896498944, 
8396800, -3767622530549827570, -3767662504817477618}, mask_was_saved = 0}}, 
priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, 
canceltype = 0}}}
 not_first_call = 0
#4  0x7f25fbf69a2f in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:95

[389-users] Re: Crash with SEGV after compacting

2023-07-11 Thread Thierry Bordaz

Hello,

Unfortunately the original backtrace did not contain symbols, so I 
cannot say whether the bug was already fixed.


Thread 1 (Thread 0x7f7c42a4e700 (LWP 22323)):
#0 0x7f7c54258c9c in ??? () at 
/usr/lib/x86_64-linux-gnu/dirsrv/plugins/libback-ldbm.so
#1 0x7f7c5425b4c4 in ??? () at 
/usr/lib/x86_64-linux-gnu/dirsrv/plugins/libback-ldbm.so

#2 0x7f7c57ac2941 in ??? () at /lib/x86_64-linux-gnu/libnspr4.so
#3 0x7f7c57a61ea7 in start_thread (arg=) at 
pthread_create.c:477
#4 0x7f7c578caa2f in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:95


I recall some bugs related to the compaction interval/time-of-day were 
recently fixed, but I am not sure they are related to this crash. Could you 
install the debugsource and collect a new backtrace?


regards
thierry

On 7/11/23 14:18, Mathieu Baudier wrote:

Hello,

thank you for your quick answer!

The OP of this thread has already posted a backtrace:
https://lists.pagure.io/archives/list/389-users@lists.fedoraproject.org/message/JOV45YZSNTY7IN72TGHBVYRFLDFCQNWN/

We use the same version as the OP (1.4.4.11) :

$ sudo apt list 389-ds-base
389-ds-base/oldstable,now 1.4.4.11-2 amd64 [installed]

which is the one provided by Debian 11 / bullseye, and which has not 
changed for a while.


We don't use replication on these instances.

If this backtrace is not sufficient, I am happy to reproduce the steps 
on a dedicated environment.


Dependening on your analysis, we will probably have to notify Debian, 
but I am not sure whether they will want to patch it for Debian 11.
So, maybe I will have to look at Debian 12 / bookworm (the new 'Debian 
stable'), and see whether the issue still occurs.


Cheers,

Mathieu

On Tue, 2023-07-11 at 11:51 +0200, Thierry Bordaz wrote:

Hi,

What version are you running? Are you running a replicated topology, and
which role does the crashing server have (supplier, consumer, hub)?

Do you have a backtrace of the crash (with debugsource) ?

Unfortunately I doubt compaction can be disabled (it is part of the
checkpointing, which is mandatory). It can be delayed with the compaction
interval or time-of-day settings, but not suppressed.

best regards
thierry

On 7/11/23 09:51, Mathieu Baudier wrote:

Hello,

we have exactly the same problem (also on Debian 11). 389-ds crashes 
when compacting.


Is there a related bug ticket to track?

Is it possible / advisable to disable compacting? (our instances are 
critical but very small, with only credentials in them)

If yes, how can it be done?

Thanks in advance!

Mathieu


___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Crash with SEGV after compacting

2023-07-11 Thread Thierry Bordaz

Hi,

What version are you running? Are you running a replicated topology, and 
which role does the crashing server have (supplier, consumer, hub)?


Do you have a backtrace of the crash (with debugsource) ?

Unfortunately I doubt compaction can be disabled (it is part of the 
checkpointing, which is mandatory). It can be delayed with the compaction 
interval or time-of-day settings, but not suppressed.
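
If delaying is enough, a minimal sketch of raising the compaction interval 
(the DN is the usual 1.4.x ldbm config entry; verify it against your dse.ldif 
before applying, and the value is in seconds):

    ldapmodify -x -D "cn=Directory Manager" -W <<EOF
    dn: cn=config,cn=ldbm database,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-db-compactdb-interval
    nsslapd-db-compactdb-interval: 15552000
    EOF

This only postpones compaction (here to roughly 180 days), it does not remove it.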


best regards
thierry

On 7/11/23 09:51, Mathieu Baudier wrote:

Hello,

we have exactly the same problem (also on Debian 11). 389-ds crashes when 
compacting.

Is there a related bug ticket to track?

Is it possible / advisable to disable compacting? (our instances are critical 
but very small, with only credentials in them)
If yes, how can it be done?

Thanks in advance!

Mathieu

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: 389 Ldap Cleanallruv Replica Crash

2023-05-03 Thread Thierry Bordaz

Hi Juan,


Thanks for raising this issue. The crash can be reproduced and I opened 
https://github.com/389ds/389-ds-base/issues/5751


It is a side effect of a changelog (CL) refactoring done in the 2.x branch.
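
While waiting for a fixed build, the RUV of the suffix can be read directly to 
check which replica IDs are still listed on each server (a sketch using the 
suffix from the report; the bind DN is an assumption):

    ldapsearch -x -D "cn=Directory Manager" -W -b "ou=sample,dc=test,dc=dom" \
        "(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectClass=nsTombstone))" nsds50ruv

Running this before and after the cleanallruv task shows whether replica ID 20 
is really gone everywhere.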


best regards
thierry


On 5/2/23 21:00, Juan Quintanilla wrote:

Hi,

I recently installed 389-ds-base-libs-2.2.6-2.el8.x86_64 and 
389-ds-base-2.2.6-2.el8.x86_64 on an Alma Linux 8 server, but I'm 
encountering an issue with removing offline replicas from our existing 
389 LDAP deployment.


When the command below is executed on one of the suppliers:

dsconf INSTANCE_NAME repl-tasks cleanallruv --suffix 
"ou=sample,dc=test,dc=dom" --replica-id 20 --force-cleaning


The entry is removed from the ldap supplier, and when the change is 
sent to the secondary supplier it is also removed with no problem.  
The issue is when the change is sent to the consumer, the slapd 
process will instantly crash.  When the consumer instance is brought 
back up the entry that needed to be removed is gone.


Has anyone encountered a similar issue with the consumers crashing 
during a cleanallruv request or cleanruv?


I also tried running a cleanruv task on each server, suppliers have no 
issue. When the command is run on the readonly consumers the slapd 
process crashes.


ldapmodify -x -D "cn=manager" -W 

[389-users] Re: 389 DS memory growth

2023-04-27 Thread Thierry Bordaz

Hi Alexander,

The memory footprint is larger under valgrind. You may reduce the cache tuning 
to let your instance fit on the server: disable auto-tuning 
(nsslapd-cache-autosize: 0) and reduce nsslapd-cachememsize and 
nsslapd-dbcachesize.
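
A minimal sketch of those changes (the backend name userRoot and the sizes are 
assumptions, pick values that fit your data and RAM; a restart is typically 
needed for the dbcache change):

    ldapmodify -x -D "cn=Directory Manager" -W <<EOF
    dn: cn=config,cn=ldbm database,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-cache-autosize
    nsslapd-cache-autosize: 0
    -
    replace: nsslapd-dbcachesize
    nsslapd-dbcachesize: 268435456

    dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-cachememsize
    nsslapd-cachememsize: 536870912
    EOF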


regards
Thierry

On 4/27/23 02:05, Nazarenko, Alexander wrote:


Hi Thierry,

We have followed the directions to investigate a memory leak which 
occurs as we reach many hundreds of thousands of entries, and got the 
log file, which shows some warnings:


==26735== Warning: set address range perms: large range [0x59eaf000, 
0xb3067000) (defined)


==26735== Thread 18:

==26735== Conditional jump or move depends on uninitialized value(s)

…

==26735== Thread 30:

==26735== Syscall param pwrite64(buf) points to uninitialised byte(s)

==26735== at 0x7998023: ??? (in /usr/lib64/libpthread-2.17.so)

and similar strings. Memory usage was climbing to 90% on 16 GB servers and 
never comes back down.


Thoughts, suggestions are much appreciated.

- Alex

*From: *Thierry Bordaz 
*Date: *Tuesday, April 18, 2023 at 12:37 PM
*To: *"Nazarenko, Alexander" , 
"General discussion list for the 389 Directory server project." 
<389-users@lists.fedoraproject.org>

*Subject: *Re: [389-users] 389 DS memory growth

Thanks for the update.

I failed to reproduce any significant growth with 
groups(100)/members(1000) provisioning. The same with searches on 
person returning 1000 person entries (bound as DM). We will wait for 
your profiling info.


regards
Thierry

On 4/18/23 18:12, Nazarenko, Alexander wrote:

This is understood, thank you. It is not a big concern for us, as
our servers are at least 16Gb.

We are not using pbkdf2 either.

It is the heap growth above 20Gb (and up) that is the concern,
due to queries like (objectclass=person) hitting the server.

At some point in the near future we plan to profile a typical server
for memory usage, and will keep you posted.

    *- Alwes*

*From: *Thierry Bordaz 
<mailto:tbor...@redhat.com>
*Date: *Tuesday, April 18, 2023 at 11:47 AM
*To: *"General discussion list for the 389 Directory server
project." <389-users@lists.fedoraproject.org>
<mailto:389-users@lists.fedoraproject.org>, "Nazarenko, Alexander"

<mailto:alexander_nazare...@harvard.edu>
*Subject: *Re: [389-users] 389 DS memory growth

Hi,

Note that the initial memory footprint of a 1.3.11 instance is
larger than that of a 1.3.10 one.

On RHEL 7.9 2Gb VM, an instance 1.3.11 is 1Gb while 1.3.10 is 0.5Gb.
Instances have the same DS tuning.

The difference comes from extra chunks of anonymous memory (heap)
that are possibly related to the new rust plugin handling
pbkdf2_sha512.

7ffb0812e000 64328   0   0 -   [ anon ]
7ffb0c00 1204    1060    1060 rw---   [ anon ]
7ffb0c12d000 64332   0   0 -   [ anon ]
7ffb1000 1028    1028    1028 rw---   [ anon ]
7ffb10101000 64508   0   0 -   [ anon ]
7ffb1400 1020    1020    1020 rw---   [ anon ]
7ffb140ff000 64516   0   0 -   [ anon ]
7ffb1800 1024    1024    1024 rw---   [ anon ]
7ffb1810 64512   0   0 -   [ anon ]
7ffb1c00 1044    1044    1044 rw---   [ anon ]
7ffb1c105000 64492   0   0 -   [ anon ]
7ffb2000 540 540 540 rw---   [ anon ]
7ffb20087000 64996   0   0 -   [ anon ]
7ffb271ce000 4   0   0 -   [ anon ]

This is just the initial memory footprint and does not explain
regular growth.
Thanks to progier who raised that point.

regards
thierry

On 4/17/23 03:07, Nazarenko, Alexander wrote:

Hello colleagues,

On March 22nd we updated the 389-ds-base.x86_64 and
389-ds-base-libs.x86_64 packages on our eight RHEL 7.9
production servers from version 1.3.10.2-17.el7_9 to version
1.3.11.1-1.el7_9.  We also updated the kernel from kernel
3.10.0-1160.80.1.el7.x86_64 to
kernel-3.10.0-1160.88.1.el7.x86_64 during the same update.

Approximately 12 days later, on April 3rd, all the hosts
started exhibiting memory growth issues whereby the “slapd”
process was using over 90% of the available system memory of
32GB, which was NOT happening for a couple of years prior to
applying any of the available package updates on the systems.

Two of the eight hosts act as Primaries (formerly referred to
as masters), while 6 of the hosts act as read-only replicas. 
Three of the read-only replicas are used by our authorization
system while the other three read-only replicas are used by
customer-based applications.

Currently we use system controls to restrict the memory usage.

My question is whether this is something tha

[389-users] Re: A more profound replication monitoring of 389-ds instance

2023-04-21 Thread Thierry Bordaz

Hi,

I agree that it is a complex task to master such a FreeIPA deployment. 
FreeIPA enables many components, 389ds is just one of them, and several 
of them could contribute when a problem occurs. My main concern here is 
that you express a need to monitor (how well the FreeIPA deployment works) 
rather than pointing at a clear misbehavior of the topology that we could 
focus on.


Replication is an important functionality of 389ds/FreeIPA and 
monitoring replication is a common demand. One typical request is to 
monitor replication lag (how much time it takes for the topology to 
converge, i.e. for an update to be replicated on all replicas). There are 
several ways to monitor that, but I think an easy way is to rely on dirsrv 
access logs. Each replicated update is identified uniquely by its 'csn' and 
you can find values like 'csn=57eb7dbc0060'. Grepping this 
value across all access logs on all instances can give you an 
indication of the lag. The lag is typically in the range of seconds or 1-2 
minutes; if it spikes to many minutes or never hits some replica, then you 
could start investigating why it is slow.
A difficulty with that procedure is that some updates are not replicated 
(fractional replication).
Investigations into replication are quite complex and difficult to explain 
in general. This forum is a good place to get answers to specific questions.
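
To make the csn check concrete, a minimal sketch assuming the default log 
layout (run it on every replica and compare the timestamps of the matching 
lines):

    grep 'csn=57eb7dbc0060' /var/log/dirsrv/slapd-*/access*

The gap between the first and the last timestamp where the csn shows up is a 
rough measure of the replication lag for that update.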


best regards
Thierry

On 4/20/23 16:10, dweller dweller wrote:

Yes, I'll try to explain my needs more clearly. As it happens a lot I recently 
inherited a FreeIPA installation and am now responsible for managing the 
service. As someone who was not previously familiar with FreeIPA, I am in the 
process of building my expertise in managing it.
When I started, the monitoring setup consisted of node_exporter and 
process_exporter for the host and 389ds_exporter 
(https://github.com/terrycain/389ds_exporter) for the ldap data. However, as 
the FreeIPA installation grew in size, we started encountering issues and 
realized that we lacked critical information to pinpoint the root causes of 
these problems. To address this, I have taken steps to improve the monitoring 
setup. I have started monitoring FreeIPA's bind service using a 
separate_exporter and exporting DNS queries to opensearch. Additionally, I have 
rewritten the 389ds_exporter to include cn=monitor metrics to provide more 
visibility into the 389 Directory Server.

I recently realized that I could also include 'cn=ldbm database' metrics, which 
are low-level but could be useful in troubleshooting the issues we are facing. 
The problems we are encountering are related to disk IO, and having these 
metrics could provide valuable insights into the following:

1) Excessive paging out and increased swap usage without spikes in load. For 
example, after restarting a replica the swap usage increases to 30% (of 3GB 
swap space) over 1-2 days while there are at least 4GB of available RAM 
present on the host. And the main swap consumer is the ns-slapd service. For now 
I only tried setting the swappiness parameter to zero, which did not help, so 
I guess there are some other factors involved.

2) Spikes in IO latency observed during modifying and adding operations, which were not present 
when the cluster was smaller (up to 10 replicas). I need to determine whether the issue lies with 
service tuning or with the cloud provider and its SAN, as we recently migrated to SSD disks without 
improvement. As I said about "replication lag", those problems just started appearing more 
as new replicas were added, but for now we mostly observe them through outages of services that rely on 
ldap. The "waves" refers to the way the problems appear, as different clients' VDCs are having 
problems one after the other, which looks like replication propagation.

3) Master-master replication just seems to me as a big "black cloud", which I 
have no control or knowledge of. When you have a couple of hosts it is maybe fine to rely 
on the documented way of looking up the replicationStatus attribute, but when you have a couple of 
dozen I guess things could get not so straightforward, at least intuition 
suggests so. When I talk about replication observability, what I mean and what I'd like to 
see is the following:

Graph representation...

- ...of time it took to replay a change (or I guess time of full replication 
session)
- ...of the number of simultaneous connections that suppliers are trying to establish 
with a consumer
- ...of time spent waiting to acquire replica access

I just listed a few off the top of my head. I don't know for sure (and my first 
post was about that) whether it is really worth trying to get those kinds of metrics, 
or whether I just don't know what I'm talking about and it would be a waste of time and 
hard to implement. I mentioned bpf because I see it as the only option for getting this 
data; the other option is to parse logs in DEBUG mode, which is not an 
option.

With replication metrics besides the ability to see its impact on the 

[389-users] Re: A more profound replication monitoring of 389-ds instance

2023-04-20 Thread Thierry Bordaz

Hi,

I read your first post. I found it very interesting but was not able to 
get a clear understanding of your needs. This second post would also 
need additional details. The cn=ldbm database monitoring will mainly 
return stats about DB activity, and those are IMHO fairly raw data.
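
For completeness, those raw DB stats can be read with a simple search (the 
bind DN is an assumption; there is also one such monitor entry per backend):

    ldapsearch -x -D "cn=Directory Manager" -W \
        -b "cn=monitor,cn=ldbm database,cn=plugins,cn=config" -s base "(objectClass=*)"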


In the posts you also mentioned replication lag and "waves" of 
replication sessions. More specifically, what concerns are you trying to 
fix? Do you want to monitor for reporting, or to anticipate with 
corrective actions?


best regards
thierry

On 4/20/23 05:11, dweller dweller wrote:

I wrote a really long post, but now looking at it again and searching for 
monitoring problems in the mailing list, it seems kind of vague and unrelated.
I found that entries under cn=ldbm database,cn=plugins,cn=config expose 
database metrics (I only knew about the metrics under cn=monitor). So I will try to 
export them to the monitoring system and see what I can find during high-load 
tasks.

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: 389 DS memory growth

2023-04-18 Thread Thierry Bordaz

Thanks for the update.

I failed to reproduce any significant growth with 
groups(100)/members(1000) provisioning. The same with searches on person 
returning 1000 person entries (bound as DM). We will wait for your 
profiling info.


regards
Thierry

On 4/18/23 18:12, Nazarenko, Alexander wrote:


This is understood, thank you. It is not a big concern for us, as our 
servers are at least 16Gb.


We are not using pbkdf2 either.

It is the heap growth above 20Gb (and up) that is the concern, due 
to queries like (objectclass=person) hitting the server.


At some point in the near future we plan to profile a typical server for 
memory usage, and will keep you posted.


*- Alwes*

*From: *Thierry Bordaz 
*Date: *Tuesday, April 18, 2023 at 11:47 AM
*To: *"General discussion list for the 389 Directory server project." 
<389-users@lists.fedoraproject.org>, "Nazarenko, Alexander" 


*Subject: *Re: [389-users] 389 DS memory growth

Hi,

Note that the initial memory footprint of a 1.3.11 instance is larger 
than that of a 1.3.10 one.


On RHEL 7.9 2Gb VM, an instance 1.3.11 is 1Gb while 1.3.10 is 0.5Gb.
Instances have the same DS tuning.

The difference comes from extra chunks of anonymous memory (heap) that 
are possibly related to the new rust plugin handling pbkdf2_sha512.


7ffb0812e000 64328   0   0 -   [ anon ]
7ffb0c00 1204    1060    1060 rw---   [ anon ]
7ffb0c12d000 64332   0   0 -   [ anon ]
7ffb1000 1028    1028    1028 rw---   [ anon ]
7ffb10101000 64508   0   0 -   [ anon ]
7ffb1400 1020    1020    1020 rw---   [ anon ]
7ffb140ff000 64516   0   0 -   [ anon ]
7ffb1800 1024    1024    1024 rw---   [ anon ]
7ffb1810 64512   0   0 -   [ anon ]
7ffb1c00 1044    1044    1044 rw---   [ anon ]
7ffb1c105000 64492   0   0 -   [ anon ]
7ffb2000 540 540 540 rw---   [ anon ]
7ffb20087000 64996   0   0 -   [ anon ]
7ffb271ce000 4   0   0 -   [ anon ]

This is just the initial memory footprint and does not explain regular 
growth.

Thanks to progier who raised that point.

regards
thierry

On 4/17/23 03:07, Nazarenko, Alexander wrote:

Hello colleagues,

On March 22nd we updated the 389-ds-base.x86_64 and
389-ds-base-libs.x86_64 packages on our eight RHEL 7.9 production
servers from version 1.3.10.2-17.el7_9 to version
1.3.11.1-1.el7_9.  We also updated the kernel from kernel
3.10.0-1160.80.1.el7.x86_64 to kernel-3.10.0-1160.88.1.el7.x86_64
during the same update.

Approximately 12 days later, on April 3rd, all the hosts started
exhibiting memory growth issues whereby the “slapd” process was
using over 90% of the available system memory of 32GB, which was
NOT happening for a couple of years prior to applying any of the
available package updates on the systems.

Two of the eight hosts act as Primaries (formerly referred to as
masters), while 6 of the hosts act as read-only replicas.  Three
of the read-only replicas are used by our authorization system
while the other three read-only replicas are used by
customer-based applications.

Currently we use system controls to restrict the memory usage.

My question is whether this is something that other users also
experience, and what is the recommended way to stabilize the DS
servers in this type of situation?

Thanks,

- Alex




[389-users] Re: 389 DS memory growth

2023-04-18 Thread Thierry Bordaz

Hi,

Note that the initial memory footprint of a 1.3.11 instance is larger 
than that of a 1.3.10 one.


On RHEL 7.9 2Gb VM, an instance 1.3.11 is 1Gb while 1.3.10 is 0.5Gb.
Instances have the same DS tuning.

The difference comes from extra chunks of anonymous memory (heap) that 
are possibly related to the new rust plugin handling pbkdf2_sha512.


7ffb0812e000   64328   0   0 -   [ anon ]
7ffb0c00    1204    1060    1060 rw---   [ anon ]
7ffb0c12d000   64332   0   0 -   [ anon ]
7ffb1000    1028    1028    1028 rw---   [ anon ]
7ffb10101000   64508   0   0 -   [ anon ]
7ffb1400    1020    1020    1020 rw---   [ anon ]
7ffb140ff000   64516   0   0 -   [ anon ]
7ffb1800    1024    1024    1024 rw---   [ anon ]
7ffb1810   64512   0   0 -   [ anon ]
7ffb1c00    1044    1044    1044 rw---   [ anon ]
7ffb1c105000   64492   0   0 -   [ anon ]
7ffb2000 540 540 540 rw---   [ anon ]
7ffb20087000   64996   0   0 -   [ anon ]
7ffb271ce000   4   0   0 -   [ anon ]

This is just the initial memory footprint and does not explain regular 
growth.

Thanks to progier who raised that point.
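
For reference, a listing like the one above can be reproduced with pmap from 
procps (pick the right PID if several instances run):

    pmap -x $(pidof ns-slapd) | grep -i anon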

regards
thierry

On 4/17/23 03:07, Nazarenko, Alexander wrote:


Hello colleagues,

On March 22nd we updated the 389-ds-base.x86_64 and 
389-ds-base-libs.x86_64 packages on our eight RHEL 7.9 production 
servers from version 1.3.10.2-17.el7_9 to version 1.3.11.1-1.el7_9.  
We also updated the kernel from kernel 3.10.0-1160.80.1.el7.x86_64 to 
kernel-3.10.0-1160.88.1.el7.x86_64 during the same update.


Approximately 12 days later, on April 3rd, all the hosts started 
exhibiting memory growth issues whereby the “slapd” process was using 
over 90% of the available system memory of 32GB, which was NOT 
happening for a couple of years prior to applying any of the available 
package updates on the systems.


Two of the eight hosts act as Primaries (formerly referred to as 
masters), while 6 of the hosts act as read-only replicas. Three of the 
read-only replicas are used by our authorization system while the 
other three read-only replicas are used by customer-based applications.


Currently we use system controls to restrict the memory usage.

My question is whether this is something that other users also 
experience, and what is the recommended way to stabilize the DS 
servers in this type of situation?


Thanks,

- Alex


389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: 389 DS memory growth

2023-04-17 Thread Thierry Bordaz

Hi,

I did some rapid tests around password updates and memory consumption 
was stable.

Did you identify what kind of operation triggered the growth?
You may use [1] to set up the instance with valgrind.

[1] 
https://www.port389.org/docs/389ds/FAQ/faq.html#sts=Debugging%20Memory%20Growth/Invalid%20Access%20with%C2%A0Valgrind
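
A rough sketch of such a run, assuming the FAQ procedure and a standard layout 
(stop the instance first, and expect it to be much slower under valgrind):

    systemctl stop dirsrv@INSTANCE
    valgrind --leak-check=full --num-callers=40 \
        --log-file=/var/tmp/slapd-valgrind.%p.log \
        /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-INSTANCE -d 0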


regards
thierry

On 4/17/23 09:35, Thierry Bordaz wrote:


Hi,

Thanks for raising this issue. Actually the version is an upgrade of 
389 7.9.18 to 7.9.21. It contains only 3 bug fixes


 - 5497: boolean attribute should be case insensitive
 - 5440: memberof can be slow when multiple membership attribute are 
defined

 - 5565: support of PBKDF2-SHA512 in 7.9

The usual option would be to use valgrind to debug the leak. Because 
of the limited list of bugs we can also try to eliminate candidates. I 
think the first one looks safe. For 5440, do you use memberof, and with 
how many membership attributes? For 5565, what is your default 
password storage scheme? If it is PBKDF2-SHA512, could you set it to 
PBKDF2-SHA256 and monitor memory consumption?


[1] https://github.com/389ds/389-ds-base/issues/5497
[2] https://github.com/389ds/389-ds-base/issues/5440
[3] https://github.com/389ds/389-ds-base/issues/5565


best regards
thierry

On 4/17/23 05:38, Casey Feskens wrote:


We’ve been experiencing similar memory growth. I’ve had to quadruple 
RAM on our ldap hosts, but things seem stable there. Still unsure 
what the cause is. Glad to hear at least that someone else is seeing 
the same issue, so I can perhaps rule out an environmental change.



On Sun, Apr 16, 2023 at 6:07 PM Nazarenko, Alexander 
 wrote:


Hello colleagues,

On March 22nd we updated the 389-ds-base.x86_64 and
389-ds-base-libs.x86_64 packages on our eight RHEL 7.9 production
servers from version 1.3.10.2-17.el7_9 to version
1.3.11.1-1.el7_9.  We also updated the kernel from kernel
3.10.0-1160.80.1.el7.x86_64 to kernel-3.10.0-1160.88.1.el7.x86_64
during the same update.

Approximately 12 days later, on April 3rd, all the hosts started
exhibiting memory growth issues whereby the “slapd” process was
using over 90% of the available system memory of 32GB, which was
NOT happening for a couple of years prior to applying any of the
available package updates on the systems.

Two of the eight hosts act as Primaries (formerly referred to as
masters), while 6 of the hosts act as read-only replicas.  Three
of the read-only replicas are used by our authorization system
while the other three read-only replicas are used by
customer-based applications.

Currently we use system controls to restrict the memory usage.

My question is whether this is something that other users also
experience, and what is the recommended way to stabilize the DS
servers in this type of situation?

Thanks,

- Alex

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives:

https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it:
https://pagure.io/fedora-infrastructure/new_issue

--
-
Casey Feskens 
Director of Infrastructure Services
Willamette Integrated Technology Services
Willamette University, Salem, OR
Phone:  (503) 370-6950
-

389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: 389 DS memory growth

2023-04-17 Thread Thierry Bordaz

Hi,

Thanks for raising this issue. Actually the version is an upgrade of 389 
7.9.18 to 7.9.21. It contains only 3 bug fixes


 - 5497: boolean attribute should be case insensitive
 - 5440: memberof can be slow when multiple membership attribute are 
defined

 - 5565: support of PBKDF2-SHA512 in 7.9

The usual option would be to use valgrind to debug the leak. Because of 
the limited list of bugs we can also try to eliminate candidates. I think 
the first one looks safe. For 5440, do you use memberOf, and with how 
many membership attributes? For 5565, what is your default password 
storage scheme? If it is PBKDF2-SHA512, could you set it to 
PBKDF2-SHA256 and monitor memory consumption?
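
For reference, a minimal sketch of checking and changing that default with 
dsconf (the ldap://localhost URL follows the dsconf usage elsewhere on this 
list; the 'pwpolicy get' output format, and whether your version names the 
scheme PBKDF2-SHA256 or PBKDF2_SHA256, are assumptions to verify):

# show the current global password storage scheme
dsconf -D "cn=Directory Manager" ldap://localhost pwpolicy get | grep -i storagescheme
# switch the default to PBKDF2-SHA256 while monitoring slapd memory
dsconf -D "cn=Directory Manager" ldap://localhost pwpolicy set --pwdscheme=PBKDF2-SHA256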


[1] https://github.com/389ds/389-ds-base/issues/5497
[2] https://github.com/389ds/389-ds-base/issues/5440
[3] https://github.com/389ds/389-ds-base/issues/5565


best regards
thierry

On 4/17/23 05:38, Casey Feskens wrote:


We’ve been experiencing similar memory growth. I’ve had to quadruple 
RAM on our ldap hosts, but things seem stable there. Still unsure what 
the cause is. Glad to hear at least that someone else is seeing the 
same issue, so I can perhaps rule out an environmental change.



On Sun, Apr 16, 2023 at 6:07 PM Nazarenko, Alexander 
 wrote:


Hello colleagues,

On March 22nd we updated the 389-ds-base.x86_64 and
389-ds-base-libs.x86_64 packages on our eight RHEL 7.9 production
servers from version 1.3.10.2-17.el7_9 to version
1.3.11.1-1.el7_9.  We also updated the kernel from kernel
3.10.0-1160.80.1.el7.x86_64 to kernel-3.10.0-1160.88.1.el7.x86_64
during the same update.

Approximately 12 days later, on April 3rd, all the hosts started
exhibiting memory growth issues whereby the “slapd” process was
using over 90% of the available system memory of 32GB, which was
NOT happening for a couple of years prior to applying any of the
available package updates on the systems.

Two of the eight hosts act as Primaries (formerly referred to as
masters), while 6 of the hosts act as read-only replicas.  Three
of the read-only replicas are used by our authorization system
while the other three read-only replicas are used by
customer-based applications.

Currently we use system controls to restrict the memory usage.

My question is whether this is something that other users also
experience, and what is the recommended way to stabilize the DS
servers in this type of situation?

Thanks,

- Alex

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives:

https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it:
https://pagure.io/fedora-infrastructure/new_issue

--
-
Casey Feskens 
Director of Infrastructure Services
Willamette Integrated Technology Services
Willamette University, Salem, OR
Phone:  (503) 370-6950
-

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: 2.x query performance problem

2023-03-27 Thread Thierry Bordaz

Hi Claas,

Rereading that thread I have a doubt regarding cache priming. The search 
returns ~500 groups. The first lookup of those groups is significantly 
longer because of entry cache priming.
Could you confirm that if you run the same search twice (on 1.4 and 2.x), 
the second search on 1.4 is much faster than the second search on 2.x ?
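
Something like the following, run twice in a row on each version, should show 
it (a rough sketch; the LDAPI socket path is taken from the 2.x access log 
quoted below, and EXTERNAL autobind as cn=root is assumed):

# run this twice and compare the etime= of the two SRCH operations in the access log
ldapsearch -H ldapi://%2Frun%2Fslapd-389ds.socket -Y EXTERNAL \
    -b "dc=example,dc=com" \
    "(uniqueMember=cn=testuser1,ou=People,dc=example,dc=com)" distinguishedName > /dev/null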


best regards
thierry

On 3/16/23 09:38, Claas Vieler wrote:

Hello William
I can't see any difference except the duration
best regards
Claas
389-Directory/2.3.2 B2023.073.0958
[16/Mar/2023:08:24:51.321404978 +0100] conn=51 fd=66 slot=66 
connection from local to /run/slapd-389ds.socket

[16/Mar/2023:08:24:51.323985845 +0100] conn=51 AUTOBIND dn="cn=root"
[16/Mar/2023:08:24:51.325995690 +0100] conn=51 op=0 BIND dn="cn=root" 
method=sasl version=3 mech=EXTERNAL
[16/Mar/2023:08:24:51.328098136 +0100] conn=51 op=0 RESULT err=0 
tag=97 nentries=0 wtime=0.82030 optime=0.004197632 
etime=0.004276581 dn="cn=root"
[16/Mar/2023:08:24:51.328272655 +0100] conn=51 op=1 SRCH 
base="dc=example,dc=com" scope=2 
filter="(uniqueMember=cn=testuser1,ou=People,dc=example,dc=com)" 
attrs="distinguishedName"
[16/Mar/2023:08:24:52.285988416 +0100] conn=51 op=1 RESULT err=0 
tag=101 nentries=532 wtime=0.77055 optime=0.957714945 
etime=0.957784949

[16/Mar/2023:08:24:52.286275743 +0100] conn=51 op=2 UNBIND
[16/Mar/2023:08:24:52.291936625 +0100] conn=51 op=2 fd=66 Disconnect - 
Cleanly Closed Connection - U1

389-Directory/1.4.4.19 B2022.313.1200
[16/Mar/2023:09:10:20.353075132 +0100] conn=101 fd=64 slot=64 
connection from local to /var/lib/dirsrv/slapd-389ds/slapd-389ds.socket

[16/Mar/2023:09:10:20.355714488 +0100] conn=101 AUTOBIND dn="cn=root"
[16/Mar/2023:09:10:20.357681511 +0100] conn=101 op=0 BIND dn="cn=root" 
method=sasl version=3 mech=EXTERNAL
[16/Mar/2023:09:10:20.359700165 +0100] conn=101 op=0 RESULT err=0 
tag=97 nentries=0 wtime=0.36305 optime=0.004064382 
etime=0.004098191 dn="cn=root"
[16/Mar/2023:09:10:20.359896870 +0100] conn=101 op=1 SRCH 
base="dc=example,dc=com" scope=2 
filter="(uniqueMember=cn=testuser1,ou=People,dc=example,dc=com)" 
attrs="distinguishedName"
[16/Mar/2023:09:10:20.367652447 +0100] conn=101 op=1 RESULT err=0 
tag=101 nentries=532 wtime=0.77477 optime=0.007755733 
etime=0.007830994

[16/Mar/2023:09:10:20.369055287 +0100] conn=101 op=2 UNBIND
[16/Mar/2023:09:10:20.371940374 +0100] conn=101 op=2 fd=64 closed 
error - U1

*Sent:* Wednesday, 15 March 2023 at 03:41
*From:* "William Brown" 
*To:* "389-users@lists.fedoraproject.org" 
<389-users@lists.fedoraproject.org>

*Subject:* [389-users] Re: 2.x query performance problem

> got newest version from https://github.com/389ds/389-ds-base dc565fd 
(389-Directory/2.3.2 B2023.073.0958 )

> I can confirm, manageDSAit makes no difference any more in query time,
> got etimes with 0,9 sec after import and reindexing (with and 
without option)
> but a little difference from 1.4.x is still present :) ( 0.0x sec vs 
0.9 sec)


Can we see the access log between the 1.4.x and 2.x version? There 
still seems to be a difference here which is curious :(



--
Sincerely,

William Brown

Senior Software Engineer,
Identity and Access Management
SUSE Labs, Australia
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: 2.x query performance problem

2023-03-23 Thread Thierry Bordaz

Hi Claas,

I was wrong stating that I reproduced a similar issue locally. I was not 
using 2.0.17 and was reproducing an issue related to 
nsslapd-idlistscanlimit, which is much larger in recent 2.x versions.


I tried to mimic "Our environment has about 100k entries, about 15k 
users and about 10k groups. Also big groups with thousand of users, also 
users with thousand of group membership. So I would call it a small 
instance".


I made a given user a uniqueMember of 1000 groups, each group having 
1000 users. The search then retrieves the 1000 group DNs.
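
Roughly that shape of test data can be generated like this (a sketch; the DNs, 
the objectClass and the LDAPI socket are assumptions, and only the shared 
member is shown for each group):

# build 1000 groups that all contain the same test user, then load them
for i in $(seq 1 1000); do
  printf 'dn: cn=group%04d,ou=groups,dc=example,dc=com\nobjectClass: top\nobjectClass: groupOfUniqueNames\ncn: group%04d\nuniqueMember: uid=group_entry1-0001,ou=people,dc=example,dc=com\n\n' "$i" "$i"
done > /tmp/groups.ldif
ldapadd -H ldapi://%2Frun%2Fslapd-389ds.socket -Y EXTERNAL -f /tmp/groups.ldif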


Could you share your dse.ldif (send it to me directly)? Also, could you 
confirm that you see the same perf hit with a regular connection bound as 
"cn=directory manager" ?


best regards
thierry


On 3/14/23 14:25, Claas Vieler wrote:

Hello Thierry,
got newest version from https://github.com/389ds/389-ds-base dc565fd 
<https://github.com/389ds/389-ds-base/commit/dc565fdacbde6e1fd333213d707aa2c5bca9cadf>(389-Directory/2.3.2 
B2023.073.0958 )

I can confirm, manageDSAit makes no difference any more in query time,
got etimes with 0,9 sec after import and reindexing (with and without 
option)
but a little difference from 1.4.x is still present :)  ( 0.0x sec vs 
0.9 sec)

thanks and best regards
Claas
*Sent:* Monday, 13 March 2023 at 17:55
*From:* "Thierry Bordaz" 
*To:* 389-users@lists.fedoraproject.org
*Subject:* [389-users] Re: 2.x query performance problem

Hi Claas,

First, thank you sooo much for your tests. This is really helpful.

So my understanding is that this same req was

  * [10, 30]ms in 1.4
  * [900, 1700]ms in 2.x
  o A possibility is that the filter evaluation (against the 532
returned entry) is the responsible of the 1700ms (without
manageDSAit

In short it looks like there is a significant (>30 times slower) 
regression in RHDS12 vs RHDS11 with that testcase. In RHDS12, the 
handling of referral adds a 2 times slower but it is possibly fixed 
with https://github.com/389ds/389-ds-base/issues/5598.


best regards
thierry

On 3/13/23 17:18, Claas Vieler wrote:

Hello William,
sorry, your mail was stuck in my spam filter, so I doesnt see it
here are the logs with and without option manageDSAit (as Thierry
mentioned)
without manageDSAit:
[13/Mar/2023:16:16:06.583644293 +0100] conn=32 fd=64 slot=64
connection from local to
/var/lib/dirsrv/slapd-389ds/slapd-389ds.socket
[13/Mar/2023:16:16:06.586619267 +0100] conn=32 AUTOBIND dn="cn=root"
[13/Mar/2023:16:16:06.589037720 +0100] conn=32 op=0 BIND
dn="cn=root" method=sasl version=3 mech=EXTERNAL
[13/Mar/2023:16:16:06.591155242 +0100] conn=32 op=0 RESULT err=0
tag=97 nentries=0 wtime=0.78559 optime=0.004658221
etime=0.004734544 dn="cn=root"
[13/Mar/2023:16:16:06.591326840 +0100] conn=32 op=1 SRCH
base="dc=example,dc=com" scope=2
filter="(uniqueMember=cn=testuser,ou=People,dc=example,dc=com)"
attrs="distinguishedName"
[13/Mar/2023:16:16:08.321020181 +0100] conn=32 op=1 RESULT err=0
tag=101 nentries=532 wtime=0.000114773 optime=1.729694222
etime=1.729803880
[13/Mar/2023:16:16:08.321992532 +0100] conn=32 op=2 UNBIND
[13/Mar/2023:16:16:08.327041073 +0100] conn=32 op=2 fd=64 closed
error - U1
with manageDSAit:
[13/Mar/2023:16:16:22.324132867 +0100] conn=33 fd=64 slot=64
connection from local to
/var/lib/dirsrv/slapd-389ds/slapd-389ds.socket
[13/Mar/2023:16:16:22.326616612 +0100] conn=33 AUTOBIND dn="cn=root"
[13/Mar/2023:16:16:22.328594648 +0100] conn=33 op=0 BIND
dn="cn=root" method=sasl version=3 mech=EXTERNAL
[13/Mar/2023:16:16:22.331154393 +0100] conn=33 op=0 RESULT err=0
tag=97 nentries=0 wtime=0.55269 optime=0.004608598
etime=0.004661499 dn="cn=root"
[13/Mar/2023:16:16:22.331366318 +0100] conn=33 op=1 SRCH
base="dc=example,dc=com" scope=2
filter="(uniqueMember=cn=testuser,ou=People,dc=expample,dc=com)"
attrs="distinguishedName"
[13/Mar/2023:16:16:23.244139238 +0100] conn=33 op=2 UNBIND
[13/Mar/2023:16:16:23.24472 +0100] conn=33 op=1 RESULT err=0
tag=101 nentries=532 wtime=0.81512 optime=0.913360154
etime=0.913438519
[1
*Sent:* Wednesday, 8 March 2023 at 01:11
*From:* "William Brown" 
*To:* "389-users@lists.fedoraproject.org"
<389-users@lists.fedoraproject.org>
*Subject:* [389-users] Re: 2.x query performance problem
>
> Hi Claas,
> I do not recall a specific change 1.4.4 vs 2.0 that could
explain this.
> Do you confirm that 'uniqueMember' is indexed in equality on
both ? What are the SRCH records in the access logs (notes=A ?).
> On 2.0, it lasts 2sec, you may try to capture few pstacks that
would give some tips.
> regards
> thier

[389-users] Re: 2.x query performance problem

2023-03-14 Thread Thierry Bordaz

Hi Claas,

Indeed the same task is much faster (28 times) in 1.4.4

[14/Mar/2023:18:57:58.984319394 +0100] conn=4 op=1 SRCH 
base="dc=example,dc=com" scope=2 
filter="(uniqueMember=uid=group_entry1-0001,ou=people,dc=example,dc=com)" 
attrs="distinguishedName"
[14/Mar/2023:18:57:59.010935256 +0100] conn=4 op=1 RESULT err=0 tag=101 
nentries=1000 wtime=0.000209520 optime=0.026622409 etime=0.026828069


The reason seems to be the fix 
https://github.com/389ds/389-ds-base/issues/5170 that forces filter 
evaluation (even if it could be bypassed) on returned entries.
So a side effect of the fix is that when there is a large returned set 
of entries (~500 in your example) the filter evaluation is significant, 
especially with an attribute with a large valueset (like uniqueMember).


The fix prevents returning invalid entries, but the performance hit was 
not detected. We need to revisit this part of the fix.


best regards
thierry


On 3/14/23 17:21, Thierry Bordaz wrote:


Hi Claas,

Good, that means that the 2x manageDSAit is now fixed. I tried to 
reproduce locally (2.x) and I think I succeeded:


[14/Mar/2023:16:45:54.283507824 +0100] conn=1 op=1 SRCH 
base="dc=example,dc=com" scope=2 
filter="(uniqueMember=uid=group_entry1-0001,ou=people,dc=example,dc=com)" 
attrs="distinguishedName"
[14/Mar/2023:16:45:55.046440071 +0100] conn=1 op=1 RESULT err=0 
tag=101 nentries=1000 wtime=0.000199792 optime=0.762938352 
etime=0.763134856


There is 1000 groups, with each 1000 members so they are large, and 
uid=group_entry1_0001 belongs to all groups. The search last 0.7s that 
is much more than what we had in 1.4 (TBC).


Something surprising is that the server bypass the filter evaluation 
(when returning the entries). So it does not look like the filter 
contribute to the slowness.


best regards
thierry

On 3/14/23 14:25, Claas Vieler wrote:

Hello Thierry,
got newest version from https://github.com/389ds/389-ds-base dc565fd 
<https://github.com/389ds/389-ds-base/commit/dc565fdacbde6e1fd333213d707aa2c5bca9cadf>(389-Directory/2.3.2 
B2023.073.0958 )

I can confirm, manageDSAit makes no difference any more in query time,
got etimes with 0,9 sec after import and reindexing (with and without 
option)
but a little difference from 1.4.x is still present :) ( 0.0x sec vs 
0.9 sec)

thanks and best regards
Claas
*Sent:* Monday, 13 March 2023 at 17:55
*From:* "Thierry Bordaz" 
*To:* 389-users@lists.fedoraproject.org
*Subject:* [389-users] Re: 2.x query performance problem

Hi Claas,

First, thank you sooo much for your tests. This is really helpful.

So my understanding is that this same req was

  * [10, 30]ms in 1.4
  * [900, 1700]ms in 2.x
  o A possibility is that the filter evaluation (against the 532
returned entry) is the responsible of the 1700ms (without
manageDSAit

In short it looks like there is a significant (>30 times slower) 
regression in RHDS12 vs RHDS11 with that testcase. In RHDS12, the 
handling of referral adds a 2 times slower but it is possibly fixed 
with https://github.com/389ds/389-ds-base/issues/5598.


best regards
thierry

On 3/13/23 17:18, Claas Vieler wrote:

Hello William,
sorry, your mail was stuck in my spam filter, so I doesnt see it
here are the logs with and without option manageDSAit (as Thierry
mentioned)
without manageDSAit:
[13/Mar/2023:16:16:06.583644293 +0100] conn=32 fd=64 slot=64
connection from local to
/var/lib/dirsrv/slapd-389ds/slapd-389ds.socket
[13/Mar/2023:16:16:06.586619267 +0100] conn=32 AUTOBIND dn="cn=root"
[13/Mar/2023:16:16:06.589037720 +0100] conn=32 op=0 BIND
dn="cn=root" method=sasl version=3 mech=EXTERNAL
[13/Mar/2023:16:16:06.591155242 +0100] conn=32 op=0 RESULT err=0
tag=97 nentries=0 wtime=0.78559 optime=0.004658221
etime=0.004734544 dn="cn=root"
[13/Mar/2023:16:16:06.591326840 +0100] conn=32 op=1 SRCH
base="dc=example,dc=com" scope=2
filter="(uniqueMember=cn=testuser,ou=People,dc=example,dc=com)"
attrs="distinguishedName"
[13/Mar/2023:16:16:08.321020181 +0100] conn=32 op=1 RESULT err=0
tag=101 nentries=532 wtime=0.000114773 optime=1.729694222
etime=1.729803880
[13/Mar/2023:16:16:08.321992532 +0100] conn=32 op=2 UNBIND
[13/Mar/2023:16:16:08.327041073 +0100] conn=32 op=2 fd=64 closed
error - U1
with manageDSAit:
[13/Mar/2023:16:16:22.324132867 +0100] conn=33 fd=64 slot=64
connection from local to
/var/lib/dirsrv/slapd-389ds/slapd-389ds.socket
[13/Mar/2023:16:16:22.326616612 +0100] conn=33 AUTOBIND dn="cn=root"
[13/Mar/2023:16:16:22.328594648 +0100] conn=33 op=0 BIND
dn="cn=root" method=sasl version=3 mech=EXTERNAL
[13/Mar/2023:16:16:22.331154393 +0100] conn=33 op=0 RESULT err=0
tag=97 nentries=0 wtime=0.55269 optime=0.004608598
etime=0.004661499 dn

[389-users] Re: 2.x query performance problem

2023-03-14 Thread Thierry Bordaz

Hi Claas,

Good, that means that the 2x slowdown with manageDSAit is now fixed. I tried to 
reproduce locally (2.x) and I think I succeeded:


[14/Mar/2023:16:45:54.283507824 +0100] conn=1 op=1 SRCH 
base="dc=example,dc=com" scope=2 
filter="(uniqueMember=uid=group_entry1-0001,ou=people,dc=example,dc=com)" 
attrs="distinguishedName"
[14/Mar/2023:16:45:55.046440071 +0100] conn=1 op=1 RESULT err=0 tag=101 
nentries=1000 wtime=0.000199792 optime=0.762938352 etime=0.763134856


There are 1000 groups, each with 1000 members, so they are large, and 
uid=group_entry1_0001 belongs to all groups. The search lasts 0.7s, which 
is much more than what we had in 1.4 (TBC).


Something surprising is that the server bypasses the filter evaluation 
(when returning the entries). So it does not look like the filter 
contributes to the slowness.


best regards
thierry

On 3/14/23 14:25, Claas Vieler wrote:

Hello Thierry,
got newest version from https://github.com/389ds/389-ds-base dc565fd 
<https://github.com/389ds/389-ds-base/commit/dc565fdacbde6e1fd333213d707aa2c5bca9cadf>(389-Directory/2.3.2 
B2023.073.0958 )

I can confirm, manageDSAit makes no difference any more in query time,
got etimes with 0,9 sec after import and reindexing (with and without 
option)
but a little difference from 1.4.x is still present :)  ( 0.0x sec vs 
0.9 sec)

thanks and best regards
Claas
*Sent:* Monday, 13 March 2023 at 17:55
*From:* "Thierry Bordaz" 
*To:* 389-users@lists.fedoraproject.org
*Subject:* [389-users] Re: 2.x query performance problem

Hi Claas,

First, thank you sooo much for your tests. This is really helpful.

So my understanding is that this same req was

  * [10, 30]ms in 1.4
  * [900, 1700]ms in 2.x
  o A possibility is that the filter evaluation (against the 532
returned entry) is the responsible of the 1700ms (without
manageDSAit

In short it looks like there is a significant (>30 times slower) 
regression in RHDS12 vs RHDS11 with that testcase. In RHDS12, the 
handling of referral adds a 2 times slower but it is possibly fixed 
with https://github.com/389ds/389-ds-base/issues/5598.


best regards
thierry

On 3/13/23 17:18, Claas Vieler wrote:

Hello William,
sorry, your mail was stuck in my spam filter, so I doesnt see it
here are the logs with and without option manageDSAit (as Thierry
mentioned)
without manageDSAit:
[13/Mar/2023:16:16:06.583644293 +0100] conn=32 fd=64 slot=64
connection from local to
/var/lib/dirsrv/slapd-389ds/slapd-389ds.socket
[13/Mar/2023:16:16:06.586619267 +0100] conn=32 AUTOBIND dn="cn=root"
[13/Mar/2023:16:16:06.589037720 +0100] conn=32 op=0 BIND
dn="cn=root" method=sasl version=3 mech=EXTERNAL
[13/Mar/2023:16:16:06.591155242 +0100] conn=32 op=0 RESULT err=0
tag=97 nentries=0 wtime=0.78559 optime=0.004658221
etime=0.004734544 dn="cn=root"
[13/Mar/2023:16:16:06.591326840 +0100] conn=32 op=1 SRCH
base="dc=example,dc=com" scope=2
filter="(uniqueMember=cn=testuser,ou=People,dc=example,dc=com)"
attrs="distinguishedName"
[13/Mar/2023:16:16:08.321020181 +0100] conn=32 op=1 RESULT err=0
tag=101 nentries=532 wtime=0.000114773 optime=1.729694222
etime=1.729803880
[13/Mar/2023:16:16:08.321992532 +0100] conn=32 op=2 UNBIND
[13/Mar/2023:16:16:08.327041073 +0100] conn=32 op=2 fd=64 closed
error - U1
with manageDSAit:
[13/Mar/2023:16:16:22.324132867 +0100] conn=33 fd=64 slot=64
connection from local to
/var/lib/dirsrv/slapd-389ds/slapd-389ds.socket
[13/Mar/2023:16:16:22.326616612 +0100] conn=33 AUTOBIND dn="cn=root"
[13/Mar/2023:16:16:22.328594648 +0100] conn=33 op=0 BIND
dn="cn=root" method=sasl version=3 mech=EXTERNAL
[13/Mar/2023:16:16:22.331154393 +0100] conn=33 op=0 RESULT err=0
tag=97 nentries=0 wtime=0.55269 optime=0.004608598
etime=0.004661499 dn="cn=root"
[13/Mar/2023:16:16:22.331366318 +0100] conn=33 op=1 SRCH
base="dc=example,dc=com" scope=2
filter="(uniqueMember=cn=testuser,ou=People,dc=expample,dc=com)"
attrs="distinguishedName"
[13/Mar/2023:16:16:23.244139238 +0100] conn=33 op=2 UNBIND
[13/Mar/2023:16:16:23.24472 +0100] conn=33 op=1 RESULT err=0
tag=101 nentries=532 wtime=0.81512 optime=0.913360154
etime=0.913438519
[1
*Sent:* Wednesday, 8 March 2023 at 01:11
*From:* "William Brown" 
*To:* "389-users@lists.fedoraproject.org"
<389-users@lists.fedoraproject.org>
*Subject:* [389-users] Re: 2.x query performance problem
>
> Hi Claas,
> I do not recall a specific change 1.4.4 vs 2.0 that could
explain this.
> Do you confirm that 'uniqueMember' is indexed in equality on
both ? What are the SRCH records in the access logs (notes=A ?).
> On 2.0, it lasts 

[389-users] Re: 2.x query performance problem

2023-03-13 Thread Thierry Bordaz

Hi Claas,

First, thank you sooo much for your tests. This is really helpful.

So my understanding is that this same req was

 * [10, 30]ms in 1.4
 * [900, 1700]ms in 2.x
 o A possibility is that the filter evaluation (against the 532
   returned entries) is responsible for the 1700ms (without
   manageDSAit)

In short it looks like there is a significant (>30 times slower) 
regression in RHDS12 vs RHDS11 with that testcase. In RHDS12, the 
handling of referrals adds a further 2x slowdown, but it is possibly fixed 
with https://github.com/389ds/389-ds-base/issues/5598.


best regards
thierry

On 3/13/23 17:18, Claas Vieler wrote:

Hello William,
sorry, your mail was stuck in my spam filter, so I didn't see it.
Here are the logs with and without the manageDSAit option (as Thierry 
mentioned)

without manageDSAit:
[13/Mar/2023:16:16:06.583644293 +0100] conn=32 fd=64 slot=64 
connection from local to /var/lib/dirsrv/slapd-389ds/slapd-389ds.socket

[13/Mar/2023:16:16:06.586619267 +0100] conn=32 AUTOBIND dn="cn=root"
[13/Mar/2023:16:16:06.589037720 +0100] conn=32 op=0 BIND dn="cn=root" 
method=sasl version=3 mech=EXTERNAL
[13/Mar/2023:16:16:06.591155242 +0100] conn=32 op=0 RESULT err=0 
tag=97 nentries=0 wtime=0.78559 optime=0.004658221 
etime=0.004734544 dn="cn=root"
[13/Mar/2023:16:16:06.591326840 +0100] conn=32 op=1 SRCH 
base="dc=example,dc=com" scope=2 
filter="(uniqueMember=cn=testuser,ou=People,dc=example,dc=com)" 
attrs="distinguishedName"
[13/Mar/2023:16:16:08.321020181 +0100] conn=32 op=1 RESULT err=0 
tag=101 nentries=532 wtime=0.000114773 optime=1.729694222 
etime=1.729803880

[13/Mar/2023:16:16:08.321992532 +0100] conn=32 op=2 UNBIND
[13/Mar/2023:16:16:08.327041073 +0100] conn=32 op=2 fd=64 closed error 
- U1

with manageDSAit:
[13/Mar/2023:16:16:22.324132867 +0100] conn=33 fd=64 slot=64 
connection from local to /var/lib/dirsrv/slapd-389ds/slapd-389ds.socket

[13/Mar/2023:16:16:22.326616612 +0100] conn=33 AUTOBIND dn="cn=root"
[13/Mar/2023:16:16:22.328594648 +0100] conn=33 op=0 BIND dn="cn=root" 
method=sasl version=3 mech=EXTERNAL
[13/Mar/2023:16:16:22.331154393 +0100] conn=33 op=0 RESULT err=0 
tag=97 nentries=0 wtime=0.55269 optime=0.004608598 
etime=0.004661499 dn="cn=root"
[13/Mar/2023:16:16:22.331366318 +0100] conn=33 op=1 SRCH 
base="dc=example,dc=com" scope=2 
filter="(uniqueMember=cn=testuser,ou=People,dc=expample,dc=com)" 
attrs="distinguishedName"

[13/Mar/2023:16:16:23.244139238 +0100] conn=33 op=2 UNBIND
[13/Mar/2023:16:16:23.24472 +0100] conn=33 op=1 RESULT err=0 
tag=101 nentries=532 wtime=0.81512 optime=0.913360154 
etime=0.913438519

[1
*Sent:* Wednesday, 8 March 2023 at 01:11
*From:* "William Brown" 
*To:* "389-users@lists.fedoraproject.org" 
<389-users@lists.fedoraproject.org>

*Subject:* [389-users] Re: 2.x query performance problem
>
> Hi Claas,
> I do not recall a specific change 1.4.4 vs 2.0 that could explain this.
> Do you confirm that 'uniqueMember' is indexed in equality on both ? 
What are the SRCH records in the access logs (notes=A ?).
> On 2.0, it lasts 2sec, you may try to capture few pstacks that would 
give some tips.

> regards
> thierry

we need to see the exact filter that's being used, as well as the 
access log lines of the slow query, to really help here.


--
Sincerely,

William Brown

Senior Software Engineer,
Identity and Access Management
SUSE Labs, Australia
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Replication agreements creation order

2023-03-13 Thread Thierry Bordaz


On 3/13/23 08:50, Alberto Crescente wrote:


On 3/13/23 01:01, William Brown wrote:

Error log test-389-ds-3
[10/Mar/2023:18:27:29.275950935 +0100] - ERR - 
agmt="cn=agreement-test-389-ds-3-to-test-389-ds-1" 
(test-389-ds-1:636) - clcache_load_buffer - Can't locate CSN 
640b564d0003 in the changelog (DB rc=-12797). If replication 
stops, the consumer may need to be reinitialized.
[10/Mar/2023:18:27:29.277963218 +0100] - ERR - NSMMReplicationPlugin 
- changelog program - repl_plugin_name_cl - 
agmt="cn=agreement-test-389-ds-3-to-test-389-ds-1" 
(test-389-ds-1:636): CSN 640b564d0003 not found, we aren't 
as up to date, or we purged



Is there a method to know the correct sequence for defining the 
agreements?



Did you do full re-inits when you create from 1 -> 2 and 1 -> 3?


I think so. I executed the following commands sequence:

# On test-389-ds-1
dsconf Sezione -D "cn=Directory Manager" replication enable 
--suffix="dc=test,dc=com" --role="supplier" --replica-id=1 
--bind-dn="cn=replication manager,cn=config" --bind-passwd=""


# On test-389-ds-2
dsconf Sezione -D "cn=Directory Manager" replication enable 
--suffix="dc=test,dc=com" --role="supplier" --replica-id=2 
--bind-dn="cn=replication manager,cn=config" --bind-passwd=""


# On test-389-ds-1 -> test-389-ds-2
dsconf Sezione -D "cn=Directory Manager" repl-agmt create 
--suffix="dc=test,dc=com" --host="test-389-ds-2.pd.infn.it" --port=636 
--conn-protocol=LDAPS --bind-dn="cn=replication manager,cn=config" 
--bind-passwd="" --bind-method=SIMPLE --init 
agreement-test-389-ds-1-to-test-389-ds-2


# On test-389-ds-2 - > test-389-ds-1
dsconf Sezione -D "cn=Directory Manager" repl-agmt create 
--suffix="dc=test,dc=com" --host="test-389-ds-1.pd.infn.it" --port=636 
--conn-protocol=LDAPS --bind-dn="cn=replication manager,cn=config" 
--bind-passwd="" --bind-method=SIMPLE --init 
agreement-test-389-ds-2-to-test-389-ds-1



Hi Alberto,

You should not initialize ds2 from ds1 and then ds1 from ds2. You need to 
select your "main" ds (e.g. ds1), then init ds1->ds2 and ds1->ds3. Then 
you are done: they are all (ds1, ds2, ds3) talking about the same object 
(database) and replication will work.


The consequence of an init (e.g. ds1->ds2) is that it clears the 
changelog of the target (ds2), which is likely why you are seeing those 
alarming messages.
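
In practice that means running the initializations only from ds1, for example 
(a sketch; it reuses the agreement names from your commands below and assumes 
the 'repl-agmt init' subcommand of dsconf in your version):

# On test-389-ds-1 only: (re)initialize ds2 and ds3 from ds1
dsconf Sezione -D "cn=Directory Manager" repl-agmt init --suffix="dc=test,dc=com" agreement-test-389-ds-1-to-test-389-ds-2
dsconf Sezione -D "cn=Directory Manager" repl-agmt init --suffix="dc=test,dc=com" agreement-test-389-ds-1-to-test-389-ds-3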


best regards
thierry



# On test-389-ds-3
dsconf Sezione -D "cn=Directory Manager" replication enable 
--suffix="dc=test,dc=com" --role="supplier" --replica-id=3 
--bind-dn="cn=replication manager,cn=config" --bind-passwd=""


# On test-389-ds-1 -> test-389-ds-3
dsconf Sezione -D "cn=Directory Manager" repl-agmt create 
--suffix="dc=test,dc=com" --host="test-389-ds-3.pd.infn.it" --port=636 
--conn-protocol=LDAPS --bind-dn="cn=replication manager,cn=config" 
--bind-passwd="" --bind-method=SIMPLE --init 
agreement-test-389-ds-1-to-test-389-ds-3


# On test-389-ds-3 -> test-389-ds-1
dsconf Sezione -D "cn=Directory Manager" repl-agmt create 
--suffix="dc=test,dc=com" --host="test-389-ds-1.pd.infn.it" --port=636 
--conn-protocol=LDAPS --bind-dn="cn=replication manager,cn=config" 
--bind-passwd="" --bind-method=SIMPLE --init 
agreement-test-389-ds-3-to-test-389-ds-1


# On test-389-ds-2 -> test-389-ds-3
dsconf Sezione -D "cn=Directory Manager" repl-agmt create 
--suffix="dc=test,dc=com" --host="test-389-ds-3.pd.infn.it" --port=636 
--conn-protocol=LDAPS --bind-dn="cn=replication manager,cn=config" 
--bind-passwd="" --bind-method=SIMPLE --init 
agreement-test-389-ds-2-to-test-389-ds-3



Regards,

Alberto Crescente.




--
Sincerely,

William Brown

Senior Software Engineer,
Identity and Access Management
SUSE Labs, Australia
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 

[389-users] Re: 2.x query performance problem

2023-03-10 Thread Thierry Bordaz

Hi Claas,

pstack is a gdb wrapper command that dumps the backtraces of all threads; 
installing gdb should provide it.
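
For example (a rough sketch; it assumes the server process is named ns-slapd 
and that pstack/gdb are installed):

# capture a couple of stack snapshots while the slow search is running
pstack $(pidof ns-slapd) > /tmp/pstack.1
sleep 1
pstack $(pidof ns-slapd) > /tmp/pstack.2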


I suspect that the culprit could be the evaluation of the filter over 
the matching entries (~500 groups owning cn=sampleuser). Using 
ldapsearch, could you reproduce with the '-e manageDSAit' option and check 
if there is still a diff between 1.4.4 and 2.0?
Another investigation is to put a breakpoint on slapi_vattr_filter_test; 
with such a filter it should not be called.
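
Something along these lines (a sketch; the LDAPI socket path is the one from 
the 1.4.4 logs in this thread, and the ns-slapd process name is an assumption):

ldapsearch -H ldapi://%2Fvar%2Flib%2Fdirsrv%2Fslapd-389ds%2Fslapd-389ds.socket -Y EXTERNAL \
    -e manageDSAit -b "dc=example,dc=com" \
    "(uniqueMember=cn=sampleuser,ou=People,dc=example,dc=com)" dn
# in another terminal, check whether the filter is re-evaluated per returned entry
gdb -p $(pidof ns-slapd) -ex 'break slapi_vattr_filter_test' -ex 'continue'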


Just for confirmation: you indexed 'uniqueMember', but did you index it in 'eq' ?

best regards
thierry

On 3/10/23 14:47, Claas Vieler wrote:

Hello Thierry,
I can confirm index on 'uniqueMember' for both versions. I also tried 
to recreate and reindex 'uniqueMember', same result.

SRCH-records are inconspicuous, except high optime (no notes..)
What exactly do you want to see in the pstacks? Do you mean the output 
from the pstack tool?

regards
Claas
*Sent:* Tuesday, 7 March 2023 at 15:38
*From:* "Thierry Bordaz" 
*To:* 389-users@lists.fedoraproject.org
*Subject:* [389-users] Re: 2.x query performance problem

Hi Claas,

I do not recall a specific change 1.4.4 vs 2.0 that could explain this.

Do you confirm that 'uniqueMember' is indexed in equality on both ? 
What are the SRCH records in the access logs (notes=A ?).
On 2.0, it lasts 2sec, you may try to capture few pstacks that would 
give some tips.


regards
thierry

On 3/7/23 14:54, Claas Vieler wrote:

Hello,
we have a search performance problem when we migrated from
1.4.4.19 to 2.0.17.
Our environment has about 100k entries, about 15k users and about
10k groups, also big groups with thousands of users, and users
with thousands of group memberships. So I would call it a small instance.
On 1.4.x query performance is fine:
ldapsearch for
"(uniqueMember=cn=sampleuser,ou=People,dc=example,dc=com) dn " via
LDAPI on 1.4.x takes approx 0.01-0.03 sec.
This user is a member of approx. 500 groups.
I tested two migration methods:
1. via replication
After initializing replica, the same query takes about _8_ sec.
So I reindexed db (dsctl .. db2index) and get durations for the
query from 2-3 sec.
2. via ldif export/import
after importing, the same query takes about 2-3 sec
But even with 2-3 sec, we are talking about 2.x performance ten times
slower than 1.4.x.
Is this a known issue? I compared all cache settings and found no
differences.
I have no more ideas how to optimize this. Should we wait for 2.x
when it is adapted to the new LMDB?
thanks
Claas

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: 2.x query performance problem

2023-03-07 Thread Thierry Bordaz

Hi Claas,

I do not recall a specific change 1.4.4 vs 2.0 that could explain this.

Do you confirm that 'uniqueMember' is indexed in equality on both ? What 
do the SRCH records in the access logs show (notes=A ?).
On 2.0 it lasts 2 sec; you may try to capture a few pstacks, which could 
give some tips.
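
A quick way to spot unindexed or partially indexed searches in the access log 
(a sketch; the log path assumes the default layout for a 'slapd-389ds' instance):

grep "notes=" /var/log/dirsrv/slapd-389ds/access | tail -20
# notes=A marks a partially unindexed search, notes=U a fully unindexed one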


regards
thierry

On 3/7/23 14:54, Claas Vieler wrote:

Hello,
we have a search performance problem when we migrated from 1.4.4.19 to 
2.0.17.
Our environment has about 100k entries, about 15k users and about 10k 
groups, also big groups with thousands of users, and users with 
thousands of group memberships. So I would call it a small instance.

On 1.4.x query performance is fine:
ldapsearch for 
"(uniqueMember=cn=sampleuser,ou=People,dc=example,dc=com) dn " via 
LDAPI on 1.4.x takes approx 0,01-0,03 sec.

This user is member of approx. 500 groups.
I tested two migration methods:
1. via replication
After initializing replica, the same query takes about _8_ sec.
So I reindexed db (dsctl .. db2index) and get durations for the query 
from 2-3 sec.

2. via ldif export/import
after importing, the same query takes about 2-3 sec
But even with 2-3 sec, we are talking about 2.x performance ten times slower 
than 1.4.x.
Is this a known issue? I compared all cache settings and found no 
differences.
I have no more ideas how to optimize this. Should we wait for 2.x when 
it is adapted to the new LMDB?

thanks
Claas

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Wrong password hash algorithm returned

2022-11-24 Thread Thierry Bordaz
Could it be that the users with upgraded passwords have a non-default 
password policy ?


If there is nothing logged in the error logs, then an option, as you can 
reproduce on demand, is to gdb ns-slapd with a breakpoint on 
update_pw_encoding just before the user authenticates/searches.
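
Roughly (a sketch; it assumes gdb and debug symbols are available and that the 
process is named ns-slapd):

gdb -p $(pidof ns-slapd) -ex 'break update_pw_encoding' -ex 'continue'
# then bind/search as the test user and inspect the backtrace ('bt') when it stops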


thierry

On 11/24/22 13:15, Julian Kippels wrote:
No, the default password policy is set to SSHA, but it also was set to 
this before and then the hash had been upgraded to PBKDF2_SHA256. I 
don't quite know what to make of this, because when I look at the 
source code for version 1.4.4 in 389-ds-base/ldap/servers/slapd/pw.c 
lines 3520 and 3550 it would seem to me that the hash should never 
have been updated to the wrong setting. But it definitely did, or else the 
radius server would have continued working.


I install my servers with an Ansible Playbook that contains the 
following task:


command: dsconf -D "cn=Directory Manager" -w '{{ 
vault_dirsrv_directory_manager_password }}' ldap://localhost pwpolicy 
set --pwdscheme=SSHA


And when I checked using cockpit it was set to SSHA, but still some 
accounts were set to PBKDF2_SHA256.


Julian

On 24.11.22 at 12:19, Thierry Bordaz wrote:
That looks weird, it should update the user password. Is 
PBKDF2_SHA256 the default password policy ?


thierry

On 11/24/22 11:48, Julian Kippels wrote:
What exactly are the requirements for the hash upgrade to trigger? I 
have set up a test server, nsslapd-enable-upgrade-hash is set to 
"on" but I cannot get the hashes to convert from SSHA to PBKDF2_SHA256.


I do a bind with directory manager and search for testuser, which 
gives me the SSHA hash. Then I bind as testuser and perform a 
search. Then I bind as directory manager again and search for 
testuser again. The hash still remains as SSHA.


Julian

On 22.11.22 at 15:30, Thierry Bordaz wrote:


On 11/22/22 10:28, Julian Kippels wrote:

Hi Thierry,

that's a nasty catch…

On the one hand I think this is a nice feature to improve 
security, but on the other hand PBKDF2_SHA256 is the one algorithm 
that freeradius cannot cope with.


I suppose there is no way to revert all changed hashes after I set 
"nsslapd-enable-upgrade-hash" to "off"? Other than to reinitialize 
all affected suffixes from the export of the old servers?



Indeed this is a bad side effect of the default value :(

If you need to urgently fix those new {PBKDF2_SHA256}, then reinit 
is the way to go. Else you could change the default password 
storage to SSHA and keep nsslapd-enable-upgrade-hash=on. So that it 
will revert, on bind, to the SSHA hash.


thierry



Julian

On 22.11.22 at 09:56, Thierry Bordaz wrote:

Hi Julian,

This is likely the impact of 
https://github.com/389ds/389-ds-base/issues/2480 that was 
introduced in 1.4.x.


On 1.4.4 default hash is PBKDF2, this ticket upgrade hash of user 
entries during the user bind (enabled with 
nsslapd-enable-upgrade-hash).


best regards
thierry

On 11/22/22 09:25, Julian Kippels wrote:

Hi,

We have a radius server that reads the userPassword-attribute 
from ldap to authenticate users. There is a strange phenomenon 
where sometimes the answer from the ldap-server gives the wrong 
password hash algorithm. Our global password policy storage 
scheme is set to SSHA. When I perform a ldapsearch as directory 
manager I see that the password hash for a given user is 
{SSHA}inserthashedpasswordhere. But when I run tcpdump to see 
what our radius is being served I see 
{PBKDF2_SHA256}someotherhash around 50% of the time. Sometime 
another request from radius a few seconds after the first one 
gives the correct {SSHA} response.


This happened right after we updated from 389ds 1.2.2 to 1.4.4.
I am a bit stumped.

Thanks in advance,
Julian
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: 
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue



___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: 
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue









___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.

[389-users] Re: Wrong password hash algorithm returned

2022-11-24 Thread Thierry Bordaz
That looks weird, it should update the user password. Is PBKDF2_SHA256 
the default password policy ?


thierry

On 11/24/22 11:48, Julian Kippels wrote:
What exactly are the requirements for the hash upgrade to trigger? I 
have set up a test server, nsslapd-enable-upgrade-hash is set to "on" 
but I cannot get the hashes to convert from SSHA to PBKDF2_SHA256.


I do a bind with directory manager and search for testuser, which 
gives me the SSHA hash. Then I bind as testuser and perform a search. 
Then I bind as directory manager again and search for testuser again. 
The hash still remains as SSHA.


Julian

On 22.11.22 at 15:30, Thierry Bordaz wrote:


On 11/22/22 10:28, Julian Kippels wrote:

Hi Thierry,

that's a nasty catch…

On the one hand I think this is a nice feature to improve security, 
but on the other hand PBKDF2_SHA256 is the one algorithm that 
freeradius cannot cope with.


I suppose there is no way to revert all changed hashes after I set 
"nsslapd-enable-upgrade-hash" to "off"? Other than to reinitialize 
all affected suffixes from the export of the old servers?



Indeed this is a bad side effect of the default value :(

If you need to urgently fix those new {PBKDF2_SHA256}, then reinit is 
the way to go. Else you could change the default password storage to 
SSHA and keep nsslapd-enable-upgrade-hash=on. So that it will revert, 
on bind, to the SSHA hash.


thierry



Julian

On 22.11.22 at 09:56, Thierry Bordaz wrote:

Hi Julian,

This is likely the impact of 
https://github.com/389ds/389-ds-base/issues/2480 that was 
introduced in 1.4.x.


On 1.4.4 default hash is PBKDF2, this ticket upgrade hash of user 
entries during the user bind (enabled with 
nsslapd-enable-upgrade-hash).


best regards
thierry

On 11/22/22 09:25, Julian Kippels wrote:

Hi,

We have a radius server that reads the userPassword-attribute from 
ldap to authenticate users. There is a strange phenomenon where 
sometimes the answer from the ldap-server gives the wrong password 
hash algorithm. Our global password policy storage scheme is set 
to SSHA. When I perform a ldapsearch as directory manager I see 
that the password hash for a given user is 
{SSHA}inserthashedpasswordhere. But when I run tcpdump to see what 
our radius is being served I see {PBKDF2_SHA256}someotherhash 
around 50% of the time. Sometime another request from radius a few 
seconds after the first one gives the correct {SSHA} response.


This happened right after we updated from 389ds 1.2.2 to 1.4.4.
I am a bit stumped.

Thanks in advance,
Julian
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: 
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue



___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue





___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Wrong password hash algorithm returned

2022-11-22 Thread Thierry Bordaz


On 11/22/22 10:28, Julian Kippels wrote:

Hi Thierry,

that's a nasty catch…

On the one hand I think this is a nice feature to improve security, 
but on the other hand PBKDF2_SHA256 is the one algorithm that 
freeradius cannot cope with.


I suppose there is no way to revert all changed hashes after I set 
"nsslapd-enable-upgrade-hash" to "off"? Other than to reinitialize all 
affected suffixes from the export of the old servers?



Indeed this is a bad side effect of the default value :(

If you need to urgently fix those new {PBKDF2_SHA256}, then a reinit is 
the way to go. Otherwise you could change the default password storage 
scheme to SSHA and keep nsslapd-enable-upgrade-hash=on, so that it will 
revert, on bind, to the SSHA hash.
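
For example (a sketch; the pwdscheme change mirrors the dsconf command quoted 
elsewhere in this thread, and the 'config replace' form for 
nsslapd-enable-upgrade-hash is an assumption about your dsconf version):

# keep upgrades enabled but make the target scheme SSHA again
dsconf -D "cn=Directory Manager" ldap://localhost pwpolicy set --pwdscheme=SSHA
# or stop the on-bind hash upgrades entirely
dsconf -D "cn=Directory Manager" ldap://localhost config replace nsslapd-enable-upgrade-hash=off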


thierry



Julian

On 22.11.22 at 09:56, Thierry Bordaz wrote:

Hi Julian,

This is likely the impact of 
https://github.com/389ds/389-ds-base/issues/2480 that was introduced 
in 1.4.x.


On 1.4.4 default hash is PBKDF2, this ticket upgrade hash of user 
entries during the user bind (enabled with nsslapd-enable-upgrade-hash).


best regards
thierry

On 11/22/22 09:25, Julian Kippels wrote:

Hi,

We have a radius server that reads the userPassword-attribute from 
ldap to authenticate users. There is a strange phenomenon where 
sometimes the answer from the ldap-server gives the wrong password 
hash algorithm. Our global password policy storage scheme is set to 
SSHA. When I perform a ldapsearch as directory manager I see that 
the password hash for a given user is 
{SSHA}inserthashedpasswordhere. But when I run tcpdump to see what 
our radius is being served I see {PBKDF2_SHA256}someotherhash around 
50% of the time. Sometime another request from radius a few seconds 
after the first one gives the correct {SSHA} response.


This happened right after we updated from 389ds 1.2.2 to 1.4.4.
I am a bit stumped.

Thanks in advance,
Julian
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue



___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Wrong password hash algorithm returned

2022-11-22 Thread Thierry Bordaz

Hi Julian,

This is likely the impact of 
https://github.com/389ds/389-ds-base/issues/2480 that was introduced in 
1.4.x.


On 1.4.4 the default hash is PBKDF2; this ticket upgrades the hash of user 
entries during the user bind (enabled with nsslapd-enable-upgrade-hash).


best regards
thierry

On 11/22/22 09:25, Julian Kippels wrote:

Hi,

We have a radius server that reads the userPassword-attribute from 
ldap to authenticate users. There is a strange phenomenon where 
sometimes the answer from the ldap-server gives the wrong password 
hash algorithm. Our global password policy storage scheme is set to 
SSHA. When I perform a ldapsearch as directory manager I see that the 
password hash for a given user is {SSHA}inserthashedpasswordhere. But 
when I run tcpdump to see what our radius is being served I see 
{PBKDF2_SHA256}someotherhash around 50% of the time. Sometime another 
request from radius a few seconds after the first one gives the 
correct {SSHA} response.


This happened right after we updated from 389ds 1.2.2 to 1.4.4.
I am a bit stumped.

Thanks in advance,
Julian
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: [EXT]Re: Re: DNA Plugin creating duplicates

2022-08-18 Thread Thierry Bordaz

Hi Todd,


Thanks for your explanations, it makes sense.

To make it work, was it enough to add 'ORDERING integerOrderingMatch' to 
the attribute definitions (uidNumber/gidNumber) in usr/share/dirsrv/schema/* ?

Or did you have to add 'nsMatchingRule: integerOrderingMatch' to the 
index entry and reindex ?
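
The second option would look roughly like this (a sketch; the bind options and 
the instance name 'slapd-example' are assumptions, the index DN follows the 
cn=uidnumber entry quoted further down, and the same change would be repeated 
for gidnumber):

ldapmodify -D "cn=Directory Manager" -W -H ldap://localhost <<EOF
dn: cn=uidnumber,cn=index,cn=userroot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
add: nsMatchingRule
nsMatchingRule: integerOrderingMatch
EOF
# rebuild the affected indexes afterwards
dsctl slapd-example db2index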



best regards
thierry


On 8/17/22 5:47 PM, Merritt, Todd R - (tmerritt) wrote:

Hi Theirry,

It looks like that internal search was failing due to not having the 
appropriate matching rules in place. Once I added the indices it 
started to behave correctly. I guess it's probably a bug that it does 
not indicate that the search failed and/or continues to allocate the 
value without really knowing if it's a duplicate or not.


Thanks,
Todd

*From:* Thierry Bordaz 
*Sent:* Wednesday, August 17, 2022 8:30 AM
*To:* General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>; Merritt, Todd R - (tmerritt) 


*Subject:* [EXT]Re: [389-users] Re: DNA Plugin creating duplicates

*External Email*

Hi,


sorry to be late on that thread. DNA should prevent duplicate values 
via internal searches before allocating. If configured ranges from 
server are separated, DNA should not allocate duplicate. Is it 
possible that a direct update could set the attribute managed by DNA ?



regards
Thierry

On 8/11/22 9:59 PM, Merritt, Todd R - (tmerritt) wrote:
Yep, I just tested it out for haha's on my test directory instance 
after skimming the plugin source and seeing that it actually does
a range search using >=, <=. Sure enough that seems to have
resolved it. It did work properly in the current configuration once 
upon a time so either my directory grew enough that the range search 
started to time out without the index or the range search was 
introduced in an update at some point. Thanks for the pointer!


Todd

*From:* Patrick M Landry  
<mailto:patrick.lan...@louisiana.edu>

*Sent:* Thursday, August 11, 2022 12:55 PM
*To:* General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org> 
<mailto:389-users@lists.fedoraproject.org>

*Subject:* [EXT][389-users] Re: DNA Plugin creating duplicates

*External Email*

Sorry, I just double checked and I *do* have 
the integerOrderingMatch Matching Rule configured for uidNumber and 
gidNumber. I have no idea if that would make a difference for you or not.


--
Patrick Landry
Special Projects Engineer
University Computing Support Services
University of Louisiana at Lafayette
P.O. Box 43621
Lafayette, LA 70504
(337) 482-6402
patrick.lan...@louisiana.edu  <mailto:patrick.lan...@louisiana.edu>
–
Université des Acadiens


*From:* Merritt, Todd R - (tmerritt)  
<mailto:tmerr...@arizona.edu>

*Sent:* Thursday, August 11, 2022 2:26 PM
*To:* General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org> 
<mailto:389-users@lists.fedoraproject.org>

*Subject:* [389-users] Re: DNA Plugin creating duplicates
CAUTION: This email originated from outside of UL Lafayette. Do not 
click links or open attachments unless you recognize the sender and 
know the content is safe.


Thanks, that's a good thought. It looks like I do have the index set 
up though.


dn: cn=uidnumber,cn=index,cn=userroot,cn=ldbm 
database,cn=plugins,cn=config

cn: uidnumber
nsIndexType: eq
nsSystemIndex: False
objectClass: top
objectClass: nsIndex

Does the index also need to support nsMatchingRule: 
integerOrderingMatch for inequality searching?


Todd

*From:* Patrick M Landry  
<mailto:patrick.lan...@louisiana.edu>

*Sent:* Thursday, August 11, 2022 12:16 PM
*To:* General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org> 
<mailto:389-users@lists.fedoraproject.org>

*Subject:* [EXT][389-users] Re: DNA Plugin creating duplicates

*External Email*

It has been a long time since I set this up and I am running an older 
version of the server but I did find this in my notes:


    Before assigning a number to a new entry the DNA plugin searches
    the directory to ensure that the number is not already being
    used. For this reason indexes had to be created for all of the
    attributes which the DNA plugin can assign values to.


Perhaps that is it?
--
Patrick Landry
Special Projects Engineer
University Computing Support Services
University of Louisiana at Lafayette
P.O. Box 43621
Lafayette, LA 70504
(337) 482-6402
patrick.lan...@louisiana.edu  <mailto:patrick.lan...@louisiana.edu>
–
Université des Acadiens

--

[389-users] Re: DNA Plugin creating duplicates

2022-08-17 Thread Thierry Bordaz

Hi,


Sorry to be late on this thread. DNA should prevent duplicate values via 
internal searches before allocating. If the ranges configured on each server 
are separate, DNA should not allocate duplicates. Is it possible that a 
direct update set the attribute managed by DNA?



regards
Thierry

On 8/11/22 9:59 PM, Merritt, Todd R - (tmerritt) wrote:
Yep, I just tested it out for haha's on my test directory instance 
after skimming the plugin source and seeing that it actually does 
a range search using >=, <=. Sure enough, that seems to have resolved 
it. It did work properly in the current configuration once upon a time, 
so either my directory grew enough that the range search started to 
time out without the index, or the range search was introduced in an 
update at some point. Thanks for the pointer!


Todd

*From:* Patrick M Landry 
*Sent:* Thursday, August 11, 2022 12:55 PM
*To:* General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>

*Subject:* [EXT][389-users] Re: DNA Plugin creating duplicates

*External Email*

Sorry, I just double checked and I *do* have the integerOrderingMatch 
Matching Rule configured for uidNumber and gidNumber. I have no idea 
if that would make a difference for you or not.


--
Patrick Landry
Special Projects Engineer
University Computing Support Services
University of Louisiana at Lafayette
P.O. Box 43621
Lafayette, LA 70504
(337) 482-6402
patrick.lan...@louisiana.edu
–
Université des Acadiens


*From:* Merritt, Todd R - (tmerritt) 
*Sent:* Thursday, August 11, 2022 2:26 PM
*To:* General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>

*Subject:* [389-users] Re: DNA Plugin creating duplicates
CAUTION: This email originated from outside of UL Lafayette. Do not 
click links or open attachments unless you recognize the sender and 
know the content is safe.


Thanks, that's a good thought. It looks like I do have the index set 
up though.


dn: cn=uidnumber,cn=index,cn=userroot,cn=ldbm 
database,cn=plugins,cn=config

cn: uidnumber
nsIndexType: eq
nsSystemIndex: False
objectClass: top
objectClass: nsIndex

Does the index also need to support nsMatchingRule: 
integerOrderingMatch for inequality searching?


Todd

*From:* Patrick M Landry 
*Sent:* Thursday, August 11, 2022 12:16 PM
*To:* General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>

*Subject:* [EXT][389-users] Re: DNA Plugin creating duplicates

*External Email*

It has been a long time since I set this up and I am running an older 
version of the server but I did find this in my notes:


    Before assigning a number to a new entry the DNA plugin searches the
    directory to ensure that the number is not already being used. For
    this reason indexes had to be created for all of the attributes
    which the DNA plugin can assign values to.


Perhaps that is it?
--
Patrick Landry
Special Projects Engineer
University Computing Support Services
University of Louisiana at Lafayette
P.O. Box 43621
Lafayette, LA 70504
(337) 482-6402
patrick.lan...@louisiana.edu
–
Université des Acadiens


*From:* Merritt, Todd R - (tmerritt) 
*Sent:* Thursday, August 11, 2022 12:51 PM
*To:* 389-users@lists.fedoraproject.org 
<389-users@lists.fedoraproject.org>

*Subject:* [389-users] DNA Plugin creating duplicates
CAUTION: This email originated from outside of UL Lafayette. Do not 
click links or open attachments unless you recognize the sender and 
know the content is safe.


Hi,

I'm running 389ds 2.0.15 on a two node cluster in a multi master mode. 
I'm using the DNA plugin to generate unique uid numbers for new 
accounts. Each directory instance is assigned a unique range of uid 
numbers. It works in so far as it assigns a uid number when it gets 
the magic token but whatever is supposed to be verifying that the uid 
number is not already assigned is not working. I've cranked the error 
log level up, but I don't get anything in the logs that is helpful in 
determining why that validation is not working correctly.


# ansible-managed-uidnumber-generation, Distributed Numeric Assignment 
Plugin,

 plugins, config
dn: cn=ansible-managed-uidnumber-generation,cn=Distributed Numeric 
Assignment

 Plugin,cn=plugins,cn=config
objectClass: top
objectClass: dnaPluginConfig
cn: ansible-managed-uidnumber-generation
dnaType: uidNumber
dnaNextValue: 62009
dnaMaxValue: 131000
dnaMagicRegen: generate
dnaFilter: (objectclass=posixAccount)
dnaScope: ou=Accounts,dc=example,dc=edu
dnaSharedCfgDN: ou=ranges,ou=Accounts,dc=example,dc=edu

I'm stumped. Anyone have any direction on how to debug 

[389-users] Re: Retro Changelog trimming causes deadlock

2022-07-20 Thread Thierry Bordaz

Hi Kees,

Please install the debuginfo and debugsource rpms for 389-ds and slapi-nis.

Once they are installed, you can collect a complete backtrace and also 
collect information about db pages (db_stat -CA -N -h 
/var/lib/dirsrv/slapd-/db/).
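(For the backtrace, the usual procedure from the 389 troubleshooting FAQ is 
along these lines - the output path is just an example:)

# gdb -ex 'set confirm off' -ex 'set pagination off' \
      -ex 'thread apply all bt full' -ex 'quit' \
      /usr/sbin/ns-slapd $(pidof ns-slapd) > /tmp/stacktrace.txt 2>&1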


This deadlock is possibly 
https://bugzilla.redhat.com/show_bug.cgi?id=1751295, but that depends on your 
version of slapi-nis. You may hit it if slapi-nis is higher than 
0.56.0-12 and lower than 0.56.5.


regards
thierry


On 7/20/22 4:06 PM, Mark Reynolds wrote:


Hi Kees,

Can you provide the entire/complete stack trace?

Looks like it's the schema-compat plugin from Freeipa that is the 
issue.  We have a lot of problems with this plugin :-(  But without 
the full stack trace we can not confirm anything.


Thanks,

Mark

On 7/20/22 9:59 AM, Kees Bakker wrote:

Hi,

It's me again, about Retro Changelog trimming :-(. Last time it was 
about the maxage

configuration, for which I created an issue [1].

This time, the problem is that of a deadlock. When I have maxage set 
to 2d (the

default), then soon after restart the server starts to do the trimming.

Unfortunately it quickly runs into a deadlock. All accesses to the 
server (e.g ldapsearch)
hang forever. And because this is a replica, the other servers are 
complaining too.


Looking at a gdb stack trace I see the following.

$ sudo cat gdb-trace-ns-slapd-4.txt | grep -E '^(Thread|#[01]
.*lock)'
Thread 41 (Thread 0x7fefa3e72700 (LWP 170190)):
#0  0x7fef9f9b52f5 in pthread_rwlock_wrlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2750 in map_wrlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 40 (Thread 0x7feef147d700 (LWP 170184)):
Thread 39 (Thread 0x7feeef2f9700 (LWP 170178)):
Thread 38 (Thread 0x7feef1c7e700 (LWP 170171)):
Thread 37 (Thread 0x7feef247f700 (LWP 170169)):
Thread 36 (Thread 0x7feef37ff700 (LWP 170166)):
Thread 35 (Thread 0x7feef67ff700 (LWP 170165)):
Thread 34 (Thread 0x7feef75fe700 (LWP 170164)):
#0  0x7fef9f9b4ec2 in pthread_rwlock_rdlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2612 in map_rdlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 33 (Thread 0x7feef7dff700 (LWP 170163)):
Thread 32 (Thread 0x7feef89fe700 (LWP 170162)):
#0  0x7fef9f9b4ec2 in pthread_rwlock_rdlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2612 in map_rdlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 31 (Thread 0x7feef91ff700 (LWP 170161)):
#0  0x7fef9f9b4ec2 in pthread_rwlock_rdlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2612 in map_rdlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 30 (Thread 0x7feef9dfe700 (LWP 170160)):
#0  0x7fef9f9b4ec2 in pthread_rwlock_rdlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2612 in map_rdlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 29 (Thread 0x7feefa7ff700 (LWP 170159)):
Thread 28 (Thread 0x7feefb7ff700 (LWP 170158)):
Thread 27 (Thread 0x7feefc3fe700 (LWP 170157)):
#0  0x7fef9f9b4ec2 in pthread_rwlock_rdlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2612 in map_rdlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 26 (Thread 0x7feefcdff700 (LWP 170156)):
#0  0x7fef9f9b4ec2 in pthread_rwlock_rdlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2612 in map_rdlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 25 (Thread 0x7feefe1fe700 (LWP 170155)):
#0  0x7fef9f9b4ec2 in pthread_rwlock_rdlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2612 in map_rdlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 24 (Thread 0x7feefebff700 (LWP 170154)):
#0  0x7fef9f9b4ec2 in pthread_rwlock_rdlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2612 in map_rdlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 23 (Thread 0x7feeff7da700 (LWP 170153)):
Thread 22 (Thread 0x7feefffdb700 (LWP 170152)):
#0  0x7fef9f9b4ec2 in pthread_rwlock_rdlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2612 in map_rdlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 21 (Thread 0x7fef007dc700 (LWP 170151)):
Thread 20 (Thread 0x7fef00fdd700 (LWP 170150)):
#0  0x7fef9f9b4ec2 in pthread_rwlock_rdlock () at
target:/lib64/libpthread.so.0
#1  0x7fef8e9f2612 in map_rdlock () at
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 19 (Thread 0x7fef02fd9700 (LWP 170148)):
Thread 18 (Thread 0x7fef037da700 (LWP 170147)):
Thread 17 (Thread 0x7fef03fdb700 (LWP 170146)):
Thread 16 (Thread 0x7fef049e3700 (LWP 170145)):
Thread 15 (Thread 0x7fef051e4700 (LWP 

[389-users] Re: Retro Changelog trimming not working

2022-07-13 Thread Thierry Bordaz


On 7/13/22 5:08 PM, Kees Bakker wrote:



On 13-07-2022 16:31, Thierry Bordaz wrote:

 EXTERNAL E-MAIL 


On 7/13/22 3:18 PM, Kees Bakker wrote:

On 13-07-2022 13:39, Kees Bakker wrote:

On 13-07-2022 13:01, Kees Bakker wrote:

Hi,

[...]
In other words, with 1.4.3.28 I don't get to see the message with 
first_time and cur_time. I'm
quite puzzled how that can happen. The code is like this (stripped 
a bit):


    if (!ts.ts_s_trimming) {
    int must_trim = 0;
    /* See if we need to trim */
    /* Has enough time elapsed since our last check? */
    if (cur_time - ts.ts_s_last_trim >= (ts.ts_c_max_age)) {



I agree with you, the use of ts_c_max_age is wrong. The trimming 
mechanism should run periodically, whatever the maxage of the records. 
Here we should check against a trimming interval (e.g. every hour or every 
10 minutes). Would you please open a ticket for that?


thanks
thierry


    /* Is the first entry too old? */
    time_t first_time;
...
    slapi_log_err(SLAPI_LOG_PLUGIN, RETROCL_PLUGIN_NAME,
  "cltrim: ldrc=%d, first_time=%ld,
cur_time=%ld\n",
  ldrc, first_time, cur_time);
    if (LDAP_SUCCESS == ldrc && first_time > (time_t)0L &&
    first_time + ts.ts_c_max_age < now_maxage)
    {
    must_trim = 1;
    }
    }
    if (must_trim) {
...
    } else {
    slapi_log_err(SLAPI_LOG_PLUGIN, RETROCL_PLUGIN_NAME,
  "retrocl_housekeeping - changelog
does not need to be trimmed\n");
    }
    }

Puzzled, because I don't understand why "cur_time - 
ts.ts_s_last_trim >= (ts.ts_c_max_age)"

is FALSE.


Unless, ...
cur_time is the relative time since start of the server (not sure 
if this is true,

but the code in eq_call_all_rel() seems to suggest it)
ts.ts_s_last_trim is 0 at startup

Shouldn't we compare "first_time" with the current (non-relative) time?


First an answer to Pierre's question
I had nsslapd-changelogmaxage set to 480d. (The reason for that is 
that I just want to trim

the small subset.)

I changed maxage back to the default 2d. Now I see this message quickly 
after restart.


Jul 13 14:33:13 iparep4.example.com ns-slapd[141407]:
[13/Jul/2022:14:33:13.806790126 +0200] - DEBUG - DSRetroclPlugin
- cltrim: ldrc=0, first_time=1615990532, cur_time=12006600

I'm now convinced that the logic is (still) flawed. There is still a 
mix of UTC time and relative time.


    int must_trim = 0;
    /* See if we need to trim */
    /* Has enough time elapsed since our last check? */
    if (cur_time - ts.ts_s_last_trim >= (ts.ts_c_max_age)) {
<<<< wrong condition
    /* Is the first entry too old? */
    time_t first_time;
    time_t now_maxage = slapi_current_utc_time(); /*
real time for trimming candidates */



Hi Kees,

The mix of monotonic/real time was already an issue 
(https://github.com/389ds/389-ds-base/issues/4869) that was fixed in 
your version 1.4.3.28-6.


I think 'first_time' is coming from the first registered record in 
the retroCL.  I wonder if it could be because of an old retroCL DB 
containing an old value, not matching current monotonic time. Could 
you dump the first record of the retroCL (dbscan) ?
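(For reference, one way to dump it directly from the retro changelog backend - 
the path assumes the default bdb layout and the instance name is a placeholder:)

# dbscan -f /var/lib/dirsrv/slapd-INSTANCE/db/changelog/id2entry.db | head -40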


regards
thierry


Hi Thierry,

Here is an example of an old changelog entry.

[root@iparep4 ~]# ldapsearch -H ldaps://$HOSTNAME -D "cn=Directory 
Manager" -y p -LLL -o ldif-wrap=no -b 'changenumber=2,cn=changelog'

dn: changenumber=2,cn=changelog
objectClass: top
objectClass: changelogentry
objectClass: extensibleObject
targetuniqueid: 23f6980a-87f611eb-872490e8-ab7c8ee8
changeNumber: 2
targetDn: 
idnsName=110,idnsname=30.16.172.in-addr.arpa.,cn=dns,dc=example,dc=com

changeTime: 20210318152832Z
changeType: delete

Yes, I know about issue 4869. That's why I am now trying 1.4.3.28

But I want you to take another look at this line

if (cur_time - ts.ts_s_last_trim >= (ts.ts_c_max_age)) {

This code has the effect of using "maxage"  to determine when to do 
the trim run!!!.

This is wrong.

And a few lines further

    if (LDAP_SUCCESS == ldrc && first_time > (time_t)0L &&
    first_time + ts.ts_c_max_age < now_maxage)
    {
    must_trim = 1;
    }

Here "maxage" is used the way it should be used, as a maximum age for 
the changelog

entries. This is good.

If I set maxage to 480 then I have to wait until 480 days after 
restarting the server before it finally

decides to look at first_time.

Maybe now is also a good time to explain why I set a much higher 
maxage value.
When trimming finally k

[389-users] Re: Retro Changelog trimming not working

2022-07-13 Thread Thierry Bordaz


On 7/13/22 3:18 PM, Kees Bakker wrote:

On 13-07-2022 13:39, Kees Bakker wrote:

On 13-07-2022 13:01, Kees Bakker wrote:

Hi,

[...]
In other words, with 1.4.3.28 I don't get to see the message with 
first_time and cur_time. I'm
quite puzzled how that can happen. The code is like this (stripped a 
bit):


    if (!ts.ts_s_trimming) {
    int must_trim = 0;
    /* See if we need to trim */
    /* Has enough time elapsed since our last check? */
    if (cur_time - ts.ts_s_last_trim >= (ts.ts_c_max_age)) {
    /* Is the first entry too old? */
    time_t first_time;
...
    slapi_log_err(SLAPI_LOG_PLUGIN, RETROCL_PLUGIN_NAME,
  "cltrim: ldrc=%d, first_time=%ld,
cur_time=%ld\n",
  ldrc, first_time, cur_time);
    if (LDAP_SUCCESS == ldrc && first_time > (time_t)0L &&
    first_time + ts.ts_c_max_age < now_maxage)
    {
    must_trim = 1;
    }
    }
    if (must_trim) {
...
    } else {
    slapi_log_err(SLAPI_LOG_PLUGIN, RETROCL_PLUGIN_NAME,
  "retrocl_housekeeping - changelog does
not need to be trimmed\n");
    }
    }

Puzzled, because I don't understand why "cur_time - 
ts.ts_s_last_trim >= (ts.ts_c_max_age)"

is FALSE.


Unless, ...
cur_time is the relative time since start of the server (not sure if 
this is true,

but the code in eq_call_all_rel() seems to suggest it)
ts.ts_s_last_trim is 0 at startup

Shouldn't we compare "first_time" with the current (non-relative) time?


First an answer to Pierre's question
I had nsslapd-changelogmaxage set to 480d. (The reason for that is 
that I just want to trim

the small subset.)

I changed maxage back to the default 2d. Now I see this message quickly 
after restart.


Jul 13 14:33:13 iparep4.example.com ns-slapd[141407]:
[13/Jul/2022:14:33:13.806790126 +0200] - DEBUG - DSRetroclPlugin -
cltrim: ldrc=0, first_time=1615990532, cur_time=12006600

I'm now convinced that the logic is (still) flawed. There is still a 
mix of UTC time and relative time.


    int must_trim = 0;
    /* See if we need to trim */
    /* Has enough time elapsed since our last check? */
    if (cur_time - ts.ts_s_last_trim >= (ts.ts_c_max_age))
{   wrong condition
    /* Is the first entry too old? */
    time_t first_time;
    time_t now_maxage = slapi_current_utc_time(); /* real
time for trimming candidates */



Hi Kees,

The mix of monotonic/real time was already an issue 
(https://github.com/389ds/389-ds-base/issues/4869) that was fixed in 
your version 1.4.3.28-6.


I think 'first_time' is coming from the first registered record in the 
retroCL.  I wonder if it could be because of an old retroCL DB 
containing an old value, not matching current monotonic time. Could you 
dump the first record of the retroCL (dbscan) ?


regards
thierry



--
Kees


___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: 389 scalability

2022-05-19 Thread Thierry Bordaz


On 5/19/22 1:51 AM, William Brown wrote:



On 19 May 2022, at 00:48, Morgan Jones  wrote:

Hello Everyone,

We are merging our student directory (about 200,000 entries) into our existing 
employee directory (about 25,000 entries).

They're a pair of multi-master replicas on virtual hardware that can easily be 
expanded if needed though hardware performance hasn't been an issue.

Does this justify creating a separate database for students?  Aside from basic 
tuning, are there any big pitfalls we should look out for?

I think extra databases create more administration overhead than benefit. The benefit of 
extra databases is "improved write performance", generally speaking. But the 
trade-off is that subtree queries are more complex for the server to evaluate.

It's far easier for you the admin, and also support staff if you keep it as a 
single db. We have done huge amounts to improve parallel reads in recent years, 
so you should see large gains when you change from 1.3 to 1.4 or 2.0 :)


I fully agree with William's points. Just a comment if you decide to merge 
the student entries into the employee directory: either you ADD the 200K 
student entries into your existing directory, which will take several hours, 
or you use an import (e.g. merge the employee/student LDIFs and reimport), 
which will require a re-init of the topology. The second option would be the 
fastest, as an import + re-init of a db with 225K entries should be fast.
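(A rough sketch of the offline import path - instance, backend and file names 
are placeholders, and the exact tool flags vary between versions, so check 
db2ldif(8)/ldif2db(8) or the equivalent dsctl subcommands on 1.4:)

# with the instance stopped, export the existing employee backend:
db2ldif -Z INSTANCE -n userRoot -a /tmp/employees.ldif
# merge/concatenate the student entries into that LDIF, then import it:
ldif2db -Z INSTANCE -n userRoot -i /tmp/merged.ldif
# finally re-initialize the other replica from this freshly imported server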




We're still on CentOS 7 for the time being:
[root@prdds21 morgan]# rpm -qa|grep 389
389-admin-1.1.46-4.el7.x86_64
389-console-1.1.19-6.el7.noarch
389-dsgw-1.1.11-5.el7.x86_64
389-admin-console-1.1.12-1.el7.noarch
389-ds-1.2.2-6.el7.noarch
389-ds-base-libs-1.3.10.2-13.el7_9.x86_64
389-ds-base-1.3.10.2-13.el7_9.x86_64
389-adminutil-1.1.22-2.el7.x86_64
389-admin-console-doc-1.1.12-1.el7.noarch
389-ds-console-doc-1.2.16-1.el7.noarch
389-ds-console-1.2.16-1.el7.noarch
[root@prdds21 morgan]#

thank you,

-morgan



___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

--
Sincerely,

William Brown

Senior Software Engineer,
Identity and Access Management
SUSE Labs, Australia
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: Absolute True and False Filters

2022-05-12 Thread Thierry Bordaz


On 5/12/22 3:13 PM, Mike Mercier wrote:

Hello,

I am attempting to use the Microsoft ECMA Connector (Azure AD Connect) 
to synchronize user information from Azure AD to 389DS.  Microsoft 
does claim 389DS is supported, see:


https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure 



While configuring the ECMA connector wizard, the 'Global' page 
displays the following message:


Mandatory Features Not Found:
[1.3.1.4.1.4203.1.5.3] True/False Filters


Hello,

My understanding of [1] is that it is quite common for an LDAP server 
not to report this feature, and you are right that 389ds does not report it.
It is mentioned that "If you can import more than one object type, then 
your LDAP server supports this feature." "Object type" appears to refer to 
the objectclass attribute of an LDAP entry. 389ds supports entries with 
multiple objectclass values, so even if the feature is not listed, it looks 
to me like it is supported.


[1] 
https://docs.microsoft.com/en-us/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap#required-controls-and-features
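(For reference, the "True/False filters" feature refers to the RFC 4526 
absolute filters, written "(&)" for True and "(|)" for False. A quick way to 
check whether a server accepts them - the suffix here is a placeholder - is a 
base search such as:)

ldapsearch -x -H ldap://localhost -D "cn=Directory Manager" -W \
    -b "dc=example,dc=com" -s base "(&)" dn

If both the client-side parser and the server accept the filter, the base 
entry is returned; otherwise a filter or protocol error comes back.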


regards
Thierry



I believe the below command displays what is supported?
[root@localhost ~]# ldapsearch -H ldap://localhost -x -s base -b "" +

I do not see the specific OID from above listed in the output.  Is the 
feature supported by 389DS?  Is there a plugin available that will add 
support?


Anyone have any experience trying to sync information 
between 389DS and Azure AD?


Thanks,
Mike

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: 389DS + Ubuntu

2022-03-31 Thread Thierry Bordaz


On 3/31/22 2:25 PM, iyagomailru Alexander Yakovlev wrote:

Mark, Thierry, thank You.
I would really like to execute this command, but the 'config' option is missing 
in my version of 389-ds, so I was asking for advice on how to configure it in 
another way.


I am not an expert on mdb use, but I think the switch is simply to stop DS 
and edit dse.ldif:


dn: cn=config,cn=ldbm database,cn=plugins,cn=config
nsslapd-backend-implement: mdb

Then restart ds.




Here is the result of the command executing:
root@389ldap-test:~# dsconf slapd-instance backend config set --db_lib mdb
usage: dsconf instance backend [-h] {list,get,get_dn,create,delete} ...
dsconf instance backend: error: invalid choice: 'config' (choose from 'list', 
'get', 'get_dn', 'create', 'delete')


Version of 389-ds:
root@389ldap-test:~# apt show 389-ds
Package: 389-ds
Version: 1.3.7.10-1ubuntu1
Priority: optional
Section: universe/net
Source: 389-ds-base
Origin: Ubuntu
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: 389DS + Ubuntu

2022-03-31 Thread Thierry Bordaz

Hi,

I think the command should be 'dsconf instance backend config set 
--db_lib mdb'.


Now I am unsure if it is sufficient to switch to mdb database. Pierre ?

regards
thierry

On 3/31/22 12:20 PM, iyagomailru Alexander Yakovlev wrote:

More precisely, there is no 'config' option for the backend subcommand:
# dsconf instance backend config
usage: dsconf instance backend [-h] {list,get,get_dn,create,delete} ...
dsconf instance backend: error: invalid choice: 'config' (choose from 'list', 
'get', 'get_dn', 'create', 'delete')
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: unconventional replication, alma 8 master to centos 7 slave: Unable to acquire replica: error: no such replica

2022-03-24 Thread Thierry Bordaz


On 3/24/22 2:17 PM, Mark Reynolds wrote:


On 3/24/22 8:38 AM, Lewis Robson wrote:

Hello all,

I am working to set up multi-master replication across two different OS 
versions (Alma 8 and CentOS 7). This means that the 389 on Alma 8 is managed 
with dsidm and Cockpit, and the 389 on CentOS 7 is managed with 389-console 
and ldap commands.



The Alma 8 directory tree is how we want it to be: users inside, all 
working as expected.


The 7 directory tree is the standard default layout created when 389ds is set up.


On the 7 machine (slave) I have the bind DN information of 
cn=replication manager,cn=config.
This has been set up on the 8 machine via Cockpit in the replication 
agreement, to connect with these credentials. An ldapsearch lets me 
connect with them, and purposely typing the username or password wrong 
for the agreement gives a different error, so I'm confident the account 
is okay.



The error I see, when I try to initialize the agreement from the 8 
Cockpit view to the slave machine, is:


ERR - NSMMReplicationPlugin - 
multimaster_extop_StartNSDS50ReplicationRequest - conn=289 op=3 
replica="unknown": Unable to acquire replica: error: no such replica


A couple of things here: are the RHEL 7 servers set up as replication 
consumers?  Yes, you need the replication manager set up, but the suffix 
needs to be enabled for replication as well.  Can you do an ldapsearch 
on cn=config searching for "objectclass=nsds5replica" and share the 
output?



I agree with Mark, an issue is likely in the replication agreement 
definition. According to the error message, it looks like the consumer 
(CentOS 7) cannot retrieve the replica root from the replication extended 
operation. A possibility is that the replication agreement (on Alma 8) is 
missing 'nsDS5ReplicaRoot'.
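(A quick way to check both sides, assuming Directory Manager credentials:)

# on the CentOS 7 consumer: is the suffix enabled for replication?
ldapsearch -x -D "cn=Directory Manager" -W -b cn=config \
    "(objectclass=nsds5replica)" nsDS5ReplicaRoot nsDS5ReplicaType
# on the Alma 8 supplier: does the agreement carry the expected replica root?
ldapsearch -x -D "cn=Directory Manager" -W -b cn=config \
    "(objectclass=nsds5replicationagreement)" nsDS5ReplicaRoot nsDS5ReplicaHost nsDS5ReplicaPort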





My other concern is about the error message above, is that from a RHEL 
8 replica?  If so, this indicates replication is not set up properly on 
that suffix, but you say all the RHEL 8 replicas are working.  Are you 
using multiple backends/suffixes or just one? If you are using 
multiple backends then maybe you have a mismatch in your replication 
config?  Because that error about an "unknown" replica means the suffix 
was not configured for replication. Was this error from a RHEL 8 
replica?  If so, run these commands:


Change the suffix value to your suffix:

# dsconf slapd-YOUR_INSTANCE replication get --suffix dc=example,dc=com

# dsconf slapd-YOUR_INSTANCE repl-agmt list --suffix dc=example,dc=com

If nothing sticks out, try turning on replication logging 
(nsslapd-errorlog-level: 8192) - you can do this from the Cockpit UI 
as well.


Thanks,

Mark





Does anyone know anything that I could check for the error to get 
around this?



Thankyou kindly.

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure



___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: Replication Problem

2022-01-31 Thread Thierry Bordaz
It would be better to have the error logs (with replication debug enabled) 
from both instances, A and B.


Also, in the logs, which server is A and which is B (ldapserver1 and ldapserver2)?

thanks


On 1/31/22 9:02 AM, Mansoor Raeesi wrote:

That is problem with CentOS paste which expires pastes within 24 hours!

you may check through this link:

https://pastebin.ubuntu.com/p/ktN5HsBrNf/


Thanks



On 1/31/22 11:24, Thierry Bordaz wrote:

Hi,

It returns 404 "page not found"

regards
thierry

On 1/30/22 7:58 AM, Mansoor Raeesi wrote:
Thanks for your kind reply, logging is enabled already and this is 
output of log:


https://paste.centos.org/view/a39010cd


On 1/26/22 12:14, Thierry Bordaz wrote:

Hi,

There are several possible causes why the replication agreement 
failed to complete the total update. I suggest you enable the 
replication debug log on A and B 
(https://www.port389.org/docs/389ds/FAQ/faq.html#Troubleshooting), 
before retrying a total update. If it is the first time you are 
trying to init B, a common failure is that the RA fails to bind 
(credentials are not properly set).


Because of the size of the DB, another option is to init B via an 
offline import (on A, export the DB in LDIF format with replication 
data, send the LDIF file to B, and import the LDIF file on B). This 
will likely speed up the initialization of B, but you will still need 
to fix the RA A->B.


regards
thierry

On 1/26/22 6:41 AM, Mansoor Raeesi wrote:

Hi

I've recently started 2 different instances on different servers 
with 1.4.4.17 version of 389-ds. servers can see each other.


Server A has a database of around 28GB & both servers are started in 
Master mode. I've created an agreement on server A to be 
replicated with server B on port 389. When I initialize the 
agreement, after a while I receive this error in the web console:


ERR - NSMMReplicationPlugin - repl5_tot_run - Total update failed 
for replica "agmt="cn=ServerA-to-ServerB" (ServerB:389)", error (-1)


Looking forward for your kind help.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: 
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure







___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: Replication Problem

2022-01-26 Thread Thierry Bordaz

Hi,

There are several possible causes why the replication agreement failed to 
complete the total update. I suggest you enable the replication debug log on 
A and B 
(https://www.port389.org/docs/389ds/FAQ/faq.html#Troubleshooting), 
before retrying a total update. If it is the first time you are trying 
to init B, a common failure is that the RA fails to bind (credentials are 
not properly set).


Because of the size of the DB, another option is to init B via an 
offline import (on A, export the DB in LDIF format with replication data, 
send the LDIF file to B, and import the LDIF file on B). This will likely 
speed up the initialization of B, but you will still need to fix the RA A->B.
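(A rough sketch of that offline path - instance, backend and paths are 
placeholders, and the exact flags vary between versions, so check 
db2ldif(8)/ldif2db(8); on 1.4 the dsctl db2ldif/ldif2db subcommands provide 
the same offline export/import:)

# on A, with the server stopped (-r exports the data with its replication metadata/RUV):
db2ldif -Z INSTANCE -n userRoot -r -a /tmp/A-repl.ldif
# copy /tmp/A-repl.ldif to B, then on B, with its server stopped:
ldif2db -Z INSTANCE -n userRoot -i /tmp/A-repl.ldif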


regards
thierry

On 1/26/22 6:41 AM, Mansoor Raeesi wrote:

Hi

I've recently started 2 different instances on different servers with 
1.4.4.17 version of 389-ds. servers can see each other.


Server A has a database of around 28GB & both servers are started in 
Master mode. I've created an agreement on server A to be replicated 
with server B on port 389. When I initialize the agreement, after a 
while I receive this error in the web console:


ERR - NSMMReplicationPlugin - repl5_tot_run - Total update failed for 
replica "agmt="cn=ServerA-to-ServerB" (ServerB:389)", error (-1)


Looking forward for your kind help.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: Log4j patch/update for 1.3.x

2021-12-21 Thread Thierry Bordaz

Hi,

You are right, only the Java console could be affected, but none of the RHDS 
versions (including 1.3) is impacted by the Log4j CVE 
(https://access.redhat.com/security/vulnerabilities/RHSB-2021-009). So 
there is no plan to release a patch in 1.3 for this CVE.


best regards
thierry

On 12/21/21 1:07 AM, William Brown wrote:

Only the 389 console would be affected, and I think RH are the only group 
supporting that. Generally they are very good about patching and updates, but I 
don't have details for this.


On 21 Dec 2021, at 06:38, Paul Whitney  wrote:

Will there be a patch release for 1.3.x to address these Log4j vulnerabilities?

Paul M. Whitney, RHCSA, CISSP
Chesapeake IT Consulting, Inc.
2680 Tobacco Rd
Chesapeake Beach, MD 20732

Work: 443-492-2872
Cell:   410.493.9448
Email: paul.whit...@chesapeake-it.com
CONFIDENTIALITY NOTICE
The information contained in this facsimile or electronic message is 
confidential information intended for the use of the individual or entity named 
above. If the reader of this message is not the intended recipient, or an 
employee or agent responsible for delivering this facsimile message to the 
intended recipient, you are hereby notified that any dissemination, or copying 
of this communication is strictly prohibited. If this message contains 
non-public personal information about any consumer or customer of the sender or 
intended recipient, you are further prohibited under penalty of law from using 
or disclosing the information to any third party by provisions of the federal 
Gramm-Leach-Bliley Act. If you have received this facsimile or electronic 
message in error, please immediately notify us by telephone and return or 
destroy the original message to assure that it is not read, copied, or 
distributed by others.

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

--
Sincerely,

William Brown

Senior Software Engineer, Identity and Access Management
SUSE Labs, Australia
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: 389-DS Internal unindexed search

2021-11-15 Thread Thierry Bordaz


On 11/15/21 7:39 PM, Ludwig Krispenz wrote:



On 15.11.21 15:55, Mark Reynolds wrote:



On 11/15/21 9:46 AM, Pierre Rogier wrote:
I feel a bit weird that we try to perform substring searches in the 
referential integrity plugin.

I would rather expect equality searches.
Does anyone know why the * are needed ?


It is used for MODRDN's like Thierry stated.  The code states that we 
use the substring filter to find the children memberships of the old 
DN so they can be properly cleaned up. I'm not sure this can be 
optimized to /not/ use a substring filter


Yes, if the MODRDN is applied to an entry with children, like 
in Thierry's example, I don't think the subtree search can be avoided. 
But when the referential integrity function is applied, it should be 
known whether the renamed entry has children or is a leaf node - and that 
could avoid the substring searches in most cases.



Hellooo Ludwig,

Thanks for this nice and simple idea. I opened 
https://github.com/389ds/389-ds-base/issues/5004.


best regards
thierry


Regards,

Ludwig


Mark



On Mon, Nov 15, 2021 at 3:22 PM Thierry Bordaz  
wrote:


Hi,

The referential integrity plugins uses internal searches to
retrieve
which entries referred to the target entry. The plugin uses
equality
searches, that are indexed, but for MODRDN it uses substring
filter. As
membership attributes (member, uniquemember,...) are not indexed in
substring, each MODRDN triggers 4 (4 membership attributes)
unindexed
searches. referint being a betxn plugin, unindexed search are
prone to
create db retries and could be related to replication failure.

I would recommand that you try adding substring index to the
member,
uniquemember, owner and seealso.

regards
thierry

On 11/15/21 3:00 PM, Ciber dgtnt wrote:
> Hi, I have a problem in 389-ds version 1.3.10.2-10 intalled on
Centos7 , we have a multimaster enviroment with consumers and
suppliers, we have referential integrity plugin to control the
group members. In the master node where we have the referential
integrity pluggin enabled, ocasionally we get this message in
the error log :
>
> NOTICE - ldbm_back_search - Internal unindexed search: source
(cn=referential integrity postoperation,cn=plugins,cn=config)
search base="c=es" scope=2 filter="(member=*uid=dmarmedr,ou=Baja
de cuentas,c=es)" conn=0 op=0
> NOTICE - ldbm_back_search - Internal unindexed search: source
(cn=referential integrity postoperation,cn=plugins,cn=config)
search base="c=es" scope=2
filter="(uniquemember=*uid=dmarmedr,ou=Baja de cuentas,c=es)"
conn=0 op=0
> NOTICE - ldbm_back_search - Internal unindexed search: source
(cn=referential integrity postoperation,cn=plugins,cn=config)
search base="c=es" scope=2 filter="(owner=*uid=dmarmedr,ou=Baja
de cuentas,c=es)" conn=0 op=0
> NOTICE - ldbm_back_search - Internal unindexed search: source
(cn=referential integrity postoperation,cn=plugins,cn=config)
search base="c=es" scope=2
filter="(seeAlso=*uid=dmarmedr,ou=Baja de cuentas,c=es)" conn=0 op=0
>
> And when it happends the slapd proccess takes up to 100% CPU
usage and we see the message you can see bellow, because this
master node begins to avoid replication sessions from other
master nodes:
>
> ERR - NSMMReplicationPlugin - process_postop - Failed to apply
update (615f51ea0023) error (51). Aborting replication
session(conn=1145 op=4)
>
> All those internal unindexed searchs have filters with the
attributes configured in the referential integrity plugin,
member=*, uniquemember=*, owner=* and seealso=*. Those
attributes has equality and presence indexes, we don't
understand why the log says "internal unindexed search".
>
> Can anyone help me with this problem?, maby is it necessary
other type of index?
>
> Thanks & Regards
> ___
> 389-users mailing list -- 389-users@lists.fedoraproject.org
> To unsubscribe send an email to
389-users-le...@lists.fedoraproject.org
> Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives:

https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
> Do not reply to spam on the list, report it:
https://pagure.io/fedora-infrastructure
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct:
https://docs.fedoraproject.org/en-

[389-users] Re: 389-DS Internal unindexed search

2021-11-15 Thread Thierry Bordaz


On 11/15/21 3:55 PM, Mark Reynolds wrote:



On 11/15/21 9:46 AM, Pierre Rogier wrote:
I feel a bit weird that we try to perform substring searches in the 
referential integrity plugin.

I would rather expect equality searches.
Does anyone know why the * are needed ?


IIUC it is to handle this testcase

dn: ou=group, dc=example,dc=com
member: uid=foo,ou=people,dc=example,dc=com

dn: ou=people,dc=example,dc=com
ou: people

dn: uid=foo,ou=people,dc=example,dc=com
uid: foo

MODRDN(ou=people,dc=example,dc=com -> ou=new_people,dc=com)

Then this code is to get

dn: ou=group,dc=example,dc=com
member: uid=foo,ou=new_people,dc=com


It is used for MODRDN's like Thierry stated.  The code states that we 
use the substring filter to find the children memberships of the old 
DN so they can be properly cleaned up. I'm not sure this can be 
optimized to /not/ use a substring filter


Mark



On Mon, Nov 15, 2021 at 3:22 PM Thierry Bordaz  
wrote:


Hi,

The referential integrity plugins uses internal searches to retrieve
which entries referred to the target entry. The plugin uses equality
searches, that are indexed, but for MODRDN it uses substring
filter. As
membership attributes (member, uniquemember,...) are not indexed in
substring, each MODRDN triggers 4 (4 membership attributes)
unindexed
searches. referint being a betxn plugin, unindexed search are
prone to
create db retries and could be related to replication failure.

I would recommand that you try adding substring index to the member,
uniquemember, owner and seealso.

regards
thierry

On 11/15/21 3:00 PM, Ciber dgtnt wrote:
> Hi, I have a problem in 389-ds version 1.3.10.2-10 intalled on
Centos7 , we have a multimaster enviroment with consumers and
suppliers, we have referential integrity plugin to control the
group members. In the master node where we have the referential
integrity pluggin enabled, ocasionally we get this message in the
error log :
>
> NOTICE - ldbm_back_search - Internal unindexed search: source
(cn=referential integrity postoperation,cn=plugins,cn=config)
search base="c=es" scope=2 filter="(member=*uid=dmarmedr,ou=Baja
de cuentas,c=es)" conn=0 op=0
> NOTICE - ldbm_back_search - Internal unindexed search: source
(cn=referential integrity postoperation,cn=plugins,cn=config)
search base="c=es" scope=2
filter="(uniquemember=*uid=dmarmedr,ou=Baja de cuentas,c=es)"
conn=0 op=0
> NOTICE - ldbm_back_search - Internal unindexed search: source
(cn=referential integrity postoperation,cn=plugins,cn=config)
search base="c=es" scope=2 filter="(owner=*uid=dmarmedr,ou=Baja
de cuentas,c=es)" conn=0 op=0
> NOTICE - ldbm_back_search - Internal unindexed search: source
(cn=referential integrity postoperation,cn=plugins,cn=config)
search base="c=es" scope=2 filter="(seeAlso=*uid=dmarmedr,ou=Baja
de cuentas,c=es)" conn=0 op=0
>
> And when it happends the slapd proccess takes up to 100% CPU
usage and we see the message you can see bellow, because this
master node begins to avoid replication sessions from other
master nodes:
>
> ERR - NSMMReplicationPlugin - process_postop - Failed to apply
update (615f51ea0023) error (51). Aborting replication
session(conn=1145 op=4)
>
> All those internal unindexed searchs have filters with the
attributes configured in the referential integrity plugin,
member=*, uniquemember=*, owner=* and seealso=*. Those attributes
has equality and presence indexes, we don't understand why the
log says "internal unindexed search".
>
> Can anyone help me with this problem?, maby is it necessary
other type of index?
>
> Thanks & Regards
> ___
> 389-users mailing list -- 389-users@lists.fedoraproject.org
> To unsubscribe send an email to
389-users-le...@lists.fedoraproject.org
> Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives:

https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
> Do not reply to spam on the list, report it:
https://pagure.io/fedora-infrastructure
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives:

https://lists.fedoraproject.org/archi

[389-users] Re: 389-DS Internal unindexed search

2021-11-15 Thread Thierry Bordaz

Hi,

The referential integrity plugin uses internal searches to retrieve 
which entries refer to the target entry. The plugin uses equality 
searches, which are indexed, but for MODRDN it uses a substring filter. As 
the membership attributes (member, uniquemember, ...) are not indexed for 
substring, each MODRDN triggers 4 (one per membership attribute) unindexed 
searches. referint being a betxn plugin, unindexed searches are prone to 
cause db retries and could be related to the replication failure.


I would recommend that you try adding a substring index to the member, 
uniquemember, owner and seeAlso attributes.
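(A minimal LDIF sketch for one of those attributes - the backend name userRoot 
is an assumption; repeat the change for uniquemember, owner and seeAlso, and 
reindex those attributes afterwards. If an index entry does not exist yet, it 
would be an add of a new nsIndex entry rather than a modify:)

dn: cn=member,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
add: nsIndexType
nsIndexType: sub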


regards
thierry

On 11/15/21 3:00 PM, Ciber dgtnt wrote:

Hi, I have a problem with 389-ds version 1.3.10.2-10 installed on CentOS 7. We 
have a multi-master environment with consumers and suppliers, and we use the 
referential integrity plugin to control the group members. In the master node 
where we have the referential integrity plugin enabled, occasionally we get 
this message in the error log:

NOTICE - ldbm_back_search - Internal unindexed search: source (cn=referential integrity 
postoperation,cn=plugins,cn=config) search base="c=es" scope=2 
filter="(member=*uid=dmarmedr,ou=Baja de cuentas,c=es)" conn=0 op=0
NOTICE - ldbm_back_search - Internal unindexed search: source (cn=referential integrity 
postoperation,cn=plugins,cn=config) search base="c=es" scope=2 
filter="(uniquemember=*uid=dmarmedr,ou=Baja de cuentas,c=es)" conn=0 op=0
NOTICE - ldbm_back_search - Internal unindexed search: source (cn=referential integrity 
postoperation,cn=plugins,cn=config) search base="c=es" scope=2 
filter="(owner=*uid=dmarmedr,ou=Baja de cuentas,c=es)" conn=0 op=0
NOTICE - ldbm_back_search - Internal unindexed search: source (cn=referential integrity 
postoperation,cn=plugins,cn=config) search base="c=es" scope=2 
filter="(seeAlso=*uid=dmarmedr,ou=Baja de cuentas,c=es)" conn=0 op=0

And when it happens, the slapd process goes up to 100% CPU usage and we see 
the message below, because this master node begins to abort 
replication sessions from other master nodes:

ERR - NSMMReplicationPlugin - process_postop - Failed to apply update 
(615f51ea0023) error (51). Aborting replication session(conn=1145 op=4)

All those internal unindexed searches have filters with the attributes configured in the 
referential integrity plugin: member=*, uniquemember=*, owner=* and seeAlso=*. Those 
attributes have equality and presence indexes; we don't understand why the log says 
"internal unindexed search".

Can anyone help me with this problem? Maybe another type of index is necessary?

Thanks & Regards
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: 389 1.3 vs 1.4, CentOS 7

2021-11-10 Thread Thierry Bordaz

Hi Morgan,

389 1.3 and 1.4 are both advisable in production. You may hit some 
dependency difficulties building 1.4 on CentOS 7, as 1.3 was released 
for CentOS 7 and 1.4 for CentOS 8.


I would suggest that you upgrade to CentOS 8, as 1.4 contains more 
features and improvements, but if you target CentOS 7 then 1.3 is the 
version to go with.


my 2cts

regards
thierry

On 11/10/21 4:18 AM, Morgan Jones wrote:

Hello!

Is it advisable to run 389 1.3 in production?

If not, is there a suggested way to install 1.4 on CentOS 7?  At first blush, to 
install 389 from source it looks like I'm going to need to install libicu 
from source, since the version that ships is older:


checking for ICU... no
configure: error: Package requirements (icu-i18n >= 60.2) were not met:

Requested 'icu-i18n >= 60.2' but version of icu-i18n is 50.2


thank you,

-morgan
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: Cleaning up a disabled replica

2021-11-03 Thread Thierry Bordaz

In addition to the previous feedback, some comments inline.

On 11/1/21 11:57 PM, Morgan, Iain (ARC-TN)[InuTeq, LLC] wrote:

Hi,

I've got a bit of an unusual situation. I have two test servers that were 
configured as a multi-master replication pair. One of the servers needed to be 
used for some separate testing, which required disabling the replication. In 
the meantime, the second server has been heavily used for regression tests.

Despite the replication agreements having been disabled for months now, the 
changelog on the second server continues to grow. It has reached the point 
where the size has become troublesome, but I am having trouble alleviating the 
situation.


The first server has been isolated for a long time. If it is very far 
behind, it may take a long time to get it back in sync with the second 
server. Do you eventually expect to let replication sync it back, or to 
simply re-init it from the second server?





I initially tried compacting the changelog, but that made no difference. I later 
noticed using "dbscan -f" that entries aren't being timed out from the 
changelog. Essentially, it looks like entries are being added to the changelog as we 
do our periodic regression tests; but since no replication session is started, the 
changelog does not get cleaned up.
The changelog on S2 will keep the updates that have not been propagated to 
S1. So if the RA S2->S1 exists, even disabled, the changelog will not be trimmed.
To trim the changelog, I suggest removing the RA S2->S1. The 
consequence is that the changelog will be trimmed, but you will 
eventually need to reinit S2->S1.
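
For reference, a minimal sketch of removing the agreement over LDAP (the suffix 
dc=example,dc=com and the agreement name "S2-to-S1" are only examples; read the 
real DN from the mapping tree first):

    # list the agreement DNs
    ldapsearch -D "cn=directory manager" -W -b "cn=mapping tree,cn=config" \
        "(objectclass=nsds5replicationagreement)" dn

    # delete the S2->S1 agreement (example DN, adjust to your suffix/agreement name)
    ldapdelete -D "cn=directory manager" -W \
        "cn=S2-to-S1,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config"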


I tried enabling the replication agreement while the first server was down, in 
the hopes that the cleanup would be triggered. But, that did not work. Is there 
a way to force the cleanup? Alternatively, since we don't care about the 
changes, can the changelog safely be deleted?


Yes, this does not change the status of S1 as seen by S2. From S2's point of view, S1 
is very late, and all the updates that can get it back in sync are kept.


Deleting the changelog can be done by demoting S2 to a consumer 
(read-only) role.
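
A minimal sketch of that demotion with ldapmodify (dc=example,dc=com is just an 
example suffix; nsDS5ReplicaType 2 means read-only and nsDS5Flags 0 means no 
changelog is written):

    dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
    changetype: modify
    replace: nsDS5ReplicaType
    nsDS5ReplicaType: 2
    -
    replace: nsDS5Flags
    nsDS5Flags: 0

A pure consumer typically also uses the reserved replica ID 65535; check the 
demotion procedure in the docs for your version.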




Note, I'd prefer to not delete the replication agreement itself, but I would 
appreciate a way to either prevent entries from being added into the changelog 
for now or a way to ensure that the entries do not accumulate over time.


I see no possibility for this. If you keep the replication agreement, 
trimming will take it into consideration and will not trim. The only way 
to prevent updates from going into the changelog is to disable replication 
or demote the replica.


regards
thierry



Thanks,


___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: global passwd policy for DS with existing users

2021-09-14 Thread Thierry Bordaz


On 9/14/21 4:33 PM, Ghiurea, Isabella wrote:


Thank you   both of you ,

From the documentation pointed to by Thierry, it seems the TPR (Temporary 
Password Rule) can be the solution: have all users' existing old 
passwords updated/force-updated by the DS Manager (with ldapmodify), and 
only then, when a user logs in for the first time, force them to change 
the password according to the Password Expiration Policy configured in DS. Will 
this design work?


This description is fulfilled with passwordMustChange alone. When the DM resets 
the password, the only thing the user can do after binding (using 
the reset password) is change his password.


TPR just extends that mechanism with:

 * If the user does not bind/change the password within a fixed delay, then
   the reset password expires (temporary password).
 * If the user authenticates (successfully or not) more than a fixed
   number of times, then the reset password expires.
 * The reset password is only valid for authentication for a fixed delay
   after the reset time.
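
For illustration, a sketch of what the corresponding global-policy settings might 
look like (attribute names taken from the design page above; treat the exact 
names and value units as something to verify against your version):

    dn: cn=config
    changetype: modify
    replace: passwordMustChange
    passwordMustChange: on
    -
    replace: passwordTPRMaxUse
    passwordTPRMaxUse: 3
    -
    replace: passwordTPRDelayExpireAt
    passwordTPRDelayExpireAt: 3600
    -
    replace: passwordTPRDelayValidFrom
    passwordTPRDelayValidFrom: 60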

regards
thierry


Isabella

*From:*Thierry Bordaz [mailto:tbor...@redhat.com]
*Sent:* September 14, 2021 7:13 AM
*To:* General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>; Mark Reynolds 
; Ghiurea, Isabella 

*Subject:* Re: [389-users] Re: global passwd policy for DS with 
existing users


/***ATTENTION*** This email originated from outside of the NRC. 
***ATTENTION*** Ce courriel provient de l'extérieur du CNRC/


On 9/14/21 3:15 PM, Mark Reynolds wrote:

On 9/10/21 5:14 PM, Ghiurea, Isabella wrote:

1.Thank you Mark,

2. I am considering  the  DS global password Policy with  the
configuration to have the users  “must” change their passwords
according to a schedule.

If the schedule is a fixed validity delay for the reset password, then you 
may have a look at the temporary password rules: 
https://www.port389.org/docs/389ds/design/otp-password-policy.html


regards

3.Since there are already 6K users in DS  with  no password
policy in place I am thinking for start we shall  force and
update each uid userPassword attribute ( running a script in DS),

4.and next step  configure the DS for global password policy
with  the new attributes in place ( which specific attributes
you suggest?)

That is up to you which policies you want to use.

5.and the last step when the users are trying to logging they
must change their passw since their old passwd was removed
already.

If you remove their old password then they can not reset their
password since they can not even log in.  It would need to be done
by a different entry/user.  I do not recommend removing the
userpassword attribute from your entries.

If you want to force all your users to reset their passwords then
you need to set "passwordMustChange" to "on", and set the
passwordExpirationtime to "1970010100Z".  This will force
users to have to reset their passwords /after/ they log in.

6. How does this design option sound?

7.  I assume  for the  new passwd  policy  the following
attributes will need to be configured : *passwordExp* - ,
*passwordMaxAge* , *passwordWarning* ,*passwordMustChange*
*passwordGracelimit* – is this correct ?

If these are the settings you want, then yes.  There is no single
recommendation that fits everyone's needs.

8.

9. The two DSs  are configured in multimaster replication  and
 another  DS acting as slave cfg   in master to slave ( only
reads  accepted) , from what I read will need to configure
each  of the master DS   with  same Password Policy correct ?

Correct

Also see:


https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/administration_guide/managing_replication-replicating-password-attributes

<https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/administration_guide/managing_replication-replicating-password-attributes>

10. How about the slave DS any configuration changes  and
which ones ?

You need to set the password policies the same on /all/ servers,
or else those servers will not enforce the password policies.

HTH,
Mark

11.Thank you

12.Isabella

*From:*Mark Reynolds [mailto:mreyno...@redhat.com
<mailto:mreyno...@redhat.com>]
*Sent:* September 10, 2021 12:38 PM
*To:* General discussion list for the 389 Directory server
project. <389-users@lists.fedoraproject.org>
<mailto:389-users@lists.fedoraproject.org>; Ghiurea, Isabella

<mailto:isabella.ghiu...@nrc-cnrc.gc.ca>
*Subject:* Re: [389-users] global passwd policy for DS wit

[389-users] Re: global passwd policy for DS with existing users

2021-09-14 Thread Thierry Bordaz


On 9/14/21 3:15 PM, Mark Reynolds wrote:



On 9/10/21 5:14 PM, Ghiurea, Isabella wrote:


·Thank you Mark,

· I am considering  the  DS global password Policy with  the 
configuration to have the users  “must” change their passwords 
according to a schedule.


If the schedule is a fixed validity delay for the reset password, then you 
may have a look at the temporary password rules: 
https://www.port389.org/docs/389ds/design/otp-password-policy.html.


regards


·Since there are already 6K users in DS with no password policy in 
place, I am thinking that to start we shall force-update each uid's 
userPassword attribute (running a script in DS),


·and next step  configure the DS for global password policy with the 
new attributes in place ( which specific attributes you suggest?)



That is up to you which policies you want to use.


·and the last step: when the users try to log in they must 
change their password, since their old password was already removed.


If you remove their old password then they can not reset their 
password since they can not even log in.  It would need to be done by 
a different entry/user.  I do not recommend removing the userpassword 
attribute from your entries.


If you want to force all your users to reset their passwords then you 
need to set "passwordMustChange" to "on", and set the 
passwordExpirationtime to "1970010100Z".  This will force users to 
have to reset their passwords /after/ they log in.
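
For reference, a rough sketch of both pieces with ldapmodify; uid=jdoe and the 
suffix are only examples, and 19700101000000Z is assumed to be the full 
GeneralizedTime form of the value above:

    # global policy switch
    dn: cn=config
    changetype: modify
    replace: passwordMustChange
    passwordMustChange: on

    # expire the current password on a user entry (repeat per user, e.g. scripted)
    dn: uid=jdoe,ou=people,dc=example,dc=com
    changetype: modify
    replace: passwordExpirationTime
    passwordExpirationTime: 19700101000000Z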



· How does this design option sound?

· I assume for the new password policy the following attributes 
will need to be configured: *passwordExp*, *passwordMaxAge*, 
*passwordWarning*, *passwordMustChange*, *passwordGracelimit* – is 
this correct?


If these are the settings you want, then yes.  There is no single 
recommendation that fits everyone's needs.


·

· The two DSs are configured in multimaster replication, and 
another DS is acting as a slave configured master-to-slave (only reads 
accepted). From what I read, I will need to configure each of the 
master DSs with the same password policy, correct?



Correct

Also see:

https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/administration_guide/managing_replication-replicating-password-attributes


· How about the slave DS: are any configuration changes needed, and which ones?

You need to set the password policies the same on /all/ servers, or 
else those servers will not enforce the password policies.


HTH,
Mark


·Thank you

·Isabella

*From:*Mark Reynolds [mailto:mreyno...@redhat.com]
*Sent:* September 10, 2021 12:38 PM
*To:* General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>; Ghiurea, Isabella 

*Subject:* Re: [389-users] global passwd policy for DS with existing 
users


/***ATTENTION*** This email originated from outside of the NRC. 
***ATTENTION*** Ce courriel provient de l'extérieur du CNRC/


On 9/10/21 1:46 PM, Ghiurea, Isabella wrote:

Hi List,

I need your expertise. I am looking to configure a global
password policy for an existing DS with approx 7k users. At
present we are using only the userPassword attribute; no extra
password plugins or attributes are enabled. The DS is running
1.3.7.5-24.el7_5.x86_64.

What is the least intrusive solution to implement a global
Password Policy and configure its attributes for all existing user
accounts, without sending each user an email notification to reset
their password?  I understand the Password Policy will take
effect only after the users' passwords are reset; is this
correct?

Depends...

You are not being specific about what password policy you want to 
implement, there are countless variations.  Some require the password 
to be reset to start working, others do not.  So please let us know 
exactly what you want to implement from password policy so we can 
answer your questions.  For example there is password history, 
password expiration, password warning, grace periods, syntax 
checking, account lockout, etc. Each one has its own behavior and 
configuration.


If you are not sure what you want to implement then I recommend 
looking over the admin guide to see more details on the password 
policy options:


https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/administration_guide/user_account_management-managing_the_password_policy 



HTH,

Mark



___

389-users mailing list --389-users@lists.fedoraproject.org  


To unsubscribe send an email to389-users-le...@lists.fedoraproject.org  


Fedora Code of 
Conduct:https://docs.fedoraproject.org/en-US/project/code-of-conduct/  

[389-users] Re: Enabling retro changelog maxage with 3 million entries make dirsrv not respond anymore

2021-09-06 Thread Thierry Bordaz


On 9/6/21 3:40 PM, Kees Bakker wrote:

On 06-09-2021 14:34, Thierry Bordaz wrote:

On 9/6/21 1:55 PM, Kees Bakker wrote:

Hi,

First a bit of context.

CentOS 7, FreeIPA
389-ds-base-snmp-1.3.9.1-13.el7_7.x86_64
389-ds-base-libs-1.3.9.1-13.el7_7.x86_64
389-ds-base-1.3.9.1-13.el7_7.x86_64

A long time ago I was experiencing a deadlock during retro changelog 
cleanup,
and I was advised to disable it as a workaround. Disabling was done 
by setting

nsslapd-changelogmaxage to -1. Since then the number of entries has grown to
about 3 million.

Last week I enabled maxage again. I set it to 470 days. I was hoping 
to limit
this pile of old changelog entries, starting by cleaning very old 
entries.


However, what I noticed is that it was removing entries with a pace 
of 16 entries
per second. Meanwhile the server was doing nothing. Server load was 
very low.


The real problem is that dirsrv (LDAP) is not responding to any 
requests anymore. I
had to disable maxage again, which requires patience restarting the 
server when

it is not responding ;-)

Now my questions:
1) is it normal that removing retro changelog entries is so slow?
2) why is dirsrv not responding anymore when the cleanup kicks in?
3) are there alternatives to clean up the old retro changelog entries?


Hi,

When the server is not responsive, can it process searches like

ldapsearch -b "" -s base ?

ldapsearch -D 'cn=directory manager' -W -b "cn=config" -s base

or ldapsearch -D 'cn=directory manager' -W -b "cn=monitor" ?


I'll have to do this when I get a new chance. This LDAP server is
hard coded in several other services, even though we have replicas.
These services will be hanging when I do this.

One thing I can say is that the following command was hanging.

ldapsearch -H ldaps://rotte.example.com -b cn=config
Interesting. I was "guessing" that db update+checkpointing+compaction was 
responsible for some temporary slowdown.  The cn=config backend (in memory) 
being also frozen, the thing I can imagine is that other SRCH requests, 
that go to the db, were frozen by update+checkpoint+compact and there 
were no more workers left to process requests that are not database related.


Regarding the low rate of trimming, how did you monitor it ? Are you 
using internal op logging, plugin log level or something else ?



Just a rough estimate. After 15 minutes I had to disable maxage again.
Before and after I looked at the oldest entry. That way I saw it removed
about 15000 entries.
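
For reference, when the retro changelog is enabled the root DSE usually exposes 
counters that make this easier to watch than dumping entries; a sketch (attribute 
availability may vary by version):

    ldapsearch -D "cn=directory manager" -W -b "" -s base \
        "(objectclass=*)" changelog firstchangenumber lastchangenumber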



So you were able to do an online update of maxage. So the server was 
(very) slow but still processing?




Is there any particular logging you can recommend?
I was concerned that you had enabled the debug level to monitor the trimming; 
that would be very noisy.


When the server is not responsive, does it consume CPU? Could you 
collect 'top -H -p `pidof ns-slapd` -b' and some pstacks?



As I said above, I'll have to pick the right moment to do this again.
Last time I got a lot of complaints from the users. :-(



Yes, this can be done during a calm period.

thanks
thierry


-- Kees


thanks
thierry


___
389-users mailing list --389-users@lists.fedoraproject.org
To unsubscribe send an email to389-users-le...@lists.fedoraproject.org
Fedora Code of 
Conduct:https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines:https://fedoraproject.org/wiki/Mailing_list_guidelines
List 
Archives:https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report 
it:https://pagure.io/fedora-infrastructure



___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: Enabling retro changelog maxage with 3 million entries make dirsrv not respond anymore

2021-09-06 Thread Thierry Bordaz


On 9/6/21 1:55 PM, Kees Bakker wrote:

Hi,

First a bit of context.

CentOS 7, FreeIPA
389-ds-base-snmp-1.3.9.1-13.el7_7.x86_64
389-ds-base-libs-1.3.9.1-13.el7_7.x86_64
389-ds-base-1.3.9.1-13.el7_7.x86_64

A long time ago I was experiencing a deadlock during retro changelog 
cleanup,
and I was advised to disable it as a workaround. Disabling was done by 
setting

nsslapd-changelogmaxage to -1. Since then the number of entries has grown to
about 3 million.

Last week I enabled maxage again. I set it to 470 days. I was hoping 
to limit
this pile of old changelog entries, starting by cleaning very old 
entries.


However, what I noticed is that it was removing entries with a pace of 
16 entries
per second. Meanwhile the server was doing nothing. Server load was 
very low.


The real problem is that dirsrv (LDAP) is not responding to any 
requests anymore. I
had to disable maxage again, which requires patience restarting the 
server when

it is not responding ;-)

Now my questions:
1) is it normal that removing retro changelog entries is so slow?
2) why is dirsrv not responding anymore when the cleanup kicks in?
3) are there alternatives to clean up the old retro changelog entries?


Hi,

When the server is not responsive, can it process searches like

   ldapsearch -b "" -s base ?

   ldapsearch -D 'cn=directory manager' -W -b "cn=config" -s base

   or ldapsearch -D 'cn=directory manager' -W -b "cn=monitor" ?

Regarding the low rate of trimming, how did you monitor it ? Are you 
using internal op logging, plugin log level or something else ?


When the server is not responsive, does it consume CPU? Could you 
collect 'top -H -p `pidof ns-slapd` -b' and some pstacks?


thanks
thierry
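
For reference, the maxage discussed in this thread is the nsslapd-changelogmaxage 
attribute on the retro changelog plugin entry; a minimal sketch of setting it with 
ldapmodify (assuming the default plugin DN):

    dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-changelogmaxage
    nsslapd-changelogmaxage: 470d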

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: nsslapd-conntablesize & nsslapd-maxfiledescriptors

2021-09-06 Thread Thierry Bordaz


On 9/5/21 11:45 PM, William Brown wrote:



On 3 Sep 2021, at 23:37, Michael Starling  wrote:

Given the current settings on a directory server I'm still seeing the errors 
below in the logs at peak times.

"ERR - setup_pr_read_pds - Not listening for new connections - too many fds 
open"


nsslapd-reservedescriptors: 64
nsslapd-maxdescriptors: 65535
nsslapd-conntablesize: 8192

At the OS level the ns-slapd process is set to 65535 as well.

Max open files65535


After reading the RHDS documentation it's a bit unclear as to how these 
parameters work together.

The conntablesize documentation states:

"The default value for nsslapd-conntablesize is the systems maxdescriptors which can 
be confiured using nsslapd-maxdescriptors"


The documentation is wrong: conntablesize is capped by the process 
maxdescriptors. So I would expect the connection table to be 8192, as it 
is lower than 65535. Do you know whether, when the message "too many fds open" 
pops up, the number of open connections is higher than 8000?
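
For reference, a quick way to check the open connection count, and a sketch of 
raising the table size (values are only examples; a restart is needed for 
nsslapd-conntablesize to take effect):

    # current number of open connections
    ldapsearch -D "cn=directory manager" -W -b "cn=monitor" -s base currentconnections

    # raise the connection table size (keep it below nsslapd-maxdescriptors)
    dn: cn=config
    changetype: modify
    replace: nsslapd-conntablesize
    nsslapd-conntablesize: 16384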


regards
thierry



Now we look at the documentation for maxdescriptors:


The number of descriptors available for TCP/IP to serve client connections is 
determined by nsslapd-conntablesize, and is equal to the nsslapd-maxdescriptors 
attribute minus the number of file descriptors used by the server as specified 
in the nsslapd-reservedescriptors attribute for non-client connections, such as 
index management and managing replication. The nsslapd-reservedescriptors 
attribute is the number of file descriptors available for other uses as 
described above.

Based on the numbers currently set does this mean no action needs to be taken 
as this implies maxdescriptors takes precedence over conntablesize?

Or should I set conntablesize to 65535-64 = 65471?

Perhaps there is a bug here if conntablesize is still set. Alternately, it 
could have been set manually and the config upgrade code never kicked in.

It's probably best to increase this a bit carefully, adjust up conntablesize in 
increments of 8192 until you stop having connection issues?

Hope that helps,









3.1.1.60. nsslapd-conntablesize

This attribute sets the connection table size, which determines the total 
number of connections supported by the server.
The server has to be restarted for changes to this attribute to go into effect.
Entry DN: cn=config
Valid Values: Operating-system dependent
Default Value: The default value is the system's max descriptors, which can be 
configured using the nsslapd-maxdescriptors attribute as described in Section 
3.1.1.115, “nsslapd-maxdescriptors (Maximum File Descriptors)”
Syntax: Integer
Example: nsslapd-conntablesize: 4093
Increase the value of this attribute if Directory Server is refusing 
connections because it is out of connection slots. When this occurs, the 
Directory Server's error log file records the message Not listening for new 
connections -- too many fds open.
A server restart is required for the change to take effect.


Thanks
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

--
Sincerely,

William Brown

Senior Software Engineer, Identity and Access Management
SUSE Labs, Australia

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure



[389-users] Re: WARN - content-sync-plugin

2021-09-01 Thread Thierry Bordaz

Hi Orion,

Nothing alarming, just a message logged with the wrong WARN flag. It 
should be INFO or debug rather than WARN. See the discussion 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org/thread/OM6UTEY2VYSM6G6USZIQRCIOLROVASBP/


regards
thierry


On 8/30/21 6:48 PM, Orion Poplawski wrote:

I've started seeing these messages periodically after some update a while
back.  Anything to be concerned about or should I just ignore them?

[29/Aug/2021:09:41:35.471107696 -0700] - WARN - content-sync-plugin -
sync_update_persist_betxn_pre_op - DB retried operation targets "cn=repl keep
alive 24,dc=nwra,dc=com" (op=0x7f32d57f8c00 idx_pl=0) => op not changed in PL
[29/Aug/2021:13:07:11.134986267 -0700] - WARN - content-sync-plugin -
sync_update_persist_betxn_pre_op - DB retried operation targets "cn=repl keep
alive 24,dc=nwra,dc=com" (op=0x7f32d5701400 idx_pl=0) => op not changed in PL
[29/Aug/2021:13:56:35.194828743 -0700] - WARN - content-sync-plugin -
sync_update_persist_betxn_pre_op - DB retried operation targets "cn=repl keep
alive 24,dc=nwra,dc=com" (op=0x7f32d56fe600 idx_pl=0) => op not changed in PL
[29/Aug/2021:15:37:11.119675538 -0700] - WARN - content-sync-plugin -
sync_update_persist_betxn_pre_op - DB retried operation targets
"dc=nwra,dc=com" (op=0x7f32d563cc00 idx_pl=0) => op not changed in PL
[29/Aug/2021:18:13:13.204063186 -0700] - WARN - content-sync-plugin -
sync_update_persist_betxn_pre_op - DB retried operation targets "cn=repl keep
alive 24,dc=nwra,dc=com" (op=0x7f32bcafbe00 idx_pl=0) => op not changed in PL

Thanks.


___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: Several "DB retried operation targets" messages per day

2021-08-16 Thread Thierry Bordaz


On 8/12/21 4:39 PM, Mark Reynolds wrote:


On 8/12/21 10:21 AM, Kees Bakker wrote:

On 12-08-2021 16:00, Mark Reynolds wrote:

On 8/12/21 9:57 AM, Kees Bakker wrote:

On 12-08-2021 14:21, Mark Reynolds wrote:

On 8/12/21 5:16 AM, William Brown wrote:

hey there,

Some of your messages have been bouncing or being caught in spam
filters due to DMARC/DNS SPF failures. That may be why no one is
answering.


No, this was not filtered.  We have a lot of engineers on PTO at the
moment, and the rest of us are working very hard on customer 
issues and

other important deadlines.  We can't always give timely responses to
community questions...


Thanks Mark,

Sorry to put the pressure on you and the team.



Looking at the code these "warning" messages just mean that a DB 
retry

had occurred and an operation was retried, but that operation was
already in the Content Sync Plugin pending list.  It is a harmless
message, and perhaps should not even be logged at the default logging
level.


OK.
As harmless as it may be, why does it happen to me?

Story of my life...


:-)


Our system is fairly
small scale. The warnings are always triggered due to DNS updates, via
DHCP
updates. Just 100+ DHCP clients. I would expect that you would see
this on every
FreeIPA system. If not, then something might be fishy with our system.


If you had DB deadlock/DB retry errors/warnings in the main 
database, it
could cause these Content Sync warnings as a side effect.  Are you 
still

seeing these warning messages?


No, those deadlocks were a one-time event when we deleted a lot of
members from a user group. I haven't seen deadlocks since.

The DB retried messages just happen a few times per day. I don't see an
obvious pattern.


The engineer who wrote the Content Sync "Pending List" code is on 
vacation; when he gets back I'll have him follow up on this. But 
looking at the code everything is functioning properly.  The plugin 
correctly handled a problem with the DB, which is good, but it logged 
that message, which is alarming...


I think those warning messages should be moved to plugin logging 
instead of default logging, or change the severity level from WARN to 
INFO.  We'll see...


Mark



I fully agree with Mark's explanations and suggested fix.
IPA updates are prone to triggering additional internal updates and to 
creating conflicts (deadlocks) at the DB level. The resolution of those DB 
deadlocks is done by retrying the update. In such a case the update is 
already present in the pending list and there is no need to add it 
again. This message is erroneously alarming and should be moved to INFO 
or only logged at the plugin level.


What is the deadlock policy on the two CentOS systems? You may try to 
run with nsslapd-db-deadlock-policy=6 (priority to writers) and check whether 
those messages still happen.
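
A minimal sketch of changing it online with ldapmodify (the attribute lives on the 
ldbm database plugin entry):

    dn: cn=config,cn=ldbm database,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-db-deadlock-policy
    nsslapd-db-deadlock-policy: 6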


regards
thierry








Mark





Regards,

Mark




On 12 Aug 2021, at 16:57, Kees Bakker  wrote:

Isn't there anyone out there who can comment?


On 04-08-2021 10:50, Kees Bakker wrote:

Hi,

(( This was also reported as an issue at github [1], but there
isn't much activity there. ))

Each day there several messages with "WARN - content-sync-plugin -
sync_update_persist_betxn_pre_op ...". It is unclear why they show
up.

Briefly about the setup.
This is in a IPA deployment. We have three masters/replicas in a
triangular topology, A-B, B-C, C-A.
The systems are called: rotte, linge and iparep4.

rotte is CentOS 7, with 389-ds-base-1.3.9.1-13.el7_7.x86_64
linge and iparep4 are CentOS 8 Stream, with
389-ds-base-1.4.3.23-2.module_el8.5.0+835+5d54734c.x86_64

The messages seem to have a relation to DNS updates triggered by a
DHCP server. These
updates come in on rotte, the CentOS7 system. Next, they get
replicated to the two CentOS
systems.

At least a few times per day, on the two CentOS system I see the
following. Even though
they are just(?) warnings, I still don't like it.


aug 03 15:47:20 iparep4.example.com ns-slapd[485]:
[03/Aug/2021:15:47:20.474879344 +0200] - WARN -
content-sync-plugin - sync_update_persist_betxn_pre_op - DB
retried operation targets "dc=example,dc=com" (op=0x7efdcfc59600
idx_pl=0) => op not changed in PL
aug 03 20:31:12 iparep4.example.com ns-slapd[485]:
[03/Aug/2021:20:31:12.981645921 +0200] - WARN -
content-sync-plugin - sync_update_persist_betxn_pre_op - DB
retried operation targets "changenumber=600749,cn=changelog"
(op=0x7efdf8310200 idx_pl=1) => op not changed in PL
aug 03 20:31:44 iparep4.example.com ns-slapd[485]:
[03/Aug/2021:20:31:44.287299445 +0200] - WARN -
content-sync-plugin - sync_update_persist_betxn_pre_op - DB
retried operation targets "changenumber=600773,cn=changelog"
(op=0x7efe0a763000 idx_pl=1) => op not changed in PL
aug 03 20:36:40 iparep4.example.com ns-slapd[485]:
[03/Aug/2021:20:36:40.890527828 +0200] - WARN -
content-sync-plugin - sync_update_persist_betxn_pre_op - DB
retried operation targets "changenumber=600785,cn=changelog"
(op=0x7efe14014400 idx_pl=1) => op not changed in PL
aug 

[389-users] Re: DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock

2021-07-28 Thread Thierry Bordaz

Hi Kees,

Rotte successfully processed the problematic update 
(60fe85350013), updating the database and recording the update 
in the changelog.


Later Rotte tried to replicate the update to linge  but the update 
failed on linge


[26/Jul/2021:11:44:37.947738548 +0200] - ERR - NSMMReplicationPlugin - 
changelog program - _cl5WriteOperationTxn - retry (49) the transaction 
(csn=60fe85350013) failed (rc=-30993 (BDB0068 DB_LOCK_DEADLOCK: 
Locker killed to resolve a deadlock))


Rotte noticed this failure

[26/Jul/2021:11:44:39.055890736 +0200] - WARN - NSMMReplicationPlugin - 
repl5_inc_update_from_op_result - agmt="cn=meTolinge.example.com" 
(linge:389): Consumer failed to replay change (uniqueid 
31283c01-a16511e9-93cf90e8-ab7c8ee8, CSN 60fe85350013): 
Operations error (1). Will retry later


And as mentioned in the log, it retried later to replicate the update 
and this time it succeeded. You said the value was correct on all 
replicas. You may confirm that with a 'grep 60fe85350013 
/var/log/dirsrv//access*' => err=1


The reason for the original replication failure (on linge) is possibly 
related to the deadlock policy. By default, in case of a DB deadlock, DS 
gives priority to the youngest transaction and aborts the other txns 
to resolve the deadlock. This default value works fine, but in the case of IPA, 
where updates are very often nested (because of many plugin calls), it 
is not optimal. You may try nsslapd-db-deadlock-policy: 6 (priority to 
writers).


DB_LOCK_DEADLOCK is a normal event. The server just retries. In case of 
too many retries, the operation itself fails. Replication just sends the 
failing operation again. ATM your topology looks healthy; you may try to 
update the deadlock policy.


Regards
thierry


On 7/28/21 2:10 PM, Kees Bakker wrote:

Hi,

This is in a IPA deployment. We have three masters/replicas in a 
triangular topology, A-B, B-C, C-A.

The systems are called: rotte, linge and iparep4.

rotte is CentOS 7, with 389-ds-base-1.3.9.1-13.el7_7.x86_64
linge and iparep4 are CentOS 8 Stream, with 
389-ds-base-1.4.3.23-2.module_el8.5.0+835+5d54734c.x86_64


Yesterday I removed some members from a user group on rotte. This 
caused the follow errors

on linge (and on iparep4).

Jul 26 11:44:37 linge.example.com ns-slapd[282944]: 
[26/Jul/2021:11:44:37.947738548 +0200] - ERR - NSMMReplicationPlugin - 
changelog program - _cl5WriteOperationTxn - retry (49) the transaction 
(csn=60fe85350013) failed (rc=-30993 (BDB0068 
DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock))
Jul 26 11:44:38 linge.example.com ns-slapd[282944]: 
[26/Jul/2021:11:44:38.000964611 +0200] - ERR - NSMMReplicationPlugin - 
changelog program - _cl5WriteOperationTxn - Failed to write entry with 
csn (60fe85350013); db error - -30993 BDB0068 
DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock
Jul 26 11:44:38 linge.example.com ns-slapd[282944]: 
[26/Jul/2021:11:44:38.025996273 +0200] - ERR - NSMMReplicationPlugin - 
write_changelog_and_ruv - Can't add a change for 
cn=vpn_users,cn=groups,cn=accounts,dc=example,dc=com (uniqid: 
31283c01-a16511e9-93cf90e8-ab7c8ee8, optype: 8) to changelog csn 
60fe85350013
Jul 26 11:44:38 linge.example.com ns-slapd[282944]: 
[26/Jul/2021:11:44:38.062640602 +0200] - ERR - NSMMReplicationPlugin - 
process_postop - Failed to apply update (60fe85350013) error 
(1).  Aborting replication session(conn=53596 op=65)


On rotte

jul 26 11:44:39 rotte.example.com ns-slapd[2705]: 
[26/Jul/2021:11:44:39.055890736 +0200] - WARN - NSMMReplicationPlugin 
- repl5_inc_update_from_op_result - agmt="cn=meTolinge.example.com" 
(linge:389): Consumer failed to replay change (uniqueid 
31283c01-a16511e9-93cf90e8-ab7c8ee8, CSN 60fe85350013): 
Operations error (1). Will retry later.
jul 26 11:44:39 rotte.example.com ns-slapd[2705]: 
[26/Jul/2021:11:44:39.058198988 +0200] - WARN - NSMMReplicationPlugin 
- repl5_inc_update_from_op_result - agmt="cn=meTolinge.example.com" 
(linge:389): Consumer failed to replay change (uniqueid 
31283c01-a16511e9-93cf90e8-ab7c8ee8, CSN 60fe853500330003): 
Operations error(1). Will retry later.
jul 26 11:44:39 rotte.example.com ns-slapd[2705]: 
[26/Jul/2021:11:44:39.069825407 +0200] - ERR - NSMMReplicationPlugin - 
release_replica - agmt="cn=meTolinge.example.com" (linge:389): Unable 
to send endReplication extended operation (Operations error)
jul 26 11:44:46 rotte.example.com ns-slapd[2705]: 
[26/Jul/2021:11:44:46.561562313 +0200] - INFO - NSMMReplicationPlugin 
- bind_and_check_pwp - agmt="cn=meTolinge.example.com" (linge:389): 
Replication bind with GSSAPI auth resumed


As far as I can see the user group is correctly modified on all 
replicas. But it doesn't

look healthy to me.

Is there anything I can do to see what went wrong? Is there something 
to improve

in the configuration?

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To 

[389-users] Re: memberOf Plugin report inconsistent states

2021-07-15 Thread Thierry Bordaz


On 7/15/21 2:56 PM, Tobias Ernstberger wrote:

Hello,

it is well known and documented, that the memberOf attribute can have 
inconsistent states (e.g. by manipulating it directly).

There is also a Fix-Up Task to repair that.

Question: Is there also a way to report/list all current inconsistent 
states, that the Fix-Up Task might repair?


Hello,

No, it does not check whether there are differences between the old and new memberof 
values. It just replaces the old valueset with the new one.



The motivation for that is to evaluate the impact of the Fix-Up Task, to 
reduce/manage operational risks.
A possible action might be to evaluate manually, for all reported issues, 
whether they need to be cleaned up in the memberOf attribute, or added as 
regular group membership to the group.


Do you mean you would like an option to run the fixup task to report 
invalid memberof settings without doing any update, and then later do 
the manual fixup?


best regards
thierry




Mit freundlichen Grüßen / Kind regards

*Tobias Ernstberger*
IT-Architect Identity and Access Management
IBM Security Expert Labs
+49 151 15138929
tobias.ernstber...@de.ibm.com

IBM *Security*

IBM Deutschland GmbH
Vorsitzender des Aufsichtsrats: Sebastian Krause
Geschäftsführung: Gregor Pillen (Vorsitzender), Agnes Heftberger, 
Gabriele Schwarenthorer, Markus Koerner, Christian Noll, Nicole Reimer
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht 
Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940

https://www.ibm.com/privacy/us/en/ 



___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Announcing 389 Directory Server 2.0.6

2021-06-24 Thread Thierry Bordaz


   389 Directory Server 2.0.6

The 389 Directory Server team is proud to announce 389-ds-base version 2.0.6

Fedora packages are available on Fedora 34 and Rawhide

Fedora 34:

https://koji.fedoraproject.org/koji/taskinfo?taskID=70696310 
 - Koji 
https://bodhi.fedoraproject.org/updates/FEDORA-2021-6cec1584ab 
 - Bodhi


Rawhide:

https://koji.fedoraproject.org/koji/taskinfo?taskID=70730267 
 - Koji


The new packages and versions are:

 * 389-ds-base-2.0.6-1

Source tarballs are available for download at the Download 
389-ds-base Source page.




 Highlights in 2.0.6

 * Bug & security fixes


 Installation and Upgrade

See Download  for 
information about setting up your yum repositories.


To install the server use *dnf install 389-ds-base*

To install the Cockpit UI plugin use *dnf install cockpit-389-ds*

After rpm install completes, run *dscreate interactive*

For upgrades, simply install the package. There are no further 
steps required.


There are no upgrade steps besides installing the new rpms

See the Install Guide for 
more information about the initial installation and setup.


See Source  
for information about source tarballs and SCM (git) access.



 Feedback

We are very interested in your feedback!

Please provide feedback and comments to the 389-users mailing list: 
https://lists.fedoraproject.org/admin/lists/389-users.lists.fedoraproject.org 



If you find a bug, or would like to see a new feature, file it in our 
GitHub project: https://github.com/389ds/389-ds-base 



 * Bump version to 2.0.6
 * Issue 4803 - Improve DB Locks Monitoring Feature Descriptions
 * Issue 4803 - Improve DB Locks Monitoring Feature Descriptions (#4810)
 * Issue 4169 - UI - Migrate Typeaheads to PF4 (#4808)
 * Issue 4414 - disk monitoring - prevent division by zero crash
 * Issue 4788 - CLI should support Temporary Password Rules
   attributes (#4793)
 * Issue 4656 - Fix replication plugin rename dependency issues
 * Issue 4656 - replication name change upgrade code causes crash with
   dynamic plugins
 * Issue 4506 - Improve SASL logging
 * Issue 4709 - Fix double free in dbscan
 * Issue 4093 - Fix MEP test case
 * Issue 4747 - Remove unstable/unstatus tests (followup) (#4809)
 * Issue 4791 - Missing dependency for RetroCL RFE (#4792)
 * Issue 4794 - BUG - don’t capture container output (#4798)
 * Issue 4593 - Log an additional message if the server certificate
   nickname doesn’t match nsSSLPersonalitySSL value
 * Issue 4797 - ACL IP ADDRESS evaluation may corrupt
   c_isreplication_session connection flags (#4799)
 * Issue 4169 - UI Migrate checkbox to PF4 (#4769)
 * Issue 4447 - Crash when the Referential Integrity log is manually edited
 * Issue 4773 - Add CI test for DNA interval assignment
 * Issue 4789 - Temporary password rules are not enforce with local
   password policy (#4790)
 * Issue 4379 - fixing regression in test_info_disclosure
 * Issue 4379 - Allow more than 1 empty AttributeDescription for
   ldapsearch, without the risk of denial of service
 * Issue 4379 - Allow more than 1 empty AttributeDescription for
   ldapsearch, without the risk of denial of service
 * Issue 4575 Update test docstrings metadata
 * Issue 4753 - Adjust our tests to 389-ds-base-snmp missing in RHEL
   9 Appstream
 * removed the snmp_present() from utils.py as we have
   get_rpm_version() in conftest.py
 * Issue 4753 - Adjust our tests to 389-ds-base-snmp missing in RHEL
   9 Appstream

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: Can't locate CSN - replica issue

2021-06-17 Thread Thierry Bordaz


On 6/17/21 2:11 PM, Marco Favero wrote:

Ah, I don't have an RA rh5-->dr-rh1.

So, I could set up an RA from all multimasters to dr-rh1 to avoid this kind of 
problem.

I'm not sure I understand. Really, I have a real-time RA from rh5 to rh1, and 
from rh1 to rh5. So, if I initialize rh1 from rh5, rh1 should still replicate 
to dr-rh1, because rh5 is always in sync with rh1...


Administrative tasks are sensitive and some side effects can impact 
replication. When you reinit rh5->rh1, rh1 can still replicate to 
dr-rh1. However, a side effect of the reinit is that it clears the changelog 
of rh1. So if dr-rh1 is late, then rh1, having lost the history of old 
updates (changelog reset), is able to connect to dr-rh1 but is no longer 
able to find, in its cleared changelog, the old update it should start 
replication from.


The workaround is to give rh5 a chance to directly update dr-rh1.

thanks
thierry



Thank you
Marco
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure



[389-users] Re: Can't locate CSN - replica issue

2021-06-17 Thread Thierry Bordaz


On 6/17/21 12:58 PM, Marco Favero wrote:

On 6/17/21 10:55 AM, Marco Favero wrote:

Hi Marco,

good to know you fixed the issue. If I read you correctly you fixed it
via setting nsDS5ReplicaHost=FQDN of the consumer host in the
replication agreement supplier->consumer. What is surprising is that it
was working before with a non fqdn and suddenly stopped working.

Hi Thierry,
  not really, sorry, maybe I didn't explain well. I set the "full_machine_name" 
in dscreate with the fqdn of the host running the 389ds in place of the fqdn of the 
balancer ip.
It's the nsslapd-localhost parameter, I suppose.


With recent versions, this problem is either transient (a supplier does
not know a CSN showed by the consumer but another supplier that knows
this CSN will eventually update the consumer), either permanent (the
consumer got offline longer than changelog maxage) and you may need to
reinit the consumer.

I still have this issue. Are there conditions that determine this issue yet?
It's as you describe: the only way to exit from that situation is the 
reinitialization.

I have three multimaster each other:

rh1
rh2
rh5

rh has also a scheduled agreement to dr-rh1. So, dr-rh1 is a consumer from rh1.

All is working fine. When I initialize rh1 from rh5, then the replica rh1 --> dr-rh1 
stops working and says "Error (18) Can't acquire replica (Incremental update transient 
warning. Backing off, will retry update later.)". The log claims that it can't find a CSN. All 
other replicas are fine.


When doing a reinit rh5->rh1, the changelog of rh1 gets reset. If for some reason 
dr-rh1 was late compared to rh5, it is normal that rh1 can no longer 
update dr-rh1. Did you set up an RA rh5->dr-rh1?


thanks
thierry




So I have to reinitialize rh1 --> dr-rh01.

Thank you very much
Warm Regards
Marco
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure



[389-users] Re: [Freeipa-users] Re: Consumer failed to replay change Operations error (1)

2021-06-17 Thread Thierry Bordaz

Hello Alfred,

If it is an IPA deployment, I doubt that you hit [1], because it only applies 
to read-only replicas (hub/consumer). Also, this bug is fixed in the 
version you are running.


The consumer (redactedauth0003.redacted.com) fails to apply a replicated MOD 
targeting the admin group. It is not clear if the failure occurs at the 
changelog update or the RUV update. It looks like a permanent failure, so you 
may enable the replication debug log in case it gives more details on why it 
is failing.


regards
thierry



[1] https://bugzilla.redhat.com/show_bug.cgi?id=1574602 



On 6/17/21 11:12 AM, Florence Renaud via FreeIPA-users wrote:
Forwarding to 389-users@lists.fedoraproject.org 
 
as they may have more inputs.


On Wed, Jun 16, 2021 at 11:31 PM Alfred Victor via FreeIPA-users 
> wrote:


Hi FreeIPA,

We have some replication messages in our slapd errors log which
look very like the ones discussed here:

https://bugzilla.redhat.com/show_bug.cgi?id=1574602


I took a look and we do have the MemberOf plugin, but our version
of 389-ds is newer:

*389-ds-base-1.3.10.2-10.el7_9.x86_64*


Hoping someone might have a suggestion for what we might do to get
rid of these log messages, or what the root cause may be/impact?
They've been going since at least a couple of weeks ago:

[15/Jun/2021:18:57:26.362094959 -0500] - WARN -
NSMMReplicationPlugin - repl5_inc_update_from_op_result -
agmt="cn=redactedauth0001.redacted.com-to-redactedauth0003.redacted.com
"
(redactedauth0003:389): Consumer failed to replay change
(uniqueid d5896001-39a111eb-8868efc8-91dc0b98, CSN
60c93bc200040025): Operations error (1). Will retry later.




I looked for this same uniqueid (they are ALL the same uniqueID) and found this which 
is interesting and references a specific cn and "optype":


[03/Jun/2021:15:45:43.332068775 -0500] - ERR -
NSMMReplicationPlugin - write_changelog_and_ruv - Can't add a
change for cn=admin,cn=groups,cn=accounts,dc=redacted,dc=com
(uniqid: d5896001-39a111eb-8868efc8-91dc0b98, optype: 8) to
changelog csn 60b93f9300520023



Alfred

___
FreeIPA-users mailing list -- freeipa-us...@lists.fedorahosted.org

To unsubscribe send an email to
freeipa-users-le...@lists.fedorahosted.org

Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines

List Archives:

https://lists.fedorahosted.org/archives/list/freeipa-us...@lists.fedorahosted.org


Do not reply to spam on the list, report it:
https://pagure.io/fedora-infrastructure



___
FreeIPA-users mailing list -- freeipa-us...@lists.fedorahosted.org
To unsubscribe send an email to freeipa-users-le...@lists.fedorahosted.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedorahosted.org/archives/list/freeipa-us...@lists.fedorahosted.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: Can't locate CSN - replica issue

2021-06-07 Thread Thierry Bordaz


On 6/7/21 9:39 AM, Marco Favero wrote:

Gasp, I suspect the problem seems to be here. In the agreements I see

dn: cn=it 2--\3E1,cn=replica,cn=c\3Dit,cn=mapping tree,cn=config
objectClass: top
objectClass: nsds5replicationagreement
cn: it 2-->1
cn: it 2--\>1
nsDS5ReplicaRoot: c=it
description: it 2-->1
nsDS5ReplicaHost: srv1.example.com
nsDS5ReplicaPort: 389
nsDS5ReplicaBindMethod: simple
nsDS5ReplicaTransportInfo: LDAP
nsDS5ReplicaBindDN: cn=replication manager,cn=config
nsds50ruv: {replicageneration} 60704f73c350
nsds50ruv: {replica 50001 ldap://srv1.example.com:389} 607424ddc3510
  000 60ba18fbc351
nsds50ruv: {replica 5 ldap://srv.example.com:389} 6074264ac350 6
  0ba190fc350
nsds50ruv: {replica 50002 ldap://srv2.example.com:389} 60742641c3520
  000 60ba1905c352
nsruvReplicaLastModified: {replica 50001 ldap://srv1.example.com:389} 00
  00
nsruvReplicaLastModified: {replica 5 ldap://srv.example.com:389} 000
  0
nsruvReplicaLastModified: {replica 50002 ldap://srv2.example.com:389} 00
  00
nsds5replicareapactive: 0
nsds5replicaLastUpdateStart: 20210604124542Z
nsds5replicaLastUpdateEnd: 20210604124542Z
nsds5replicaChangesSentSinceStartup:: NTAwMDI6NC8wIA==

The replica ID 5 corresponds to the server srv3.example.com, the first host installed 
in a set of three multimaster servers. The balancer host is srv.example.com. As suggested 
by dscreate I put the balancer host in the parameter "full_machine_name" for 
all LDAP servers. For a reason I don't know, the full_machine_name (the load 
balancer host) has been written in the RUV in place of the fqdn of the machine 
hosting the dirsrv installation. In this case, srv.example.com in place of 
srv3.example.com.


Hi marco,

the hostname in the RUV (nsds50ruv) comes from the 'nsslapd-localhost' 
attribute in the 'cn=config' entry (dse.ldif). I am unsure of the impact 
of this erroneous value (srv.example.com instead of srv3.example.com) 
in the RUV.


IMHO what is important for the RA to start a replication session is 
nsds5ReplicaHost and replicageneration. Of course it would be better 
if the hosts were valid in the RUV elements, but I am not sure that explains 
why srv1->srv3 stopped working.


If you can reproduce the problem, I would recommend that you enable 
replication logging (nsslapd-errorlog-level: 8192) on both sides (srv1 
and srv3) and reproduce the failure of the RA. Then isolate, from the access 
logs and error logs, the replication session that fails.
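
A minimal sketch of toggling that level online with ldapmodify (remember to 
restore your previous value once the failing session has been captured):

    dn: cn=config
    changetype: modify
    replace: nsslapd-errorlog-level
    nsslapd-errorlog-level: 8192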


regards
thierry



I suspect that if I reinstall all servers with their hostname in 
"full_machine_name" I resolve my issue.

Any idea?

Thank you very much
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: how to configure cn attribute case sensitive

2021-04-27 Thread Thierry Bordaz


On 4/27/21 5:38 AM, William Brown wrote:



On 27 Apr 2021, at 09:42, Mark Reynolds  wrote:


On 4/26/21 3:34 PM, Ghiurea, Isabella wrote:

Hi List,
I need help with the following LDAP issue. We are running
389-ds-base-1.3.7.5-24.el7_5.x86_64.

- How do I check if 389-DS is configured to be case sensitive?

- How do I configure the cn attribute, which is indexed in my DS, to be case 
sensitive?

Sorry, you can't (shouldn't).  "cn" is a standard attribute with a predefined syntax.  
"cn" is used internally by the server for many things, and it is expected to be case insensitive.  
Making it case-sensitive could break things in ways that would be very difficult to troubleshoot.  You should 
never attempt to modify the server's core schema.  Especially "cn" - just look at all the entries 
under cn=config...

I completely agree with Mark here. You should probably define a new custom 
attribute instead that has the rules you need.


I also agree that changing a matching rule of a standard attribute is 
not a good idea.


In case you want to do a SRCH with 'cn' being case sensitive, you may use 
the extensible match syntax in the filter, like:


   # search with 'cn' using its default equality matching rule (case
   insensitive)
   ldapsearch -LLL ... -b 'ou=people,dc=example,dc=com' '(cn=demo user)'
   dn: uid=demo_user,ou=people,dc=example,dc=com
   objectClass: top
   objectClass: nsPerson
   objectClass: nsAccount
   objectClass: nsOrgPerson
   objectClass: posixAccount
   uid: demo_user
   cn: Demo User
   displayName: Demo User
   legalName: Demo User Name
   uidNumber: 8
   gidNumber: 8
   homeDirectory: /var/empty
   loginShell: /bin/false

   # search with 'cn' using exact MR and the exact case of the 'cn' value
   ldapsearch -LLL -h localhost -p 38901 -D 'cn=Directory Manager' -w
   password -b 'ou=people,dc=example,dc=com' '(cn:caseExactMatch:=Demo
   User)'
   dn: uid=demo_user,ou=people,dc=example,dc=com
   objectClass: top
   objectClass: nsPerson
   objectClass: nsAccount
   objectClass: nsOrgPerson
   objectClass: posixAccount
   uid: demo_user
   cn: Demo User
   displayName: Demo User
   legalName: Demo User Name
   uidNumber: 8
   gidNumber: 8
   homeDirectory: /var/empty
   loginShell: /bin/false

   # the same search with the exact MR, but with an assertion value that
   differs from the attribute value
   # returns no entry
   ldapsearch -LLL ... -b 'ou=people,dc=example,dc=com'
   '(cn:caseExactMatch:=demo user)'


Note that if you are willing to use extensible search with the exact MR, it 
would also be good to index 'cn' with this MR (otherwise you will trigger an 
unindexed search).
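
A minimal sketch of adding such a matching-rule index, assuming the backend is 
named userRoot (adjust the bind options, backend and instance names to your 
deployment); the cn index then has to be rebuilt:

    ldapmodify -D "cn=Directory Manager" -W -H ldap://localhost <<'EOF'
    dn: cn=cn,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
    changetype: modify
    add: nsMatchingRule
    nsMatchingRule: caseExactMatch
    EOF
    # rebuild the cn index so the matching-rule index gets populated,
    # e.g. with the offline reindex tool:
    dsctl INSTANCE-NAME db2index userRoot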


regards
thierry





Regards,

Mark


Thank you
Isabella
  



___
389-users mailing list --
389-users@lists.fedoraproject.org

To unsubscribe send an email to
389-users-le...@lists.fedoraproject.org

Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines

List Archives:
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org

Do not reply to spam on the list, report it:
https://pagure.io/fedora-infrastructure

--

389 Directory Server Development Team

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

—
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs, Australia
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Announcing 389 Directory Server 2.0.4

2021-04-07 Thread thierry bordaz


   389 Directory Server 2.0.4

The 389 Directory Server team is proud to announce 389-ds-base version 2.0.4

Fedora packages are available on Fedora 34 and Rawhide

Fedora 34:

https://koji.fedoraproject.org/koji/taskinfo?taskID=65380611 
 - Koji


https://bodhi.fedoraproject.org/updates/FEDORA-2021-123ca32c27 
 - Bodhi


The new packages and versions are:

 * 389-ds-base-2.0.4-1

Source tarballs are available for download at Download 
389-ds-base Source 




 Highlights in 2.0.4

 * Bug & security fixes


 Installation and Upgrade

See Download  for 
information about setting up your yum repositories.


To install the server use *dnf install 389-ds-base*

To install the Cockpit UI plugin use *dnf install cockpit-389-ds*

After rpm install completes, run *dscreate interactive*
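
For example, a minimal sketch of a fresh install on Fedora (the interactive 
prompts then ask for the instance settings):

    sudo dnf install -y 389-ds-base cockpit-389-ds
    sudo dscreate interactive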

For upgrades, simply install the package. There are no further 
steps required.


There are no upgrade steps besides installing the new rpms

See Install_Guide 
 for 
more information about the initial installation and setup


See Source  
for information about source tarballs and SCM (git) access.



 Feedback

We are very interested in your feedback!

Please provide feedback and comments to the 389-users mailing list: 
https://lists.fedoraproject.org/admin/lists/389-users.lists.fedoraproject.org


If you find a bug, or would like to see a new feature, file it in our 
GitHub project: https://github.com/389ds/389-ds-base


 * Bump version to 2.0.4
 * Issue 4680 - 389ds coredump (@389ds/389-ds-base-nightly) in replica
   install with CA (#4715)
 * Issue 3965 - RFE - Implement the Password Policy attribute
   “pwdReset” (#4713)
 * Issue 4700 - Regression in winsync replication agreement (#4712)
 * Issue 3965 - RFE - Implement the Password Policy attribute
   “pwdReset” (#4710)
 * Issue 4169 - UI - migrate monitor tables to PF4
 * issue 4585 - backend redesign phase 3c - dbregion test removal (#4665)
 * Issue 2736 - remove remaining perl references
 * Issue 2736 - https://github.com/389ds/389-ds-base/issues/2736
 * Issue 4706 - negative wtime in access log for CMP operations
 * Issue 3585 - LDAP server returning controltype in different sequence
 * Issue 4127 - With Accounts/Account module delete fuction is not
   working (#4697)
 * Issue 4666 - BUG - cb_ping_farm can fail with anonymous binds
   disabled (#4669)
 * Issue 4671 - UI - Fix browser crashes
 * Issue 4169 - UI - Add PF4 charts for server stats
 * Issue 4648 - Fix some issues and improvement around CI tests (#4651)
 * Issue 4654 Updates to tickets/ticket48234_test.py (#4654)
 * Issue 4229 - Fix Rust linking
 * Issue 4673 - Update Rust crates
 * Issue 4658 - monitor - connection start date is incorrect
 * Issue 4169 - UI - migrate modals to PF4
 * Issue 4656 - remove problematic language from ds-replcheck
 * Issue 4459 - lib389 - Default paths should use dse.ldif if the
   server is down
 * Issue 4656 - Remove problematic language from UI/CLI/lib389
 * Issue 4661 - RFE - allow importing openldap schemas (#4662)
 * Issue 4659 - restart after openldap migration to enable plugins (#4660)
 * Merge pull request #4664 from mreynolds389/issue4663
 * issue 4552 - Backup Redesign phase 3b - use dbimpl in replicatin
   plugin (#4622)
 * Issue 4643 - Add a tool that generates Rust dependencies for a
   specfile (#4645)
 * Issue 4646 - CLI/UI - revise DNA plugin management
 * Issue 4644 - Large updates can reset the CLcache to the beginning of
   the changelog (#4647)
 * Issue 4649 - crash in sync_repl when a MODRDN create a cenotaph (#4652)
 * Issue 4169 - UI - Migrate alerts to PF4
 * Issue 4169 - UI - Migrate Accordians to PF4 ExpandableSection
 * Issue 4595 - Paged search lookthroughlimit bug (#4602)
 * Issue 4169 - UI - port charts to PF4
 * Issue 2820 - Fix CI test suite issues
 * Issue 4513 - CI - make acl ip address tests more robust

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Announcing 389 Directory Server 1.4.4.15

2021-04-06 Thread thierry bordaz


   389 Directory Server 1.4.4.15

The 389 Directory Server team is proud to announce 389-ds-base version 
1.4.4.15


Fedora packages are available on Fedora 33.

Fedora 33:

https://koji.fedoraproject.org/koji/taskinfo?taskID=65298461 
 - Koji


https://bodhi.fedoraproject.org/updates/FEDORA-2021-57ef97888c 
 - Bodhi


The new packages and versions are:

 * 389-ds-base-1.4.4.15-1

Source tarballs are available for download at Download 
389-ds-base Source 




 Highlights in 1.4.4.15

 * Bug and Security fixes


 Installation and Upgrade

See Download  for 
information about setting up your yum repositories.


To install the server use *dnf install 389-ds-base*

To install the Cockpit UI plugin use *dnf install cockpit-389-ds*

After rpm install completes, run *dscreate interactive*

For upgrades, simply install the package. There are no further 
steps required.


There are no upgrade steps besides installing the new rpms

See Install_Guide 
 for 
more information about the initial installation and setup


See Source  
for information about source tarballs and SCM (git) access.



 Feedback

We are very interested in your feedback!

Please provide feedback and comments to the 389-users mailing list: 
https://lists.fedoraproject.org/admin/lists/389-users.lists.fedoraproject.org


If you find a bug, or would like to see a new feature, file it in our 
GitHub project: https://github.com/389ds/389-ds-base


 * Bump version to 1.4.4.15
 * Issue 4700 - Regression in winsync replication agreement (#4712)
 * Issue 2736 - https://github.com/389ds/389-ds-base/issues/2736
 * Issue 4706 - negative wtime in access log for CMP operations

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: dsconf idempotency

2021-03-26 Thread thierry bordaz


Hi Marco,

I agree with you that a command setting attributes to their existing values 
should not fail.
The output could differ from "Successfully changed ..." to let us know that 
no MOD was applied, but IMHO it should succeed as well.
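
Until that changes, one possible workaround is a small shell wrapper that 
reclassifies the "nothing to set" case as success (a sketch only; the message 
is matched as plain text and may change between versions):

    out=$(dsconf -D "cn=Directory Manager" -w "$DM_PW" ldap://localhost:389 \
          plugin attr-uniq set "attribute uniqueness" \
          --subtree=c=en --enabled=on --attr-name=uid --across-all-subtrees=off 2>&1)
    rc=$?
    echo "$out"
    # the plugin is already in the desired state: treat it as success
    if [ $rc -ne 0 ] && printf '%s' "$out" | grep -qi "nothing to set"; then
        rc=0
    fi
    exit $rc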


Could you please open a new issue 
(https://github.com/389ds/389-ds-base/issues/new/choose)?


regards
thierry
On 3/25/21 12:42 PM, Marco Favero wrote:

Hello,

  I like to use 
[dsconf](https://directory.fedoraproject.org/docs/389ds/design/dsadm-dsconf.html)
 to manage my 389ds instances.

I like also Ansible to manage the configuration. From Ansible, if I run dsconf 
command I see some problems of idempotency.

For example, if I run the first time in a new fresh installation

```
dsconf -D "cn=Directory Manager" -w  ldap://localhost:389 \
    plugin attr-uniq set "attribute uniqueness" \
    --subtree=c=en --enabled=on --attr-name=uid --across-all-subtrees=off
```

it returns 0 and the output

*Successfully changed the cn=attribute uniqueness,cn=plugins,cn=config*. If I 
re-run the same command I will see:

*There is nothing to set in the cn=attribute uniqueness,cn=plugins,cn=config plugin entry*

and the exit status is 1.

Of course I can handle the output in Ansible in order to reclassify the task 
result, but I would have to do that in a lot of cases (best effort).

Of course I can use some idempotent ldapmodify module, but I like to trust 
`dsconf`.

So I wonder if you could consider the benefit of making `dsconf` more idempotent.
For instance, in the above case the exit status could be 0. The same behavior could be adopted for 
all "already exists" results when the value to set is equal to the value already present 
(e.g. `dsconf -D cn=Directory Manager -w ***  ldap://localhost:389 backend 
index add ...` returns "already exists" and exit status 1 if the index is already 
defined).

If you have any other hints to address this problem could let me know.

Thank you very much
Kind Regards
Marco
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Announcing 389 Directory Server 1.4.4.14

2021-03-19 Thread thierry bordaz


   389 Directory Server 1.4.4.14

The 389 Directory Server team is proud to announce 389-ds-base version 
1.4.4.14


Fedora packages are available on Fedora 33.

Fedora 33:

https://koji.fedoraproject.org/koji/taskinfo?taskID=64115273 
 - Koji


https://bodhi.fedoraproject.org/updates/FEDORA-2021-3eed313617 
 - Bodhi


The new packages and versions are:

 * 389-ds-base-1.4.4.14-1

Source tarballs are available for download at Download 
389-ds-base Source 




 Highlights in 1.4.4.14

 * Bug and Security fixes


 Installation and Upgrade

See Download  for 
information about setting up your yum repositories.


To install the server use *dnf install 389-ds-base*

To install the Cockpit UI plugin use *dnf install cockpit-389-ds*

After rpm install completes, run *dscreate interactive*

For upgrades, simply install the package. There are no further 
steps required.


There are no upgrade steps besides installing the new rpms

See Install_Guide 
 for 
more information about the initial installation and setup


See Source  
for information about source tarballs and SCM (git) access.



 Feedback

We are very interested in your feedback!

Please provide feedback and comments to the 389-users mailing list: 
https://lists.fedoraproject.org/admin/lists/389-users.lists.fedoraproject.org


If you find a bug, or would like to see a new feature, file it in our 
GitHub project: https://github.com/389ds/389-ds-base


 * Bump version to 1.4.4.14
 * Issue 4671 - UI - Fix browser crashes
 * Issue 4229 - Fix Rust linking
 * Issue 4658 - monitor - connection start date is incorrect
 * Issue 4656 - Make replication CLI backwards compatible with role
   name change
 * Issue 4656 - Remove problematic language from UI/CLI/lib389
 * Issue 4459 - lib389 - Default paths should use dse.ldif if the
   server is down
 * Issue 4661 - RFE - allow importing openldap schemas (#4662)
 * Issue 4659 - restart after openldap migration to enable plugins (#4660)
 * Issue 4663 - CLI - unable to add objectclass/attribute without x-origin
 * Issue 4169 - UI - updates on the tuning page are not reflected in the UI
 * Issue 4588 - BUG - unable to compile without xcrypt (#4589)
 * Issue 4513 - Fix replication CI test failures (#4557)
 * Issue 4646 - CLI/UI - revise DNA plugin management
 * Issue 4644 - Large updates can reset the CLcache to the beginning of
   the changelog (#4647)
 * Issue 4649 - crash in sync_repl when a MODRDN create a cenotaph (#4652)
 * Issue 4513 - CI - make acl ip address tests more robust
 * Issue 4619 - remove pytest requirement from lib389
 * Issue 4615 - log message when psearch first exceeds max threads per conn

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Announcing 389 Directory Server 1.4.3.22

2021-03-19 Thread thierry bordaz


   389 Directory Server 1.4.3.22

The 389 Directory Server team is proud to announce 389-ds-base version 
1.4.3.22


Fedora packages are available on Fedora 32.

https://koji.fedoraproject.org/koji/taskinfo?taskID=64103798 
 - Fedora 32


https://bodhi.fedoraproject.org/updates/FEDORA-2021-35654cb13a 
 - Bodhi


The new packages and versions are:

 * 389-ds-base-1.4.3.22-1

Source tarballs are available for download at Download 
389-ds-base Source 




 Highlights in 1.4.3.22

 * Bug and Security fixes


 Installation and Upgrade

See Download  for 
information about setting up your yum repositories.


To install the server use *dnf install 389-ds-base*

To install the Cockpit UI plugin use *dnf install cockpit-389-ds*

After rpm install completes, run *dscreate interactive*

For upgrades, simply install the package. There are no further 
steps required.


There are no upgrade steps besides installing the new rpms

See Install_Guide 
 for 
more information about the initial installation and setup


See Source  
for information about source tarballs and SCM (git) access.



 New UI Progress (Cockpit plugin)

The new UI is complete and QE tested.


 Feedback

We are very interested in your feedback!

Please provide feedback and comments to the 389-users mailing list: 
https://lists.fedoraproject.org/admin/lists/389-users.lists.fedoraproject.org


If you find a bug, or would like to see a new feature, file it in our 
GitHub project: https://github.com/389ds/389-ds-base


 * Bump version to 1.4.3.22
 * Issue 4671 - UI - Fix browser crashes
 * lib389 - Add ContentSyncPlugin class
 * Issue 4656 - lib389 - fix cherry pick error
 * Issue 4229 - Fix Rust linking
 * Issue 4658 - monitor - connection start date is incorrect
 * Issue 2621 - lib389 - backport ds_supports_new_changelog()
 * Issue 4656 - Make replication CLI backwards compatible with role
   name change
 * Issue 4656 - Remove problematic language from UI/CLI/lib389
 * Issue 4459 - lib389 - Default paths should use dse.ldif if the
   server is down
 * Issue 4663 - CLI - unable to add objectclass/attribute without x-origin

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: Finding cause of 389ds segfault crash

2021-03-18 Thread thierry bordaz

Hi,

By any chance, do you know if the crash (SIGSEGV) dumped a core?
In that case you may install the debuginfo rpms and analyze the reason for 
the crash with gdb.
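
For example, on a systemd-based host (an assumption; adjust for your init 
system) the core and a backtrace can be obtained roughly like this:

    # list cores captured by systemd-coredump and open the newest one in gdb
    coredumpctl list ns-slapd
    coredumpctl gdb ns-slapd
    # inside gdb, 'thread apply all bt' prints a backtrace of every thread
    # on Fedora/RHEL, install the matching debug symbols first:
    dnf debuginfo-install 389-ds-base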


I am not sure the crash is due to a DB corruption/breakage, but the crash 
will clearly trigger a recovery.

Is the suffix (userRoot) replicated? Is it a supplier or a hub?
I have the feeling it crashed while compacting the changelog: 
bdb_db_compact_one_db is possibly missing a test that the 'db' 
(changelog) exists before dereferencing it.


regards
thierry

On 3/18/21 8:08 AM, Nelson Bartley wrote:

Good afternoon

Our ns-slapd crashed earlier today, with a segfault in libback-ldbm.so
while the system was running a bdb_db_compact_one_db action.

Is there any way to track down/diagnose what might have caused the
segfault? Some type of DB integrity check or something?

Nelson

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure




[389-users] Announcing 389 Directory Server 1.4.3.21

2021-03-05 Thread thierry bordaz


   389 Directory Server 1.4.3.21

The 389 Directory Server team is proud to announce 389-ds-base version 
1.4.3.21


Fedora packages are available on Fedora 32.

https://koji.fedoraproject.org/koji/taskinfo?taskID=63077711 
 - Fedora 32


The new packages and versions are:

 * 389-ds-base-1.4.3.21-1

Source tarballs are available for download at Download 
389-ds-base Source 




 Highlights in 1.4.3.21

 * Bug fixes


 Installation and Upgrade

See Download  for 
information about setting up your yum repositories.


To install the server use *dnf install 389-ds-base*

To install the Cockpit UI plugin use *dnf install cockpit-389-ds*

After rpm install completes, run *dscreate interactive*

For upgrades, simply install the package. There are no further 
steps required.


There are no upgrade steps besides installing the new rpms

See Install_Guide 
 for 
more information about the initial installation and setup


See Source  
for information about source tarballs and SCM (git) access.



 New UI Progress (Cockpit plugin)

The new UI is complete and QE tested.


 Feedback

We are very interested in your feedback!

Please provide feedback and comments to the 389-users mailing list: 
https://lists.fedoraproject.org/admin/lists/389-users.lists.fedoraproject.org


If you find a bug, or would like to see a new feature, file it in our 
GitHub project: https://github.com/389ds/389-ds-base


 * Bump version to 1.4.3.21
 * Issue 4169 - UI - updates on the tuning page are not reflected in the UI
 * Issue 4588 - BUG - unable to compile without xcrypt (#4589)
 * Issue 4513 - Fix replication CI test failures (#4557)
 * Issue 4646 - CLI/UI - revise DNA plugin management
 * Issue 4644 - Large updates can reset the CLcache to the beginning of
   the changelog (#4647)
 * Issue 4649 - crash in sync_repl when a MODRDN create a cenotaph (#4652)
 * Issue 4615 - log message when psearch first exceeds max threads per conn

--

389 Directory Server Development Team



___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Announcing 389 Directory Server 1.4.3.19

2021-02-11 Thread thierry bordaz


   389 Directory Server 1.4.3.19

The 389 Directory Server team is proud to announce 389-ds-base version 
1.4.3.19


Fedora packages are available on Fedora 32.

https://koji.fedoraproject.org/koji/taskinfo?taskID=61767145 
 - Fedora 32


https://bodhi.fedoraproject.org/updates/FEDORA-2021-e55a8d7545 
 - Bodhi


The new packages and versions are:

 * 389-ds-base-1.4.3.19-1

Source tarballs are available for download at Download 
389-ds-base Source 




 Highlights in 1.4.3.19

 * Bug and Security fixes


 Installation and Upgrade

See Download  for 
information about setting up your yum repositories.


To install the server use *dnf install 389-ds-base*

To install the Cockpit UI plugin use *dnf install cockpit-389-ds*

After rpm install completes, run *dscreate interactive*

For upgrades, simply install the package. There are no further 
steps required.


There are no upgrade steps besides installing the new rpms

See Install_Guide 
 for 
more information about the initial installation and setup


See Source  
for information about source tarballs and SCM (git) access.



 New UI Progress (Cockpit plugin)

The new UI is complete and QE tested.


 Feedback

We are very interested in your feedback!

Please provide feedback and comments to the 389-users mailing list: 
https://lists.fedoraproject.org/admin/lists/389-users.lists.fedoraproject.org


If you find a bug, or would like to see a new feature, file it in our 
GitHub project: https://github.com/389ds/389-ds-base


 * bump version to 1.4.3.19
 * Issue 4609 - CVE - info disclosure when authenticating
 * Issue 4581 - A failed re-indexing leaves the database in broken
   state (#4582)
 * Issue 4579 - libasan detects heap-use-after-free in URP test (#4584)
 * Issue 4563 - Failure on s390x: ‘Fails to split RDN “o=pki-tomcat-CA”
   into components’ (#4573)
 * Issue 4526 - sync_repl: when completing an operation in the pending
   list, it can select the wrong operation (#4553)
 * Issue 4324 - Performance search rate: change entry cache monitor to
   recursive pthread mutex (#4569)
 * Issue 5442 - Search results are different between RHDS10 and RHDS11
 * Issue 4548 - CLI - dsconf needs better root DN access control
   plugin validation
 * Issue 4513 - Fix schema test and lib389 task module (#4514)
 * Issue 4534 - libasan read buffer overflow in filtercmp (#4541)

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: ERR - _entryrdn_insert_key - Same DN (dn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=cesnet,dc=cz) is already in the entryrdn file with different ID 10458. Expected ID is 10459

2021-01-18 Thread thierry bordaz



On 1/18/21 5:04 PM, Jan Tomasek wrote:

Hi Thierry,

On 15. 01. 21 11:06, thierry bordaz wrote:

Would you be able to run those commands:

dbscan -f /var/lib/dirsrv//db/cesnet_cz /nsuniqueid.db -k 
=fff-fff-fff-fff -r =fff-fff-fff-fff


This segfaults:

root@cml3:~# dbscan -f 
/var/lib/dirsrv/slapd-cml3/db/test/nsuniqueid.db -k 
=fff-fff-fff-fff -r =fff-fff-fff-fff

Can't find key '=fff-fff-fff-fff'
Segmentation fault

strace:

openat(AT_FDCWD, "/var/lib/dirsrv/slapd-cml3/db/test/nsuniqueid.db", 
O_RDONLY) = 3

fcntl(3, F_GETFD)   = 0
fcntl(3, F_SETFD, FD_CLOEXEC)   = 0
fstat(3, {st_mode=S_IFREG|0600, st_size=16384, ...}) = 0
mmap(NULL, 16384, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f51149b3000
fstat(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(0x88, 0x1), ...}) = 0
write(1, "Can't find key '=fff-fff"..., 50Can't find key 
'=fff-fff-fff-fff'

) = 50
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, 
si_addr=0x7fff3c00} ---

+++ killed by SIGSEGV +++
Segmentation fault


My fault, the key (-k) was missing some 'f'; it should be
dbscan -f /var/lib/dirsrv/slapd-cml3/db/test/nsuniqueid.db -k 
'=ffffffff-ffffffff-ffffffff-ffffffff' -r





I've created simple test suffix (see ldif) and problem persist :(

Error is now:
[18/Jan/2021:15:36:07.639103043 +0100] - ERR - _entryrdn_insert_key - 
Same DN (dn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=test) 
is already in the entryrdn file with different ID 4.  Expected ID is 6.
[18/Jan/2021:15:36:07.639405490 +0100] - ERR - index_addordel_entry - 
database index operation failed BAD 1023, err= Unknown error 
[18/Jan/2021:15:36:07.794625784 +0100] - ERR - NSMMReplicationPlugin - 
_replica_configure_ruv - Failed to create replica ruv tombstone entry 
(dc=test); LDAP error - 1
[18/Jan/2021:15:36:07.794954251 +0100] - ERR - NSMMReplicationPlugin - 
replica_new - Unable to configure replica dc=test:


I tried that (on the master branch) but it did not produce this failure 
during reindex.




root@cml3:~# dbscan -f /var/lib/dirsrv/slapd-cml3/db/test/nsuniqueid.db
=d5658282-599911eb-af359663-f13d537d
=d5658283-599911eb-af359663-f13d537d
=d5658284-599911eb-af359663-f13d537d
=d5658285-599911eb-af359663-f13d537d

root@cml3:~# dbscan -f /var/lib/dirsrv/slapd-cml3/db/test/id2entry.db 
-K 4

id 4
rdn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff
objectClass: top
objectClass: nsTombstone
objectClass: extensibleobject
nsUniqueId: ffffffff-ffffffff-ffffffff-ffffffff
nsds50ruv: {replicageneration} 60059bd30001
nsds50ruv: {replica 1 ldap://cml3.cesnet.cz:389} 
60059bdd00020001 60059c66

 0001
dc: test
nscpEntryDN: dc=test
nsruvReplicaLastModified: {replica 1 ldap://cml3.cesnet.cz:389} 
60059c66
nsds5agmtmaxcsn: 
dc=test;test-ldap31;ldap31.cesnet.cz;636;65535;60059c6600

 01
nsds5agmtmaxcsn: 
dc=test;test-ldap32;ldap32.cesnet.cz;636;65535;60059c6600

 01



The entry (RUV) 'nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=test' 
may take some time to appear: it is the time for the replica to flush the 
in-memory RUV to a DB entry.


root@cml3:~# dbscan -f /var/lib/dirsrv/slapd-cml3/db/test/id2entry.db 
-K 6
Can't set cursor to returned item: BDB0073 DB_NOTFOUND: No matching 
key/data pair found

free(): invalid pointer
Aborted

After I run reindex on backend:
# root@cml3:~# dsctl cml3 db2index test

the ffffffff... entry shows up in nsuniqueid.db

root@cml3:~# dbscan -f /var/lib/dirsrv/slapd-cml3/db/test/nsuniqueid.db
=d5658282-599911eb-af359663-f13d537d
=d5658283-599911eb-af359663-f13d537d
=d5658284-599911eb-af359663-f13d537d
=d5658285-599911eb-af359663-f13d537d
=ffffffff-ffffffff-ffffffff-ffffffff


At this step, db2index and restart did not generate the 
'_entryrdn_insert_key' error message.


Now the server is able to start. It needs a reinitialization of both 
replicas, and after the reinitialization it works. Until the next complete 
reindex. ;)


I've tested once again with a fresh db. The record rdn: 
nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff appears in 
nsuniqueid.db after the reinitialization of both replicas is completed.

Yes there is a small delay before it appears




Isn't my problem related to this: 
https://github.com/389ds/389-ds-base/issues/273 ?


My system is Debian Buster and 389 DS is in version 1.4.4.9 taken from 
Debian Bullseye. If I can provide some more debug info please let me 
know.


Having applied the same steps without hitting that bug, I think 1.4.4.9 is 
likely missing some fixes vs the master branch.

I do not recall a recent (1.4.x) problem around reindex.

regards
thierry


I hope I can operate the servers without doing a reindex on all 
attributes, but it would be nice if this were fixed.


Thanks

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an e

[389-users] Re: ERR - _entryrdn_insert_key - Same DN (dn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=cesnet,dc=cz) is already in the entryrdn file with different ID 10458. Expected ID is 10459

2021-01-15 Thread thierry bordaz

Hi Jan,

Would you be able to run those commands:

dbscan -f /var/lib/dirsrv//db/cesnet_cz /nsuniqueid.db -k 
=fff-fff-fff-fff -r =fff-fff-fff-fff


then for each ID
dbscan -f /var/lib/dirsrv//db/cesnet_cz /id2entry.db -K 

thanks it could help to diagnose.
regards
thierry
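
For readers following the thread, a cleaned-up sketch of that workflow 
(instance and backend names are placeholders; the key is the nsuniqueid of 
the RUV tombstone):

    # look up the RUV tombstone in the nsuniqueid index
    dbscan -f /var/lib/dirsrv/slapd-INSTANCE/db/BACKEND/nsuniqueid.db \
           -k '=ffffffff-ffffffff-ffffffff-ffffffff'
    # then dump each returned entry ID from id2entry
    dbscan -f /var/lib/dirsrv/slapd-INSTANCE/db/BACKEND/id2entry.db -K ID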

On 1/15/21 9:56 AM, Jan Tomasek wrote:

Hi Mark,
On 13. 01. 21 17:33, Mark Reynolds wrote:
This is definitely an older version of the server, I would highly 
suggest to get onto the latest 1.4.x version that you can.  1.4.0 has 
not been maintained in a very long time, and is missing important fixes.


I've upgraded to version 1.4.4.9, which is present in the upcoming Debian 
stable (Bullseye). The result is even worse.


After reindexing:

[14/Jan/2021:16:47:17.866756854 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: aci
[14/Jan/2021:16:47:17.867468013 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: cn
[14/Jan/2021:16:47:17.868599567 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing entryrdn
[14/Jan/2021:16:47:17.869738128 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: entryusn
[14/Jan/2021:16:47:17.870441180 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: givenName
[14/Jan/2021:16:47:17.873042341 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: mail
[14/Jan/2021:16:47:17.874059327 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: mailAlternateAddress
[14/Jan/2021:16:47:17.874611626 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: mailHost
[14/Jan/2021:16:47:17.875048657 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: member
[14/Jan/2021:16:47:17.875445102 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: memberOf
[14/Jan/2021:16:47:17.876030086 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: nsCertSubjectDN
[14/Jan/2021:16:47:17.876506348 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: nscpEntryDN
[14/Jan/2021:16:47:17.877063072 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: nsds5ReplConflict
[14/Jan/2021:16:47:17.877463986 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: nsTombstoneCSN
[14/Jan/2021:16:47:17.877869952 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: nsuniqueid
[14/Jan/2021:16:47:17.878523994 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: ntUniqueId
[14/Jan/2021:16:47:17.878950068 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: ntUserDomainId
[14/Jan/2021:16:47:17.879324003 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: numsubordinates
[14/Jan/2021:16:47:17.879937054 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: objectclass
[14/Jan/2021:16:47:17.880652956 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: owner
[14/Jan/2021:16:47:17.881298947 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: parentid
[14/Jan/2021:16:47:17.881917015 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: seeAlso
[14/Jan/2021:16:47:17.882350399 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: sn
[14/Jan/2021:16:47:17.883025762 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: targetuniqueid
[14/Jan/2021:16:47:17.883440145 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: telephoneNumber
[14/Jan/2021:16:47:17.884134175 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: uid
[14/Jan/2021:16:47:17.884760406 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexing attribute: uniquemember
[14/Jan/2021:16:47:18.749491112 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexed 1000 entries (9%).
[14/Jan/2021:16:47:19.528484588 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexed 2000 entries (19%).
[14/Jan/2021:16:47:20.040531342 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexed 3000 entries (28%).
[14/Jan/2021:16:47:20.769555937 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexed 4000 entries (38%).
[14/Jan/2021:16:47:21.403762300 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexed 5000 entries (47%).
[14/Jan/2021:16:47:22.134055315 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexed 6000 entries (57%).
[14/Jan/2021:16:47:22.861718595 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexed 7000 entries (66%).
[14/Jan/2021:16:47:23.455932352 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexed 8000 entries (76%).
[14/Jan/2021:16:47:24.105353501 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexed 9000 entries (86%).
[14/Jan/2021:16:47:24.603336412 +0100] - INFO - bdb_db2index - 
cesnet_cz: Indexed 1 entries (95%).
[14/Jan/2021:16:47:24.749496424 +0100] - INFO - bdb_db2index - 
cesnet_cz: Finished indexing.
[14/Jan/2021:16:47:47.778127870 +0100] - ERR - _entryrdn_insert_key - 
Same DN (dn: 
nsuniqueid=---,dc=cesnet,dc=cz) is 
already in the entryrdn file with different ID 10454.  Expected ID is 
10456.
[14/Jan/2021:16:47:47.778321609 +0100] - ERR - 

[389-users] Re: Max number of users in a group?

2020-10-23 Thread thierry bordaz


Hi

I would also suggest checking that indexes exist for the membership 
attributes (pres, eq), and also for memberOf if the memberOf plugin is 
enabled. If referential integrity is enabled, you would also want a 
substring index; a sketch follows below.
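
A minimal sketch of adding such indexes, assuming the backend is named 
userRoot and that these indexes are not already defined (recent versions ship 
several of them by default; adjust names and bind options):

    ldapmodify -D "cn=Directory Manager" -W -H ldap://localhost <<'EOF'
    dn: cn=member,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
    changetype: add
    objectClass: top
    objectClass: nsIndex
    cn: member
    nsSystemIndex: false
    nsIndexType: eq
    nsIndexType: pres
    nsIndexType: sub

    dn: cn=memberOf,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
    changetype: add
    objectClass: top
    objectClass: nsIndex
    cn: memberOf
    nsSystemIndex: false
    nsIndexType: eq
    EOF
    # then rebuild the indexes, e.g. dsctl INSTANCE-NAME db2index userRoot

The same pattern applies to uniqueMember if groupOfUniqueNames groups are used.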


regards
thierry
On 10/23/20 1:32 AM, William Brown wrote:

Some work was done a few years back by a user who was storing a similar scale of users in 
their directory. They noticed some delays in replication but those issues were resolved. 
So hopefully it "just works".

If you were going to have any issues with this, it would be:

* network response sizes (since the groups are large it will block the 
connection)
* replication delays (to sort/manage the content)
* update/write delays

So I'd test this with a development environment if I were you, but as 
mentioned, since there are already users doing this, hopefully there are no 
hidden traps for you :)


On 23 Oct 2020, at 08:32, murma...@hotmail.com wrote:

We have a two machine 389DS multimaster cluster holding about 850.000 users. 
It's been working great for over three years now.

But we are creating some big groups, that will have about 150.000 users in them.

I've read in the list some posts about groups larger than that.

But I would like to know if there is any limit or precaution when working with 
groups this size?
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org

—
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs, Australia
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org


[389-users] Re: OS err 12 - Cannot allocate memory

2020-10-09 Thread thierry bordaz



On 10/9/20 11:10 AM, Jan Kowalsky wrote:

Hey,

thanks so much for your answers.


When restarting dirsrv we find in logs:

libdb: BDB2034 unable to allocate memory for mutex; resize mutex region
mmap in opening database environment failed trying to allocate 50
bytes. (OS err 12 - Cannot allocate memory)

Same error, if we run dbverify.

We are running version 1.3.5.17 of 389-ds on Debian stretch:

389-ds 1.3.5.17-2

RAM doesn't seem to be the problem. Only 200 MB of 4 GB is used.

I started with strace - but there are no actionable messages: I get a
schema error - but this is not causal (it has to be fixed anyway...):

rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [INT CHLD], 8) = 0
rt_sigprocmask(SIG_SETMASK, [INT CHLD], NULL, 8) = 0
clone(child_stack=NULL,
flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
child_tidptr=0x7f851e3c69d0) = 27590
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigaction(SIGINT, {sa_handler=0x449930, sa_mask=[],
sa_flags=SA_RESTORER, sa_restorer=0x7f851da16060}, {sa_handler=SIG_DFL,
sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7f851da16060}, 8) = 0
wait4(-1, [09/Oct/2020:10:27:10.365741323 +0200] attr_syntax_create -
Error: the EQUALITY matching rule [caseIgnoreIA5Match] is not compatible
with the syntax [1.3.6.1.4.1.1466.115.121.1.15] for the attribute
[dknFasPickupRule]
[09/Oct/2020:10:27:10.420693888 +0200] attr_syntax_create - Error: the
SUBSTR matching rule [caseIgnoreIA5SubstringsMatch] is not compatible
with the syntax [1.3.6.1.4.1.1466.115.121.1.15] for the attribute
[dknFasPickupRule]
0x7eb57b60, 0, NULL)  = ? ERESTARTSYS (To be restarted if
SA_RESTART is set)


This schema error will prevent the startup but does not explain the DB 
error.
You may fix the schema either by defining dknFasPickupRule with 
syntax 1.3.6.1.4.1.1466.115.121.1.26, or by switching the matching rules to
EQUALITY caseIgnoreMatch / SUBSTR caseIgnoreSubstringsMatch.
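
As a sketch of the first option (the OID below is a placeholder; reuse the 
attribute's existing OID, edit /etc/dirsrv/slapd-<instance>/schema/99user.ldif, 
and restart the instance afterwards):

    attributeTypes: ( 1.3.6.1.4.1.99999.1.2 NAME 'dknFasPickupRule'
      EQUALITY caseIgnoreIA5Match SUBSTR caseIgnoreIA5SubstringsMatch
      SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 X-ORIGIN 'user defined' )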

Any other errors in error logs ?

--- SIGWINCH {si_signo=SIGWINCH, si_code=SI_KERNEL} ---
wait4(-1, [09/Oct/2020:10:27:11.606290855 +0200] libdb: BDB2034 unable
to allocate memory for mutex; resize mutex region
[09/Oct/2020:10:27:12.331303940 +0200] mmap in opening database
environment failed trying to allocate 50 bytes. (OS err 12 - Cannot
allocate memory)
[09/Oct/2020:10:27:12.339630631 +0200] verify DB - dbverify: Failed to
init database
[{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 27590



Given this is mmap and not malloc, is it possible you are hitting something 
like vm.max_map_count? I'm not sure what memory chunk size it's allocating but 
you could increase this parameter to see if that makes space for your mmap 
calls to function.

The other things to check are ulimits and cgroups if you have any of those 
limits set in your system,




Also what I did: checked
   vm.max_map_count (increased to vm.max_map_count = 524288)
   ulimit (unlimited)

Without success.


Could you share the DB tuning entry (cn=config,cn=ldbm 
database,cn=plugins,cn=config).
Also looking at the access/error logs can you identify some operations that 
contributed to this error ?


My DB tuning entries:

dn: cn=config,cn=ldbm database,cn=plugins,cn=config
objectClass: top
objectClass: extensibleObject
cn: config
nsslapd-lookthroughlimit: 5000
nsslapd-mode: 600
nsslapd-idlistscanlimit: 4000
nsslapd-directory: /var/lib/dirsrv/slapd-ldap1/db

Any AVC when ns-slapd accesses /var/lib/dirsrv/slapd-ldap1/db?


nsslapd-dbcachesize: 50
nsslapd-db-logdirectory: /var/lib/dirsrv/slapd-ldap1/db
nsslapd-db-durable-transaction: on
nsslapd-db-checkpoint-interval: 60
nsslapd-db-compactdb-interval: 2592000
nsslapd-db-transaction-batch-val: 0
nsslapd-db-transaction-batch-min-wait: 50
nsslapd-db-transaction-batch-max-wait: 50
nsslapd-db-logbuf-size: 0
nsslapd-db-locks: 1
nsslapd-db-private-import-mem: on
nsslapd-import-cache-autosize: -1
nsslapd-import-cachesize: 0
nsslapd-idl-switch: new
nsslapd-search-bypass-filter-test: on
nsslapd-search-use-vlv-index: on
nsslapd-exclude-from-export: entrydn entryid dncomp parentid
numSubordinates t
  ombstonenumsubordinates entryusn
nsslapd-serial-lock: on
nsslapd-subtree-rename-switch: on
nsslapd-pagedlookthroughlimit: 0
nsslapd-pagedidlistscanlimit: 0
nsslapd-rangelookthroughlimit: 5000
nsslapd-backend-opt-level: 1
nsslapd-db-deadlock-policy: 9
numSubordinates: 1

It doesn't matter what value I use for nsslapd-dbcachesize. It's always
exactly the size which is referenced in the error message: "failed
trying to allocate  bytes".

Since we have replication and another LDAP server which is up to date, I just
reverted the server to an earlier snapshot state where dirsrv started
without problems. I did this already one or two years ago. But of 

[389-users] Re: OS err 12 - Cannot allocate memory

2020-10-07 Thread thierry bordaz

Hi,

Tuning of the BDB mutex count is not possible; BDB uses a default value based 
on the number of hash buckets.
This error is quite rare and I have no explanation why it happened in 
your deployment.


Could you share the DB tuning entry (cn=config,cn=ldbm 
database,cn=plugins,cn=config).
Also looking at the access/error logs can you identify some operations 
that contributed to this error ?
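
For reference, that tuning entry can be dumped with a base-scope search along 
these lines (bind DN and URL are assumptions):

    ldapsearch -LLL -D "cn=Directory Manager" -W -H ldap://localhost \
        -b 'cn=config,cn=ldbm database,cn=plugins,cn=config' -s base '(objectClass=*)'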


best regards
thierry

On 10/7/20 9:39 AM, Jan Kowalsky wrote:

Hi all,

suddenly one of our ldap-servers crashed and won't restart.

When restarting dirsrv we find in logs:

libdb: BDB2034 unable to allocate memory for mutex; resize mutex region
mmap in opening database environment failed trying to allocate 50
bytes. (OS err 12 - Cannot allocate memory)

Same error, if we run dbverify.

We are running version 1.3.5.17 of 389-ds on Debian stretch:

389-ds 1.3.5.17-2

RAM doesn't seem to be the problem. Only 200 MB of 4 GB is used.

The server is part of a replicated cluster. Other servers (running same
software version - more or less on the same virtualisation hardware) are
not affected.

We also got similar errors a few times in the past, but restarting
the service was always possible.

Any ideas?

Thanks and kind regards
Jan
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org


[389-users] Re: Complex MMR scenarios

2020-10-05 Thread thierry bordaz



On 10/2/20 12:11 AM, William Brown wrote:



On 1 Oct 2020, at 20:27, Eugen Lamers  wrote:

Hi,
we want to set up a Multi-Master Replication that represents a scenario with several 
mobile environments which need to replicate with some immobile server from time to time. 
Is it possible, and reasonable, to group the servers of a mobile environment together 
into a kind of sub-level MMR which replicates with the higher-level MMR of the immobile 
environment? This replication between the different "levels" would be triggered 
somehow externally, because there would not always be a (sufficient) connection between 
them.
This would represent some kind of combination of MMR and cascading replication. 
Is there someone with experience with this kind of scenario?


I haven't heard of such a scenario but I'd ask "what are you trying to achieve" 
rather than commenting on the design too much.

An early issue you will hit is that replication is *push* based, so the "immobile" servers need to 
be continually updated to know where the "mobile" server is in order to know how to contact it. 
It's not "pull" based where the moving server could always access the static server.

Additionally, because it's "push" based, the sending server sets the schedule of when to replicate. Certainly, there is 
a window of validity where a server can be "caught up" (the changelog max age parameter is how long a server can be 
disconnected and still replicated to later to "update" it). Again, if this were "pull" based, the mobile 
server could "choose" when to receive its update.


But saying this, I think you have a problem space in mind, and while this may 
be a solution, knowing more about the challenge you want to solve may help us 
give better advice about how to configure your topology and potentially the 
integrating applications.

Thanks,


My understanding is that some hosts may go temporarily offline (mobile) 
while others are always online (immobile). Replication can cope with 
hosts going online and offline, with limitations on how long the hosts are 
offline (by default a host should not be offline for more than 7 days) and 
on the update rate, if a host does not have the capacity (#received updates) 
to catch up. A sketch of the relevant setting follows below.
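
The 7-day default corresponds to the changelog maximum age. A sketch of 
checking and raising it on a 1.3/1.4-style topology (on recent versions the 
changelog has moved into the backend, so the entry name may differ):

    # inspect the current changelog trimming setting
    ldapsearch -LLL -D "cn=Directory Manager" -W -H ldap://localhost \
        -b 'cn=changelog5,cn=config' nsslapd-changelogmaxage
    # raise the maximum age to 14 days
    ldapmodify -D "cn=Directory Manager" -W -H ldap://localhost <<'EOF'
    dn: cn=changelog5,cn=config
    changetype: modify
    replace: nsslapd-changelogmaxage
    nsslapd-changelogmaxage: 14d
    EOF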


regards
thierry

—
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs, Australia
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org


[389-users] Re: Changing the name of a DS-389 attribute or adding a new field

2020-08-06 Thread thierry bordaz

Hi,

EmployeeID looks to be a direct mapping of EmployeeNumber. 
EmployeeNumber is defined in RFC 2798 and delivered as a standard 
definition in /share/dirsrv/schema/06inetorgperson.ldif. Even if 
defining EmployeeID as an alias of EmployeeNumber is possible, I would not 
recommend updating a standard definition. Instead you may try to add 
EmployeeID to the instance-specific custom definitions in 
"/etc/dirsrv/slapd-/schema/99user.ldif". I think those changes 
should be done without the console.
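
As an illustration only (the OID below is a placeholder and must be replaced 
with one from your own namespace), a custom definition in 99user.ldif could 
look like:

    attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'employeeID'
      DESC 'employee identifier for PingFederate'
      EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch
      SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE
      X-ORIGIN 'user defined' )

The attribute also needs to be allowed by an objectClass on the user entries 
(for example a small custom auxiliary objectClass defined in the same file).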


If employeeID is identical to employeeNumber and the users are already 
provisioned, I am afraid the easiest way is to alias the standard 
definition.
Otherwise you could update those entries, adding employeeID based on the 
employeeNumber value.
There is also the possibility of making it a virtual attribute, using a 
combination of a managed entry [1] and an indirect CoS [2], or using 
rewriters (filter rewrite and computed attributes).


[1] 
https://access.redhat.com/documentation/en-us/red_hat_directory_server/11/html/administration_guide/using-managed-entries
[2] 
https://access.redhat.com/documentation/en-us/red_hat_directory_server/11/html/administration_guide/advanced_entry_management-assigning_class_of_service#About_CoS-How_an_Indirect_CoS_Works


regards
thierry

On 8/6/20 3:11 PM, Janet Houser wrote:

Hi Folks,

I'm working to set up a PingFederate server to communicate with Apps 
at a sister location.  I'm told that the software needs to send the 
"employeeID" in order to authenticate with the offsite server.

Under the Directory Server --> Schema -->  Tab Attributes, DS-389 has 
the attribute "employeeNumber" which I can add to a user's LDAP 
information.   There doesn't seem to be
a way to change the name on this page, and when I tried adding a "User 
Defined Attribute", it wouldn't show up under "Advanced" for a user.


Is there a way to add this field to all users and change the name to 
"employeeID"?


I'm searching, but I haven't found a way to do this via the 389-console.

Thanks in advance!

j
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/

List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org

___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org


[389-users] Re: [EXTERNAL] Re: Re: Re: new server setup hanging

2020-06-05 Thread thierry bordaz

Hi,

Sorry to come late on this thread; my understanding is that your second 
server looks like it is hanging. Is it consuming CPU? Does it accept new 
connections and new operations? Is it "hanging" because of bad response time?


If the server is idle, are you sure connections are reaching the server? 
Are you seeing any activity logged in the access log?
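
If it does consume CPU while appearing stuck, a quick way to capture evidence 
is a per-thread CPU view plus a full stack dump of the running process (a 
sketch; assumes gdb is installed on the host):

    top -H -p "$(pidof ns-slapd)"     # per-thread CPU usage
    gdb -p "$(pidof ns-slapd)" -batch -ex 'thread apply all bt' > /tmp/ns-slapd-stacks.txt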


regards

thierry



On 6/1/20 5:03 AM, Crocker, Deborah wrote:
This runs about 1/4 the load of the ones that had an issue. I don't 
know if I can run this up to see it.


*From:* William Brown 
*Sent:* Sunday, May 31, 2020 9:27:58 PM
*To:* 389-users@lists.fedoraproject.org 
<389-users@lists.fedoraproject.org>

*Subject:* [EXTERNAL] [389-users] Re: Re: Re: new server setup hanging


> On 1 Jun 2020, at 11:43, Crocker, Deborah  wrote:
>
> Is this sufficient? Again,  this server has a light load and we 
don't think we saw the problem, although I do note that the CPU usage 
seems pretty high for such a light load.


All threads are idle except thread 1 that is checking if there are new 
connections. I don't see anything obviously wrong here ... :(



>
>
> Thread 26 (Thread 0x7f0600d32700 (LWP 11330)):
> #0  0x7f06420a09a3 in select ()
>    at ../sysdeps/unix/syscall-template.S:81
> #1  0x7f06452e0649 in DS_Sleep ()
>    at /usr/lib64/dirsrv/libslapd.so.0
> #2  0x7f063a136bf7 in deadlock_threadmain ()
>    at /usr/lib64/dirsrv/plugins/libback-ldbm.so
> #3  0x7f064305dc5b in _pt_root (arg=0x557b3b2f5b00)
>    at ../../../nspr/pr/src/pthreads/ptthread.c:201
> #4  0x7f06429fdea5 in start_thread (arg=0x7f0600d32700)
>    at pthread_create.c:307
> #5  0x7f06420a98dd in clone ()
>    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
>
> Thread 25 (Thread 0x7f0600531700 (LWP 11331)):
> #0  0x7f06420a09a3 in select ()
>    at ../sysdeps/unix/syscall-template.S:81
> #1  0x7f06452e0649 in DS_Sleep ()
>    at /usr/lib64/dirsrv/libslapd.so.0
> #2  0x7f063a13a7c7 in checkpoint_threadmain ()
>    at /usr/lib64/dirsrv/plugins/libback-ldbm.so
> #3  0x7f064305dc5b in _pt_root (arg=0x557b3b2f59e0)
>    at ../../../nspr/pr/src/pthreads/ptthread.c:201
> #4  0x7f06429fdea5 in start_thread (arg=0x7f0600531700)
>    at pthread_create.c:307
> #5  0x7f06420a98dd in clone ()
>    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
>
> Thread 24 (Thread 0x7f05ffd30700 (LWP 11332)):
> #0  0x7f06420a09a3 in select ()
>    at ../sysdeps/unix/syscall-template.S:81
> #1  0x7f06452e0649 in DS_Sleep ()
>    at /usr/lib64/dirsrv/libslapd.so.0
> #2  0x7f063a136e47 in trickle_threadmain ()
>    at /usr/lib64/dirsrv/plugins/libback-ldbm.so
> #3  0x7f064305dc5b in _pt_root (arg=0x557b3b2f5c20)
>    at ../../../nspr/pr/src/pthreads/ptthread.c:201
> #4  0x7f06429fdea5 in start_thread (arg=0x7f05ffd30700)
>    at pthread_create.c:307
> #5  0x7f06420a98dd in clone ()
>    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
>
> Thread 23 (Thread 0x7f05ff52f700 (LWP 11333)):
> #0  0x7f06420a09a3 in select ()
>    at ../sysdeps/unix/syscall-template.S:81
> #1  0x7f06452e0649 in DS_Sleep ()
>    at /usr/lib64/dirsrv/libslapd.so.0
> #2  0x7f063a1319f7 in perf_threadmain ()
>    at /usr/lib64/dirsrv/plugins/libback-ldbm.so
> #3  0x7f064305dc5b in _pt_root (arg=0x557b3b2f5440)
>    at ../../../nspr/pr/src/pthreads/ptthread.c:201
> #4  0x7f06429fdea5 in start_thread (arg=0x7f05ff52f700)
>    at pthread_create.c:307
> #5  0x7f06420a98dd in clone ()
>    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
>
> Thread 22 (Thread 0x7f05fed2e700 (LWP 11334)):
> #0  0x7f0642a01a35 in pthread_cond_wait@@GLIBC_2.3.2 ()
>    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
> #1  0x7f0643058270 in PR_WaitCondVar (cvar=0x557b3b55e900, 
timeout=4294967295) at ../../../nspr/pr/src/pthreads/ptsynch.c:385

> #2  0x7f06452ccf58 in slapi_wait_condvar ()
>    at /usr/lib64/dirsrv/libslapd.so.0
> #3  0x7f063abe515e in cos_cache_wait_on_change ()
>    at /usr/lib64/dirsrv/plugins/libcos-plugin.so
> #4  0x7f064305dc5b in _pt_root (arg=0x557b4040b680)
>    at ../../../nspr/pr/src/pthreads/ptthread.c:201
> #5  0x7f06429fdea5 in start_thread (arg=0x7f05fed2e700)
>    at pthread_create.c:307
> #6  0x7f06420a98dd in clone ()
>    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
>
> Thread 21 (Thread 0x7f05fe52d700 (LWP 11335)):
> #0  0x7f0642a01a35 in pthread_cond_wait@@GLIBC_2.3.2 ()
>    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
> #1  0x7f0643058270 in PR_WaitCondVar (cvar=0x557b3b55ebc0, 
timeout=4294967295) at ../../../nspr/pr/src/pthreads/ptsynch.c:385

> #2  0x7f06452ccf58 in slapi_wait_condvar ()
>    at 

[389-users] Re: replication problems

2020-05-11 Thread thierry bordaz

Hi Alberto,

The upstream ticket is https://pagure.io/389-ds-base/issue/51082 with a 
pending PR under review.


best regards
thierry

On 5/11/20 4:48 PM, Alberto Viana wrote:

Hi Thierry,

So I think this is good news. Since I'm waiting for a fix and it affects 
my production servers (high availability), can you post the ticket here so 
I can follow up on the fix?


I will check out and read about the asan.

Thanks a lot.

Alberto Viana

On Mon, May 11, 2020 at 10:21 AM thierry bordaz <mailto:tbor...@redhat.com>> wrote:


Hi Alberto,

I think I reproduced the same crash locally:

(gdb) where
#0  __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:50
#1  0x7f4137c13972 in __GI_abort () at abort.c:100
#2  0x7f4137e6c241 in PR_Assert (
    s=0x7f4138437420 "(vs->sorted == NULL) || (vs->num <
VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >=
VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num))",
file=0x7f4138437400 "ldap/servers/slapd/valueset.c", ln=471)
at ../../.././nspr/pr/src/io/prlog.c:571
#3  0x7f41384079ce in slapi_valueset_done
(vs=0x7f4098016c18) at ldap/servers/slapd/valueset.c:471
#4  0x7f41384085fb in valueset_array_purge
(a=0x7f4098016be0, vs=0x7f4098016c18, csn=0x7f4098017570) at
ldap/servers/slapd/valueset.c:804
#5  0x7f4138408766 in valueset_purge (a=0x7f4098016be0,
vs=0x7f4098016c18, csn=0x7f4098017570) at
ldap/servers/slapd/valueset.c:834
#6  0x7f41383483ce in attr_purge_state_information
(entry=0x7f40980151b0, attr=0x7f4098016be0,
csnUpTo=0x7f4098017570)
    at ldap/servers/slapd/attr.c:739
#7  0x7f413836e410 in entry_purge_state_information
(e=0x7f40980151b0, csnUpTo=0x7f4098017570) at
ldap/servers/slapd/entrywsi.c:292
#8  0x7f4134f8dedb in purge_entry_state_information
(pb=0x7f4098000b60) at
ldap/servers/plugins/replication/repl5_plugins.c:558
#9  0x7f4134f8e283 in multimaster_bepreop_modify
(pb=0x7f4098000b60) at
ldap/servers/plugins/replication/repl5_plugins.c:700
#10 0x7f4134f8dfe3 in multimaster_mmr_preop
(pb=0x7f4098000b60, flags=451) at
ldap/servers/plugins/replication/repl5_plugins.c:588
#11 0x7f41383c12b5 in plugin_call_mmr_plugin_preop
(pb=0x7f4098000b60, e=0x0, flags=451) at
ldap/servers/slapd/plugin_mmr.c:39
#12 0x7f4135094600 in ldbm_back_modify (pb=0x7f4098000b60)
at ldap/servers/slapd/back-ldbm/ldbm_modify.c:635
#13 0x7f41383a1e3f in op_shared_modify (pb=0x7f4098000b60,
pw_change=0, old_pw=0x0) at ldap/servers/slapd/modify.c:1022
#14 0x7f41383a0343 in do_modify (pb=0x7f4098000b60) at
ldap/servers/slapd/modify.c:380
#15 0x00418c2b in connection_dispatch_operation
(conn=0x47eeb28, op=0x47a1750, pb=0x7f4098000b60) at
ldap/servers/slapd/connection.c:624
#16 0x0041ad0b in connection_threadmain () at
ldap/servers/slapd/connection.c:1753
#17 0x7f4137e85869 in _pt_root (arg=0x47c4880) at
../../.././nspr/pr/src/pthreads/ptthread.c:201
#18 0x7f4137e1a4c0 in start_thread (arg=)
at pthread_create.c:479
#19 0x7f4137ced133 in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:95

I will make a test case and open a ticket for that. The problem would
likely be missed by ASAN because it is related to an uninitialized
structure and not a use-after-free, as it initially looked like.

Many thanks for your help on this and your continued investigation. It
helped a lot.

Also, if you would like to produce an ASAN build, it is described in
http://www.port389.org/docs/389ds/howto/howto-addresssanitizer.html
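
For reference, a rough sketch of what that build usually boils down to
(treat the exact steps and flags as assumptions; the linked howto is the
authoritative reference for your version):

    # sketch only -- follow the howto above for details
    ./autogen.sh
    ./configure --enable-debug --enable-asan
    make
    sudo make install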

best regards
thierry

On 5/8/20 2:26 PM, Alberto Viana wrote:

William,

It's supposed to be production, but since it's not working (the
replication) I just left one 389 as the main server, so I can run any
test I want.

I have no idea how to do that, can you point me in the right
direction?

Thanks

Alberto Viana

On Thu, May 7, 2020 at 9:09 PM William Brown <wbr...@suse.de> wrote:


Is this a development/debug build? Do you have a reproducer?
It would be interesting to run this under ASAN ...

> On 7 May 2020, at 22:31, Alberto Viana <alberto...@gmail.com> wrote:
>
> William,
>
> Here's:
> Assertion failure: (vs->sorted == NULL) || (vs->num <
VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >=
VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num)),
at ldap/s

[389-users] Re: replication problems

2020-05-11 Thread thierry bordaz

Hi Alberto,

I think I reproduced the same crash locally:

   (gdb) where
   #0  __GI_raise (sig=sig@entry=6) at
   ../sysdeps/unix/sysv/linux/raise.c:50
   #1  0x7f4137c13972 in __GI_abort () at abort.c:100
   #2  0x7f4137e6c241 in PR_Assert (
    s=0x7f4138437420 "(vs->sorted == NULL) || (vs->num <
   VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >=
   VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num))",
   file=0x7f4138437400 "ldap/servers/slapd/valueset.c", ln=471) at
   ../../.././nspr/pr/src/io/prlog.c:571
   #3  0x7f41384079ce in slapi_valueset_done (vs=0x7f4098016c18) at
   ldap/servers/slapd/valueset.c:471
   #4  0x7f41384085fb in valueset_array_purge (a=0x7f4098016be0,
   vs=0x7f4098016c18, csn=0x7f4098017570) at
   ldap/servers/slapd/valueset.c:804
   #5  0x7f4138408766 in valueset_purge (a=0x7f4098016be0,
   vs=0x7f4098016c18, csn=0x7f4098017570) at
   ldap/servers/slapd/valueset.c:834
   #6  0x7f41383483ce in attr_purge_state_information
   (entry=0x7f40980151b0, attr=0x7f4098016be0, csnUpTo=0x7f4098017570)
    at ldap/servers/slapd/attr.c:739
   #7  0x7f413836e410 in entry_purge_state_information
   (e=0x7f40980151b0, csnUpTo=0x7f4098017570) at
   ldap/servers/slapd/entrywsi.c:292
   #8  0x7f4134f8dedb in purge_entry_state_information
   (pb=0x7f4098000b60) at
   ldap/servers/plugins/replication/repl5_plugins.c:558
   #9  0x7f4134f8e283 in multimaster_bepreop_modify
   (pb=0x7f4098000b60) at
   ldap/servers/plugins/replication/repl5_plugins.c:700
   #10 0x7f4134f8dfe3 in multimaster_mmr_preop (pb=0x7f4098000b60,
   flags=451) at ldap/servers/plugins/replication/repl5_plugins.c:588
   #11 0x7f41383c12b5 in plugin_call_mmr_plugin_preop
   (pb=0x7f4098000b60, e=0x0, flags=451) at
   ldap/servers/slapd/plugin_mmr.c:39
   #12 0x7f4135094600 in ldbm_back_modify (pb=0x7f4098000b60) at
   ldap/servers/slapd/back-ldbm/ldbm_modify.c:635
   #13 0x7f41383a1e3f in op_shared_modify (pb=0x7f4098000b60,
   pw_change=0, old_pw=0x0) at ldap/servers/slapd/modify.c:1022
   #14 0x7f41383a0343 in do_modify (pb=0x7f4098000b60) at
   ldap/servers/slapd/modify.c:380
   #15 0x00418c2b in connection_dispatch_operation
   (conn=0x47eeb28, op=0x47a1750, pb=0x7f4098000b60) at
   ldap/servers/slapd/connection.c:624
   #16 0x0041ad0b in connection_threadmain () at
   ldap/servers/slapd/connection.c:1753
   #17 0x7f4137e85869 in _pt_root (arg=0x47c4880) at
   ../../.././nspr/pr/src/pthreads/ptthread.c:201
   #18 0x7f4137e1a4c0 in start_thread (arg=) at
   pthread_create.c:479
   #19 0x7f4137ced133 in clone () at
   ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

I will make a test case and open a ticket for that. The problem would 
likely be missed by ASAN because it is related to an uninitialized 
structure and not a use-after-free, as it initially looked like.


Many thanks for your help on this and your continued investigation. It 
helped a lot.


Also, if you would like to produce an ASAN build, it is described in 
http://www.port389.org/docs/389ds/howto/howto-addresssanitizer.html


best regards
thierry

On 5/8/20 2:26 PM, Alberto Viana wrote:

William,

It's supposed to be production, but since it's not working (the 
replication) I just left one 389 as the main server, so I can run any test 
I want.


I have no idea how to do that, can you point me in the right direction?

Thanks

Alberto Viana

On Thu, May 7, 2020 at 9:09 PM William Brown wrote:



Is this a development/debug build? Do you have a reproducer? It
would be interesting to run this under ASAN ...

> On 7 May 2020, at 22:31, Alberto Viana <alberto...@gmail.com> wrote:
>
> William,
>
> Here's:
> Assertion failure: (vs->sorted == NULL) || (vs->num <
VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >=
VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num)), at
ldap/servers/slapd/valueset.c:471
> Thread 17 "ns-slapd" received signal SIGABRT, Aborted.
> [Switching to Thread 0x7fffbbfff700 (LWP 13431)]
> 0x7455399f in raise () from /lib64/libc.so.6
> (gdb) frame 3
> #3  0x77b71627 in slapi_valueset_done
(vs=0x7fffb0022aa8) at ldap/servers/slapd/valueset.c:471
> 471        PR_ASSERT((vs->sorted == NULL) || (vs->num <
VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >=
VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num)));
> (gdb) print vs->sorted@21
> $1 = {0x7fffb0023ad0, 0x7fffb0022b50, 0x4, 0x6c7e80, 0x0, 0x0,
0x0, 0x0, 0x0, 0x7fffb0023c00, 0x7fffb00247c0, 0x0, 0x0, 0x0,
0x25, 0x664f7265626d656d, 0x0, 0x0, 0x115, 0x0, 0x0}
>
> Thanks
>
> Alberto Viana
>
> On Wed, May 6, 2020 at 11:38 PM William Brown <wbr...@suse.de> wrote:
>
>
> > On 6 May 2020, at 22:40, Alberto Viana <alberto...@gmail.com> wrote:
> >
> > William,
> >
> > Here's:
> >
> > (gdb) frame 3
> 

[389-users] Re: DNA plugin not working

2020-04-16 Thread thierry bordaz

Hi James,

I would guess that the allocated range is exhausted, i.e. the next value 
has reached maxValue.

Possibly part of the range was taken by another replica.

You can try to get more details with

ldapmodify -D "cn=directory manager" -W
dn: cn=config
changetype: modify
replace: nsslapd-accesslog-level
nsslapd-accesslog-level: 260   (default level 256 plus 4 for internal operations)
-
replace: nsslapd-plugin-logging
nsslapd-plugin-logging: on

and then look up the entry with:

ldapsearch -D DM... -b "cn=UID numbers,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config" -s base nscpentrywsi
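
If the range really is exhausted (dnaNextValue past dnaMaxValue), a minimal 
sketch of widening it on that same entry could look like the following; the 
new ceiling of 100000 is only an illustrative value, pick one that fits your 
numbering plan and stays above dnaNextValue:

ldapmodify -D "cn=directory manager" -W
dn: cn=UID numbers,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
changetype: modify
replace: dnaMaxValue
dnaMaxValue: 100000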


best regards
thierry
On 4/13/20 8:41 PM, CHAMBERLAIN James wrote:

Hi Mark,

Thanks for getting back to me.  After adjusting nsslapd-errorlog-level, here’s 
what I’ve got.

# grep dna-plugin /var/log/dirsrv/slapd-example/errors
[13/Apr/2020:14:30:00.480608036 -0400] - DEBUG - dna-plugin - _dna_pre_op_add - 
dn does not match filter
[13/Apr/2020:14:30:00.486700059 -0400] - DEBUG - dna-plugin - _dna_pre_op_add - 
adding uidNumber to uid=testuser1,ou=People,dc=example,dc=com as -2
[13/Apr/2020:14:30:00.559245389 -0400] - DEBUG - dna-plugin - _dna_pre_op_add - 
retrieved value 0 ret 1
[13/Apr/2020:14:30:00.561303217 -0400] - ERR - dna-plugin - _dna_pre_op_add - 
Failed to allocate a new ID!! 2
[13/Apr/2020:14:30:00.571360868 -0400] - DEBUG - dna-plugin - dna_pre_op - 
Operation failure [1]

And here’s the DNA config:

dn: cn=UID numbers,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: extensibleObject
cn: UID numbers
dnaType: uidNumber
dnamaxvalue: 10
dnamagicregen: 0
dnafilter: (objectclass=posixAccount)
dnascope: dc=example,dc=com
dnanextvalue: 25000

dn: cn=GID numbers,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: extensibleObject
cn: GID numbers
dnaType: gidNumber
dnamaxvalue: 10
dnamagicregen: 0
dnafilter: (objectclass=posixGroup)
dnascope: dc=example,dc=com
dnanextvalue: 25000

Best regards,

James



On Apr 13, 2020, at 2:25 PM, Mark Reynolds  wrote:

Enabling plugin logging will provide a little more detail about what is going 
wrong:

ldapmodify -D "cn=directory manager" -W
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 65536


After running the test you can disable the debug plugin logging by setting the 
log level to zero.
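
A minimal sketch of that reset, reusing the same cn=config entry as above:

ldapmodify -D "cn=directory manager" -W
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 0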

Then share what information is logged when you add a new user.  This is most 
likely a configuration error, so hopefully we can find out what went wrong in 
your setup.  Can you also provide the DNA config entries?

Thanks,

Mark

On 4/13/20 1:50 PM, CHAMBERLAIN James wrote:

Hi all,

I’m trying to use the DNA plugin to add uidNumbers on posixAccounts.  
Everything worked fine in testing, but now that it’s in production I’m seeing 
the following error:

ERR - dna-plugin -_dna_pre_op_add - Failed to allocate a new ID!! 2

I’ve followed the advice in the knowledge base 
(https://access.redhat.com/solutions/875133), about adding an equality index 
with an nsMatchingRule of integerOrderingMatch, but have not seen any 
difference in the server’s behavior.  Any ideas what I should try next?

Thanks,

James


--

389 Directory Server Development Team



[389-users] Re: ACI limiting read to groups a user is member of

2020-02-17 Thread thierry bordaz



On 2/17/20 5:26 AM, Grant Byers wrote:

Got it..


(userattr = "uniqueMember#USERDN")
It allows a member of a groupOfUniqueNames group to read/search that group. If 
you are also supporting groupOfNames groups, you may want to add the bind 
rule (userattr="member#USERDN").
With this rule, the targetfilter is unnecessary and you may remove it, as it is 
quite expensive to evaluate.
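
Put together, a minimal sketch of such an ACI (the attribute list and ACL 
name are only illustrative, adjust them to your schema):

dn: ou=groups,dc=example,dc=com
aci: (targetattr = "objectClass || cn || uniqueMember || member || memberUid")
(version 3.0; acl "Members may read their own groups";
allow (read,compare,search)
(userattr = "uniqueMember#USERDN") or (userattr = "member#USERDN");)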


best regards
thierry



Thanks!

On 17/2/20 2:02 pm, Grant Byers wrote:

On 17/2/20 1:24 pm, William Brown wrote:

On 17 Feb 2020, at 12:19, Grant Byers  wrote:

Hi,

In an effort to tighten search and read permissions on our internal
directory server, we've limited accounts to read certain attributes of
"self". They have search on the entire tree, but otherwise no read
perms. This is all well and good for clients that utilize the memberOf
attribute of self, but not so good for applications that utilize
memberUid, or insist on searching for groupOfUniqueNames or
groupOfNames and then enumerating them programmatically to determine which
groups the user belongs to after binding as the user.

So. I've been reading docs and haven't been able to find anything, but I
was wanting to do something like this;


dn: ou=groups,dc=example,dc=com
aci: (targetattr = "*")
(targetfilter = "(&(objectClass=groupOfUniqueNames)(uniqueMember={{rdn of self}}))")
(version 3.0; acl "Allow authenticated users to read own group
membership"; allow (read,compare,search)
(userdn="ldap:///all");)


where the target filter limits results to only those that match
uniqueMember={{rdn of self}}


Is this possible?

Yes, but I'd suggest you tighten it up a bit. targetattr = * is really 
dangerous, it really means everything, including internal system attributes.

You probably want "(targetattr = "objectClass || uniquemember || cn || memberUid || member || 
memberOf")(targetfilter = "")

There is a section in the redhat ds guide that may help a lot

https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/administration_guide/managing_access_control

In general, keep your aci's as targeted and as specific as possible.

I'm very happy to review these further if you need :)

For sure, I'll definitely be restricting attributes (as I have done with
other targeted ACIs).  I'm working toward least privilege.


I've reviewed that doco multiple times now, but still don't see how this
is possible. I must be missing something! I assume it has to be
targetfilter. Am I to do something like this?


(targetfilter = 
"(&(objectClass=groupOfUniqueNames)(uniqueMember=ldap:///self?dn)")


Thanks,

Grant



Thanks,
Grant

—
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs




[389-users] Re: 389 replication issue

2015-12-17 Thread thierry bordaz

Hi Frank,

The keep alive entry was introduced in https://fedorahosted.org/389/ticket/48266.
The ADD failed, but does the entry exist on the re-initialized replica?
It looks like it was created during the total init, so replicating it 
(the ADD) may fail because the entry already exists.
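
A quick way to check on the re-initialized replica (a sketch; the suffix is 
an assumption, and the keep alive entries are normally named 
"cn=repl keep alive <replica id>" directly under it):

ldapsearch -D "cn=directory manager" -W -b "dc=example,dc=com" "(cn=repl keep alive*)"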


thanks
thierry

On 12/15/2015 08:31 PM, Frank Munsche wrote:

Hi Guys,

I have a replication issue with 389 DS running on CentOS 6.7 and the
following 389 packages installed:


389-admin.x86_64           1.1.35-1.el6         @epel
389-admin-console.noarch   1.1.8-1.el6          @epel
389-adminutil.x86_64       1.1.19-1.el6         @epel
389-console.noarch         1.1.7-1.el6          @epel
389-ds-base.x86_64         1.2.11.15-65.el6_7   @updates
389-ds-base-libs.x86_64    1.2.11.15-65.el6_7   @updates
389-ds-console.noarch      1.2.6-1.el6          @epel
389-dsgw.x86_64            1.1.11-1.el6         @epel


I'm running a multimaster configuration based on two directory servers (ds1,
ds2)

When the replication is initiated at ds1 (replication from ds1 to ds2,
nsds5BeginReplicaRefresh set to 'start') , I find these entries in the error
log of ds1:



[15/Dec/2015:19:10:11 +] NSMMReplicationPlugin - Beginning total update of
replica "agmt="cn=ds1TOds2" (ds2:389)".
[15/Dec/2015:19:10:11 +] NSMMReplicationPlugin - Need to create
replication keep alive entry 

Re: [389-users] Replication error after initializing consumer

2014-08-25 Thread thierry bordaz

Hello Shilen,

   I am able to reproduce on 1.2.11.30  with the following testcase.

   #!/bin/sh

   VAL=${1:-1}
   ldapmodify -h localhost -p 44389 -D "cn=directory manager" -w password -a -f /tmp/my.ldif

   ldapmodify -h localhost -p 44389 -D "cn=directory manager" -w password <<!EOF
   dn: cn=meTo_localhost.localdomain:45389,cn=replica,cn=dc\=example\,dc\=com,cn=mapping tree,cn=config
   changetype: modify
   replace: nsds5BeginReplicaRefresh
   nsds5BeginReplicaRefresh: start


   dn: cn=added${VAL}_1,dc=example,dc=com
   changetype: add
   objectClass: top
   objectClass: person
   sn: added${VAL}
   cn: added${VAL}

   dn: cn=added${VAL}_2,dc=example,dc=com
   changetype: add
   objectClass: top
   objectClass: person
   sn: added${VAL}
   cn: added${VAL}

   dn: cn=added${VAL}_3,dc=example,dc=com
   changetype: add
   objectClass: top
   objectClass: person
   sn: added${VAL}
   cn: added${VAL}
   !EOF

   I can reproduce it since 1.2.11.12 and after. So I doubt it is a
   regression.

   I fail to reproduce it on master, so it seems it has been fixed
   somewhere but I failed to identify the bug number.

   The problem is triggered because the consumer receives an entry it
   already has. It should skip the update (because it already knows it)
   AND update its RUV. Here it does not update the RUV, so replication
   starts again and again from the same point.
   Here we have a log (replication log on consumer) like:

   [25/Aug/2014:09:25:33 +0200] NSMMReplicationPlugin -
   ruv_add_csn_inprogress: successfully inserted csn
   53fae4e20001 into pending list
   [25/Aug/2014:09:25:33 +0200] e4-a0dabef1-abe99f7b - urp_add
   (cn=addedthierry1_1,dc=example,dc=com): an entry with this
   uniqueid already exists.
   [25/Aug/2014:09:25:33 +0200] NSMMReplicationPlugin - conn=14
   op=5 csn=53fae4e20001 process postop: canceling
   operation csn
   [25/Aug/2014:09:25:33 +0200] NSMMReplicationPlugin -
   csnplInsert: CSN Pending list content:
   [25/Aug/2014:09:25:34 +0200] NSMMReplicationPlugin -
   53fae4e40001, not committed

   Where the postop is canceling the operation. That is fine but IMHO
   it should also update the RUV.
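
   To double-check that the consumer's RUV really is stuck, one option (a
   sketch; host, port and the escaped suffix are assumptions matching the
   agreement DN above) is to read nsds50ruv from the replica entry on both
   the supplier and the consumer and compare the maxcsn values:

   ldapsearch -h <host> -p <port> -D "cn=directory manager" -W \
      -b "cn=replica,cn=dc\=example\,dc\=com,cn=mapping tree,cn=config" \
      -s base nsds50ruv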

   thanks
   thierry

On 08/22/2014 09:22 PM, Shilen Patel wrote:
I first noticed it in a suffix that had about 90K entries.  After 
that, I was reproducing it in a suffix with about 280 entries.


Thanks!

-- Shilen

From: thierry bordaz <tbor...@redhat.com>
Date: Friday, August 22, 2014 3:18 PM
To: Shilen Patel <shi...@duke.edu>
Cc: mreyno...@redhat.com, "General discussion list for the 389 Directory server project." <389-users@lists.fedoraproject.org>

Subject: Re: [389-users] Replication error after initializing consumer

How many entries are in the initialized suffix ?

thierry
On 08/22/2014 07:18 PM, Shilen Patel wrote:

I can reproduce it easily and consistently in my environment.
 First off, I'm running 1.2.11.30.  I didn't have this issue
until I upgraded to this version (or at least it never came up).
 And I have a couple of masters and a couple of consumers.  When
everything is in sync, I do the following:

1.  Init via the console from a master to a consumer on one of
the suffixes.
2.  While the init is still happening (i.e. it is somewhere in
the middle of it), I add an entry to the master.
3.  When the init is done, the newly added entry does exist on
the consumer, but the errors show up indicating that the master
is trying to send that ADD to the consumer.

Not sure if that helps?

Thanks!

-- Shilen


From: thierry bordaz <tbor...@redhat.com>
Date: Friday, August 22, 2014 1:10 PM
To: Shilen Patel <shi...@duke.edu>
Cc: mreyno...@redhat.com, "General discussion list for the 389 Directory server project." <389-users@lists.fedoraproject.org>
Subject: Re: [389-users] Replication error after initializing consumer

Hello,

I am still fighting to reproduce this issue.
Have you a reproducible test case, so that I could
investigate locally ?

thanks
thierry

On 08/22/2014 06:47 PM, Shilen Patel wrote:

While the issue is happening, if I add another entry to the
same suffix, it does not replicate over it seems.  The same
error continues.

Thanks!

-- Shilen

From: thierry bordaz <tbor...@redhat.com>
