Disable or remove Bind DN syntax check

2020-11-03 Thread Giuseppe

Hi to all,

for my company I'm trying to set up an LDAP proxy in front of our Active
Directory implementation. After some time I found several problems with some
critical applications that do not support multiple OUs and CNs formed as
"Surname Name", caused by the bad structure and nomenclature of our AD, which
we can't change.
To work around the problem I used the rwm module to rewrite the bind DN part
of the client query to the AD format name.surname@domain, but the proxy
returns:

[root@client ~]# ldapsearch -H ldap://192.168.29.134 -D 
"CN=Name.Surname,OU=subou,OU=Users HOUSE,DC=domain,DC=int" -W
ldap_bind: Invalid syntax (21)
        additional info: bindDN massage error
some logs:

Nov  3 21:32:33 proxy slapd[1309]: conn=1001 op=0 do_bind
Nov  3 21:32:33 proxy slapd[1309]: >>> dnPrettyNormal: 
Nov  3 21:32:33 proxy slapd[1309]: <<< dnPrettyNormal: , 
Nov  3 21:32:33 proxy slapd[1309]: conn=1001 op=0 BIND 
dn="cn=Name.Surname,ou=subou,ou=Users HOUSE,dc=domain,dc=int" method=128
Nov  3 21:32:33 proxy slapd[1309]: do_bind: version=3 
dn="cn=Name.Surname,ou=subou,ou=Users HOUSE,dc=domain,dc=int" method=128
Nov  3 21:32:33 proxy slapd[1309]: daemon: activity on 1 descriptor
Nov  3 21:32:33 proxy slapd[1309]: daemon: activity on:
Nov  3 21:32:33 proxy slapd[1309]: ==> rewrite_context_apply [depth=1] 
string='cn=Name.Surname,ou=subou,ou=Users HOUSE,dc=domain,dc=int'
Nov  3 21:32:33 proxy slapd[1309]: ==> rewrite_rule_apply 
rule='^([C,c][N,n]=)([^.]*)\.([^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)$'
 string='cn=Name.Surname,ou=subou,ou=Users HOUSE,dc=domain,dc=int' [1 pass(es)]
Nov  3 21:32:33 proxy slapd[1309]: daemon: epoll: listen=7 active_threads=0 
tvp=NULL
Nov  3 21:32:33 proxy slapd[1309]: daemon: epoll: listen=8 active_threads=0 
tvp=NULL
Nov  3 21:32:33 proxy slapd[1309]: daemon: epoll: listen=9 active_threads=0 
tvp=NULL
Nov  3 21:32:33 proxy slapd[1309]: daemon: epoll: listen=10 active_threads=0 
tvp=NULL
Nov  3 21:32:33 proxy slapd[1309]: ==> rewrite_rule_apply 
rule='^([C,c][N,n]=)([^.]*)\.([^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)$'
 string='cn=Name.Surname,ou=subou,ou=Users HOUSE,dc=domain,dc=int' [1 pass(es)]
Nov  3 21:32:33 proxy slapd[1309]: ==> rewrite_rule_apply 
rule='^([C,c][N,n]=)([^.]*)\.([^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)$'
 string='cn=Name.Surname,ou=subou,ou=Users HOUSE,dc=domain,dc=int' [1 pass(es)]
Nov  3 21:32:33 proxy slapd[1309]: ==> rewrite_rule_apply 
rule='^([C,c][N,n]=)([^.]*)\.([^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)$'
 string='cn=Name.Surname,ou=subou,ou=Users HOUSE,dc=domain,dc=int' [1 pass(es)]
Nov  3 21:32:33 proxy slapd[1309]: ==> rewrite_rule_apply 
rule='^([C,c][N,n]=)([^.]*)\.([^.]*)(,[O,o][U,u][^.]*)(,[O,o][U,u][^.]*)$' 
string='cn=Name.Surname,ou=subou,ou=Users HOUSE,dc=domain,dc=int' [1 pass(es)]
Nov  3 21:32:33 proxy slapd[1309]: ==> rewrite_context_apply [depth=1] 
res={0,'name.surn...@domain.int'}
Nov  3 21:32:33 proxy slapd[1309]: [rw] bindDN: 
"cn=Name.Surname,ou=subou,ou=Users HOUSE,dc=domain,dc=int" -> 
"name.surn...@domain.int"
Nov  3 21:32:33 proxy slapd[1309]: >>> dnPrettyNormal: 
Nov  3 21:32:33 proxy slapd[1309]: send_ldap_result: conn=1001 op=0 p=3
Nov  3 21:32:33 proxy slapd[1309]: send_ldap_result: err=21 matched="" 
text="bindDN massage error"
Nov  3 21:32:33 proxy slapd[1309]: send_ldap_response: msgid=1 tag=97 err=21
Nov  3 21:32:33 proxy slapd[1309]: conn=1001 op=0 RESULT tag=97 err=21 
text=bindDN massage error
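
For reference, a minimal sketch of the rwm configuration that produces the
rewrite seen above (reconstructed from the logged rule patterns, so the
exact directives in my config may differ; the real config repeats the rule
once per OU depth):

overlay rwm
rwm-rewriteEngine on
rwm-rewriteContext bindDN
# shown for a single OU level; the substitution deliberately produces the
# AD UPN form, which is not a valid DN -- and that is exactly what
# dnPrettyNormal() then rejects with err=21 ("bindDN massage error")
rwm-rewriteRule "^[Cc][Nn]=([^.]*)\.([^.]*),[Oo][Uu].*$" "$1.$2@domain.int" ":"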


I have downloaded the source code to try to remove or skip this check, but 
with my limited programming skills I haven't found a solution after a month.
So, is there a way (or a better way) to accomplish this?

Best regards,
Giuseppe.

Config file of my test env:

### Schema includes ###
#include        /etc/ldap/schema/corba.schema
#include        /etc/ldap/schema/core.schema
#include        /etc/ldap/schema/cosine.schema
#include        /etc/ldap/schema/duaconf.schema
#include        /etc/ldap/schema/dyngroup.schema
#include        /etc/ldap/schema/inetorgperson.schema
#include        /etc/ldap/schema/java.schema
#include        /etc/ldap/schema/misc.schema
#include        /etc/ldap/schema/nis.schema
#include        /etc/ldap/schema/openldap.schema
#include        /etc/ldap/schema/ppolicy.schema
#include        /etc/ldap/schema/collective.schema
#include        /etc/openldap/schema/ad.schema


include         /etc/openldap/schema/corba.schema
include         /etc/openldap/schema/core.schema
include         /etc/openldap/schema/cosine.schema
#include        /etc/ldap/schema/duaconf.schema
#include

Re: HDB to MDB migration results in higher CPU usage on openldap consumers

2020-11-03 Thread paul.jc
Quanah Gibson-Mount wrote:
> If you're using aliases in your LDAP DB, then yes, that'll absolutely 
> trigger issues such as this.  The use of aliases generally indicates poor 
> DIT design. ;)

Hey Quanah, understood. :) I inherited this OpenLDAP database and I'm not well 
versed in aliases. How do I verify whether aliases are actually being used?
I have no LDIF files in my core or custom schema config that define aliases.
An ldapsearch on cn=config returns the default schema definitions, but that 
is all:

olcAttributeTypes: ( 2.5.4.1 NAME ( 'aliasedObjectName' 'aliasedEntryName' ) D
 ESC 'RFC4512: name of aliased object' EQUALITY distinguishedNameMatch SYNTAX 
 1.3.6.1.4.1.1466.115.121.1.12 SINGLE-VALUE )

and

olcObjectClasses: ( 2.5.6.1 NAME 'alias' DESC 'RFC4512: an alias' SUP top STRU
 CTURAL MUST aliasedObjectName )

Is there something else I should search to verify usage of aliases in the DB?  

Thanks.  
Paul


Re: Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-03 Thread Quanah Gibson-Mount




--On Tuesday, November 3, 2020 6:41 PM +0100 Simone Piccardi wrote:

The problem manifests itself without periodicity, and looking at the
number of connections beforehand we could not see any usage peak. We tried
to strace the slapd threads during the problem, and they seem blocked on a
mutex, waiting for the one running at 100% (on a single CPU, user time).
I'm attaching top output captured during one of these events.


If you can attach to the process while this is occurring, I'd suggest 
obtaining a full GDB backtrace to see what the different slapd threads are 
doing at that time.  Also, what mutex specifically is slapd waiting on?
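
For example, something along these lines (a sketch; assumes gdb and
debugging symbols are available on the box):

gdb -p $(pidof slapd) -batch -ex "thread apply all bt full" > slapd-bt.txt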



From the behaviour I was suspecting (just a wild and uninformed guess)
some indexing issue blocking all access.

We tried changing tool-threads to 4 because I found it cited in some
examples as related to the threads used for indexing, but the change had no
effect. Re-reading the latest version of the man page, if I understand it
correctly, it's effective only for slapadd etc.


Correct, this setting has nothing to do with a running slapd process.  It 
only affects how many threads are used by slapadd & slapindex while doing 
indexing during offline operations.  Additionally, any value above 2 has no 
effect with back-mdb (it'll just be set back to 2).
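
In slapd.conf terms (the value shown is only illustrative):

# consulted only by the offline tools (slapadd/slapindex);
# back-mdb clamps anything above 2 back down to 2
tool-threads 2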



So a first question is: is there any other configuration parameter
related to indexing that I can try?


If you really believe that this is indexing related, you should be able to 
tell from the slapd logs at "stats" logging, where you would see a 
specific search taking a significant amount of time.  However, that 
generally does not lead to a paused system, as searches shouldn't 
trigger a mutex issue like the one you're describing.


Is this on RHEL7 or later?  If you have both "stats" and "sync" logging 
enabled (the recommended setting for replicating nodes), what does the 
slapd log show happening at this time?
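
That is, in slapd.conf:

loglevel stats sync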


Regards,
Quanah

--

Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:



Re: ppolicy issues

2020-11-03 Thread Quanah Gibson-Mount




--On Tuesday, November 3, 2020 5:30 PM +0100 Kresimir Petkovic wrote:

password-hash {CLEARTEXT}


As documented in slapd.conf(5), this is a GLOBAL configuration option that 
applies to all databases.  You'd need to set up two different slapd 
instances for this case.
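
A hypothetical layout (paths and ports chosen only for illustration):

# one slapd instance per hash policy, each with its own config,
# database directory, and listener URL
slapd -f /etc/openldap/slapd-cleartext.conf -h "ldap://0.0.0.0:389/"
slapd -f /etc/openldap/slapd-sha.conf -h "ldap://0.0.0.0:390/"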


Regards,
Quanah

--

Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:



Re: HDB to MDB migration results in higher CPU usage on openldap consumers

2020-11-03 Thread Quanah Gibson-Mount




--On Tuesday, November 3, 2020 6:51 PM + paul...@yahoo.com wrote:

Quanah Gibson-Mount wrote:

If you're using aliases in your LDAP DB, then yes, that'll absolutely
trigger issues such as this.  The use of aliases generally indicates
poor  DIT design. ;)


Hey Quanah, understood. :) I inherited this OpenLDAP database and I'm not
well versed in aliases. How do I verify whether aliases are actually being
used?  I have no LDIF files in my core or custom schema config that define
aliases.  An ldapsearch on cn=config returns the default schema definitions,
but that is all:


Hi Paul,

You would need to search your back-mdb database and see if there are any 
objects with an objectClass of "alias".  I.e.,


ldapsearch ... "(objectClass=alias)" 1.1

filling in your bind details of course (I'd suggest something with full 
read access to the entire db).
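
For example (hypothetical bind DN and search base; substitute your own):

ldapsearch -H ldap://localhost -D "cn=Manager,dc=example,dc=com" -W \
    -b "dc=example,dc=com" "(objectClass=alias)" 1.1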


Regards,
Quanah


--

Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:



ppolicy issues

2020-11-03 Thread Kresimir Petkovic

Hi guys,

I'm having issues trying to set up multiple databases with different 
password hash algorithms.


My first db has to store plaintext passwords, and I'm using:

password-hash {CLEARTEXT}
overlay ppolicy
ppolicy_hash_cleartext

and my second one needs to use SHA for its password hash. I have it like 
this in slapd.conf:


password-hash {SHA}
overlay ppolicy
ppolicy_hash_cleartext

When I insert a user into LDAP via ldapadd, it stores the plaintext 
password for that user in the userPassword attribute.
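
For example, a search like the following (placeholder DNs) returns the
password exactly as I typed it (note ldapsearch may print the value
base64-encoded, after "userPassword::"):

ldapsearch -H ldapi:/// -D "cn=Manager,dc=example,dc=com" -W \
    -b "uid=someuser,dc=example,dc=com" userPassword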


Can I have different password-hash directives for each database? Or is my 
ppolicy overlay just not working?



Thanks in advance.


BR,

Kreso


RE: Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-03 Thread Maucci, Cyrille
If I were facing this symptom, I'd capture a couple of pstack outputs of
the slapd process while the problem is occurring (and maybe, in
correlation, perf top -p on the slapd PID if the pstacks are not enough).
That should help avoid guessing.
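
Something like this (a sketch; assumes a single slapd process, and that
pstack and perf are installed):

for i in 1 2 3; do pstack $(pidof slapd) > /tmp/slapd-pstack.$i; sleep 2; done
perf top -p $(pidof slapd)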

++Cyrille

-Original Message-
From: Simone Piccardi [mailto:picca...@truelite.it] 
Sent: Tuesday, November 3, 2020 6:41 PM
To: openldap-technical@openldap.org
Subject: Connections blocked for some tens of seconds while a single slapd 
thread running 100%



Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-03 Thread Simone Piccardi
Hi,

we are seeing quite a strange behaviour in which a slapd server stops
processing connections for some tens of seconds while a single thread is
running at 100% on a single CPU and all the other CPUs are almost idle.
When the problem arises there is no significant iowait or disk I/O (and
no swapping; that's disabled). Context switches drop to near zero (from
some tens of thousands to some hundreds). Load average is almost always
under 2.

The server has 32G of RAM and 4 HT processors and is running
openldap-2.4.54 in mirror mode (but no delta replication) using the mdb
backend. The same behaviour was also found with 2.4.53. OpenLDAP is the
only service running on it, apart from SSH and some monitoring tools.
Database maxsize is 25G, of which around 17G is used.

I'm attaching a redacted configuration of the main server (the secondary
one is the same, with the IDs swapped for mirror-mode use).

Most of the time it works just fine, processing up to a few thousand
read queries per second alongside some tens of writes per second.
Connections are managed by HAproxy, which sends them to this server by
default (used as the main node). Often these stalls are short (around 10
seconds) and we don't lose connections, but when the problem arises and
lasts long enough, HAproxy switches to the second node, and we get
downtime. Staying on the secondary node we see the same behaviour.

The problem manifests itself without periodicity, and looking at the
number of connections beforehand we could not see any usage peak. We tried
to strace the slapd threads during the problem, and they seem blocked on a
mutex, waiting for the one running at 100% (on a single CPU, user time).
I'm attaching top output captured during one of these events.

From the behaviour I was suspecting (just a wild and uninformed guess)
some indexing issue blocking all access.

We tried changing tool-threads to 4 because I found it cited in some
examples as related to the threads used for indexing, but the change had no
effect. Re-reading the latest version of the man page, if I understand it
correctly, it's effective only for slapadd etc.

So a first question is: is there any other configuration parameter
related to indexing that I can try?

Anyway, I'm not sure there actually is an indexing issue (the indexes are
quite basic). I suspected it because there are a lot of writes, and
there is no strace activity during the stall.  Should I look somewhere else?

Any suggestions on further checks or configuration changes would be more
than appreciated.

Regards
Simone

#
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#

include /usr/local/openldap/etc/openldap/schema/corba.schema
include /usr/local/openldap/etc/openldap/schema/core.schema
include /usr/local/openldap/etc/openldap/schema/cosine.schema
include /usr/local/openldap/etc/openldap/schema/duaconf.schema
include /usr/local/openldap/etc/openldap/schema/dyngroup.schema
include /usr/local/openldap/etc/openldap/schema/inetorgperson.schema
include /usr/local/openldap/etc/openldap/schema/java.schema
include /usr/local/openldap/etc/openldap/schema/misc.schema
include /usr/local/openldap/etc/openldap/schema/nis.schema
include /usr/local/openldap/etc/openldap/schema/openldap.schema
include /usr/local/openldap/etc/openldap/schema/ppolicy.schema
include /usr/local/openldap/etc/openldap/schema/collective.schema

#add OurOrganization schema
include /usr/local/openldap/etc/openldap/schema/OurOrganization.schema

# Allow LDAPv2 client connections.  This is NOT the default.
allow bind_v2

# This is for mirrormode replication
serverID 11

# Global ACLs
include /usr/local/openldap/etc/openldap/acls/global.acl

# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#referral   ldap://root.openldap.org

pidfile  /usr/local/openldap/var/run/slapd.pid
argsfile /usr/local/openldap/var/run/slapd.args

# options: none sync parse shell stats2 stats ACL config filter BER conns args
# packets trace any
# https://www.openldap.org/doc/admin24/slapdconfig.html
#loglevel none
#loglevel stats sync
loglevel stats
#loglevel none
#loglevel any


# The next three lines allow use of TLS for encrypting connections using a
# dummy test certificate which you can generate by running
# /usr/libexec/openldap/generate-server-cert.sh. Your client software may balk
# at self-signed certificates, however.
TLSCACertificatePath /usr/local/openldap/etc/openldap/certs
TLSCACertificateFile /usr/local/openldap/etc/openldap/certs/rootCA.pem
TLSCertificateFile /usr/local/openldap/etc/openldap/certs/server.crt
TLSCertificateKeyFile /usr/local/openldap/etc/openldap/certs/server.key


#TLSCertificateFile /etc/pki/tls/certs/ldap1_pubkey.pem
#TLSCertificateKeyFile /etc/pki/tls/certs/ldap1_privkey.pem

sizelimit 25

# Setup the idle timeout to prevent app servers from taking down ldap.
#