Hi!

I don't know the current versions and their improvements, but the usual 
solutions would be these:
1) Delay the error response after repeated auth failures, maybe exponentially 
(con: open connections may pile up while delayed)
2) Automatically blacklist a user after a number of auth failures (con: a valid 
user may be locked out by an attacker)
3) Automatically blacklist a host that is causing repeated auth failures (con: 
other users from the same host may be locked out)
4) Temporarily disable a combination of host/user after repeated auth failures 
(there should be a mechanism to reset)

Such blacklists could be stored in LDAP, naturally...
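The host/user throttle in option (4) could be sketched roughly like this; the names, thresholds, and in-memory storage here are all illustrative assumptions (a real deployment would persist state, e.g. in LDAP as suggested above):

```python
import time
from collections import defaultdict

# Illustrative sketch of option (4): throttle a (host, user) pair after
# repeated auth failures, with an exponentially growing delay and a reset.
MAX_FAILURES = 3   # free failures before throttling kicks in (assumed value)
BASE_DELAY = 2     # seconds; doubles with each further failure
MAX_DELAY = 300    # cap the penalty

_failures = defaultdict(lambda: {"count": 0, "blocked_until": 0.0})

def record_failure(host, user):
    entry = _failures[(host, user)]
    entry["count"] += 1
    extra = entry["count"] - MAX_FAILURES
    if extra >= 0:
        delay = min(BASE_DELAY * (2 ** extra), MAX_DELAY)
        entry["blocked_until"] = time.time() + delay

def is_blocked(host, user):
    return time.time() < _failures[(host, user)]["blocked_until"]

def reset(host, user):
    # e.g. called after a successful bind, or manually by an admin
    _failures.pop((host, user), None)
```

The reset hook matters: without it, option (4) degrades into option (2)'s lockout problem for legitimate users.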

Regards,
Ulrich

>>> Cyril Grosjean <[email protected]> wrote on 11.02.2014 at 19:59 in 
>>> message
<[email protected]>:

> I use a couple of OpenLDAP 2.4.36 servers in a multi-master replication 
> setup.
> Write operations are sent to a single server, and then replicated to the 
> second one.
> 
> I sometimes have write operations "peaks" of about 900 operations 
> (modifications of the pwdFailureTime attribute mainly) per hour.
> The number of bind failures per user is neither limited nor reset yet and I 
> especially noticed a script that connects to the directory with the
> same service account and (wrong) password. So, until this script is 
> modified with the right password (which will take time, unfortunately),
> it can generate tons of failures, and thus tons of replications.
> 
> I noticed a several minutes replication delay between the directories, at 
> peak time, when comparing the contextCSN attributes.
> It looks to me a big delay with regards to the number of modifications. 
> Anything I could do to limit that delay ?
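Measuring that delay from the contextCSN values mentioned above can be automated; the helper below is a hedged sketch that assumes the standard CSN layout (a UTC timestamp followed by `#`-separated fields):

```python
from datetime import datetime

def csn_timestamp(csn):
    # A CSN looks like 20140211185900.123456Z#000000#001#000000;
    # the leading field before the first '#' is a UTC timestamp.
    return datetime.strptime(csn.split("#")[0], "%Y%m%d%H%M%S.%fZ")

def replication_delay_seconds(provider_csn, consumer_csn):
    # Positive result: the consumer's contextCSN lags the provider's.
    delta = csn_timestamp(provider_csn) - csn_timestamp(consumer_csn)
    return delta.total_seconds()
```

Reading the two contextCSN attributes periodically and feeding them through this would turn the manual comparison into a monitorable metric.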



