Proxying read/write requests to a secondary LDAP server

2013-07-08 Thread val john
Hi guys,

We have a primary LDAP server called ldap.example.com. On that server we
have the following structure:

ou=users,ou=ldap,ou=example,ou=com
ou=applications,ou=ldap,ou=example,ou=com

Basically, we need to redirect any read/write request that comes to
ldap.example.com's ou=applications,ou=ldap,ou=example,ou=com subtree
to our second LDAP server, called ldap2.example.com,

and all read/write requests that come to the
ou=users,ou=ldap,ou=example,ou=com OU should be served from our primary
LDAP server.

Is there any way to achieve that? Please advise.

Thank You
John
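
One way to do this (an untested sketch, assuming slapd is built with the
back-ldap proxy backend and an mdb local database; the moduleload path,
rootdn and directory are placeholders) is to let suffix selection do the
work: a proxy database for the ou=applications subtree and a local
database for everything else, e.g. in slapd.conf:

    # proxy everything under ou=applications,... to ldap2.example.com
    moduleload  back_ldap.la

    database    ldap
    suffix      "ou=applications,ou=ldap,ou=example,ou=com"
    uri         "ldap://ldap2.example.com"
    # glue this subtree under the main database so subtree searches
    # based at ou=ldap,ou=example,ou=com still descend into it
    subordinate

    # local database serves ou=users,... and the rest of the tree
    database    mdb
    suffix      "ou=ldap,ou=example,ou=com"
    rootdn      "cn=admin,ou=ldap,ou=example,ou=com"
    directory   /var/lib/ldap

Note that proxied writes are performed with the client's own identity
unless idassert-bind is configured on the proxy database, so the ACLs on
ldap2.example.com have to allow them.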


Re: High load times with mdb

2013-07-08 Thread Bill MacAllister


--On June 27, 2013 1:40:16 AM -0700 Howard Chu h...@symas.com wrote:

 I tried a load on an ext4 system with options 'rw,noatime, user_xattr,
 barrier=1, data=writeback' and got a load time of 01h40m06s.  This is
 the best time I have gotten so far loading on ext4.
 
 Did you try commit=60 barrier=0 ?

Here are the details of a test using commit=60 barrier=0.

mkfs -t ext4 \
-O ^flex_bg ^huge_file ^uninit_bg ^dir_nlink ^extra_isize ^extent
mount -t ext4 -o rw,noatime,barrier=0,commit=60,data=writeback
Filesystem features: has_journal ext_attr resize_inode dir_index
 filetype needs_recovery sparse_super
 large_file
mount options: rw, noatime, user_xattr, commit=60, barrier=0,
data=writeback
elapsed 01h40m55s spd 211.0 k/s

 I ended up writing a script that creates an ext2 file system, loads
 the backend, umounts the partition, converts it to ext4 journaling,
 and then mounts the partition again.  This will allow me to continue
 with the server rebuilds, but it is a pretty ugly hack.
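
Presumably the script boils down to something like this (a rough sketch;
the device, mount point and LDIF file are placeholders, and the slapadd
arguments depend on whether slapd.conf or cn=config is used):

    DEV=/dev/sdXN           # placeholder: dedicated partition for the DB
    DIR=/var/lib/ldap

    mkfs -t ext2 $DEV                       # bulk load on ext2, no journal
    mount -t ext2 -o rw,noatime $DEV $DIR
    slapadd -q -l data.ldif
    umount $DIR

    # add journaling and the remaining ext4 features after the load
    tune2fs -O has_journal,extent,uninit_bg,dir_index $DEV
    e2fsck -fp $DEV
    mount -t ext4 -o rw,noatime,data=writeback $DEV $DIR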

I am now getting close to ext2 performance using ext3, but ext4 is
consistently too slow in all of my tests.  Here are the results of the
fastest ext3 and fastest ext4 tests.

  * mkfs -t ext3 -O has_journal
mount -t ext3 -o rw,noatime,data=writeback
Filesystem features: has_journal ext_attr resize_inode dir_index
 filetype needs_recovery sparse_super
 large_file
mount options: rw, noatime, errors=continue, user_xattr, acl,
   barrier=1, data=writeback
elapsed 22m03s spd 965.6 k/s

  * mkfs -t ext4 -O ^flex_bg ^huge_file ^uninit_bg ^dir_nlink ^extra_isize
mount -t ext4 -o rw,noatime,data=writeback
Filesystem features: has_journal ext_attr resize_inode dir_index
 filetype needs_recovery extent sparse_super
 large_file
mount options: rw, noatime, user_xattr, barrier=1, data=writeback
elapsed 01h32m19s spd 230.6 k/s

During a load the status display would stall periodically.  The worse the
load time, the more frequently the display stalled and the longer it
stalled for.  I'm guessing that this is data being flushed to disk.  I am
also guessing that, since mdb uses memory-mapped files, some tuning of
memory management might help improve performance.  I am not familiar with
the tuning knobs there, so any pointers would be appreciated.
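
(The knobs that usually govern this behaviour are the vm.dirty_* sysctls;
the values below are only a guess at a starting point for experiments, not
recommendations:)

    # flush dirty pages earlier and more often, so the periodic
    # writeback bursts are smaller and stall the writer for less time
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=40
    sysctl -w vm.dirty_expire_centisecs=1000    # 10 seconds
    sysctl -w vm.dirty_writeback_centisecs=500  # 5 seconds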

Bill

-- 

Bill MacAllister
Infrastructure Delivery Group, Stanford University



slapd[16890]: main: TLS init def ctx failed: -1

2013-07-08 Thread Ulrich Windl
Hi!

I found out that "slapd[16890]: main: TLS init def ctx failed: -1" is due to

[pid 16890] open("/etc/ssl/private/slapd.key", O_RDONLY) = -1 EACCES
(Permission denied)

and I wonder whether it wouldn't be possible to provide a better error
message, like:

slapd[16890]: main: TLS init failed to read /etc/ssl/private/slapd.key:
Permission denied
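
(Until then, the underlying problem can be confirmed and fixed along these
lines; the user and group names below are distribution-specific
assumptions, not necessarily what slapd runs as here:)

    # which user does slapd run as, and can it read the key?
    ps -o user= -p $(pidof slapd)
    ls -ld /etc/ssl/private
    ls -l  /etc/ssl/private/slapd.key

    # one common arrangement: a dedicated group that owns the key
    chgrp ssl-cert /etc/ssl/private/slapd.key
    chmod 640 /etc/ssl/private/slapd.key
    usermod -a -G ssl-cert ldap     # assumption: slapd runs as 'ldap'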

Regards,
Ulrich





Mirror mode replication breaks at times.

2013-07-08 Thread Pradyumna
Hi,

I have configured mirror mode replication. It's a 2-node setup. Everything
works fine, but if I don't work on the server for, say, 30/40 minutes or so
and then try to add or delete any users or groups, the change doesn't get
replicated to the other node. I'm not getting any errors in the logs, and if
I restart the slapd service it syncs again and gives the expected results.
I have the same setup in the test environment and it works like a charm; the
only difference is that here the 2 servers are hosted in 2 different,
geographically separated DCs, whereas in test they are in the same DC.

I'm using the OpenLDAP version which comes by default with RHEL 6.3. If it
were a version issue, shouldn't I have expected the same result in test as
well? Please help.

Regards,
/Pradyumna
Sent from my iPhone



Re: What should I use as hostname for php to connect to openLDAP?

2013-07-08 Thread Dan White

On 07/07/13 17:04 -0400, Jason Huang wrote:

Hello - I am a newbie to OpenLDAP and want to get some help here.

I've deployed Apache on an EC2 server, obtained an Elastic IP from Amazon,
and pointed xyz.com to this IP.

I've installed a service provider (simpleSAMLphp) with hostname sp.xyz.com,
pointing to this same IP.

I've also installed an identity provider (simpleSAMLphp) with hostname
idp.xyz.com, pointing to this same IP.

Now I am installing OpenLDAP on this same server. What hostname should I
use for PHP to connect to this LDAP server? Should I use xyz.com or
sp.xyz.com or idp.xyz.com, or something else like ldap.xyz.com? If I
am using ldap.xyz.com, where should I specify this hostname?


Consult the PHP documentation:

http://php.net/manual/en/function.ldap-bind.php
http://php.net/manual/en/function.ldap-sasl-bind.php

The hostname you use within PHP generally shouldn't matter, unless you're
performing a SASL bind and using a hostname-aware mechanism, such as
GSSAPI.
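
For example, a simple bind from PHP looks roughly like this (a minimal
sketch; the hostname, bind DN and password are placeholders, and any name
that resolves to the server will do):

    <?php
    // placeholder hostname -- use whatever name resolves to the server
    $ds = ldap_connect("ldap://ldap.xyz.com");
    ldap_set_option($ds, LDAP_OPT_PROTOCOL_VERSION, 3);

    if (ldap_bind($ds, "cn=admin,dc=xyz,dc=com", "secret")) {
        echo "bound OK\n";
    }
    ldap_unbind($ds);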

--
Dan White



Re: Mirror mode replication breaks at times.

2013-07-08 Thread Quanah Gibson-Mount
--On Monday, July 08, 2013 9:47 PM +1000 Pradyumna neomatrix...@gmail.com 
wrote:



I'm using the OpenLDAP version which comes by default with RHEL 6.3. If it
were a version issue, shouldn't I have expected the same result in test as
well? Please help.


The RHEL6 builds of OpenLDAP are ancient, and known to be problematic for a 
large number of reasons.  As is often noted, it is generally best to avoid 
distribution builds of OpenLDAP.  If you are using RHEL, I suggest looking 
at http://ltb-project.org/wiki/download#openldap


--Quanah

--

Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.

Zimbra ::  the leader in open source messaging and collaboration



Re: Mirror mode replication breaks at times.

2013-07-08 Thread Mark Cairney

Hi,

On 08/07/2013 12:47, Pradyumna wrote:

Hi,

I have configured mirror mode replication. It's a 2-node setup. Everything
works fine, but if I don't work on the server for, say, 30/40 minutes or so
and then try to add or delete any users or groups, the change doesn't get
replicated to the other node. I'm not getting any errors in the logs, and if
I restart the slapd service it syncs again and gives the expected results.
I have the same setup in the test environment and it works like a charm; the
only difference is that here the 2 servers are hosted in 2 different,
geographically separated DCs, whereas in test they are in the same DC.
In addition to what Quanah has said about running the latest stable
release (there have been a number of bug fixes in OpenLDAP since 2.4.23),
this sounds a bit like a clock syncing/drifting issue, particularly since
the 2 servers in close proximity work fine while the 2 that aren't in
close proximity don't.


Having been bitten by this myself in the past: for MMR to be reliable and
successful, the clocks on the servers have to match up almost to the
millisecond. I'd recommend using ntpd and syncing them all to a common
NTP time source.


I have a line like this in my /etc/ntp.conf:

server my.ntp.servers.IP minpoll 4 maxpoll 6 prefer
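
Once ntpd is running you can sanity-check the result on each server with
something like the following; the offsets shown should stay within a few
milliseconds on every node:

    ntpq -p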





Kind regards,

Mark


--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.



Fan-Out replication

2013-07-08 Thread espeake

Oracle talks about doing fan-out replication, which fits what we are
looking at doing.  What we want to do, once a day or once a week, is
replicate our production LDAP from our master to a master LDAP server at
our DR site.  That server would then push changes to the consumer LDAP
servers at our DR site.  We are trying to keep the sites basically
mirrored, but I do not want changes made at the DR site to replicate back
to the production master.


Master LDAP
  |--> Production Consumers
  |--> DR Master LDAP
         |--> DR Consumers

I hope that helps to visualize what we are trying to do.
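
In OpenLDAP terms this is normally just a chain of syncrepl consumers; a
rough sketch of what the DR master's configuration might contain (the
provider URL, suffix, DNs, credentials and interval are placeholders, and
refreshOnly with an interval gives the once-a-day cadence):

    # On the DR master: pull from the production master once a day.
    # Nothing here replicates back, so DR-side changes stay at DR.
    syncrepl rid=001
             provider=ldap://master.example.com
             type=refreshOnly
             interval=01:00:00:00       # dd:hh:mm:ss -> once a day
             searchbase="dc=example,dc=com"
             bindmethod=simple
             binddn="cn=replicator,dc=example,dc=com"
             credentials=secret

    # The DR consumers then point their own syncrepl provider= at the
    # DR master rather than at the production master.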

Thank you,
Eric Speake
Web Systems Administrator
O'Reilly Auto Parts

This communication and any attachments are confidential, protected by 
Communications Privacy Act 18 USCS § 2510, solely for the use of the intended 
recipient, and may contain legally privileged material. If you are not the 
intended recipient, please return or destroy it immediately. Thank you.



Re: Mirror mode replication breaks at times.

2013-07-08 Thread pradyumna dash
Hi,

Thank you so much.  Let me try that.

Regards,
/Pradyumna


