Re: [CentOS] openldap mmr + heartbeat hot standby
Hi Benjamin, Tait,

Thanks for the advice. Setting up heartbeat to watch an IP was easy; monitoring looks a bit more complex, so I'll have to dive into that. At least now I know the right direction to look in.

Thanks,
Wessel

On 05/28/2012 09:01 PM, Tait Clarridge wrote:

Thanks Mark, that does make it clearer. I've made a setup, and heartbeat does this by default: when heartbeat shuts down, it stops slapd as well, assigns the IP to machine2, and starts slapd there. What I want is for slapd to already be running on the failover node, with heartbeat still checking that service for availability; that way I won't have an outdated database on the failover LDAP server. Would you know if there is a way to keep heartbeat from sending the stop command to a particular resource, or do I need to write a script to (not) do this? I wouldn't mind a script, but if that function is already there I'd rather use it.

Thanks,
Wessel

Wessel,

Just pass heartbeat an IP, not a service. If you use IPaddr2 it will also send a gratuitous ARP, which will cut down the failover time, e.g.:

primarynode.mycompany.com IPaddr2::10.10.10.50/24/eth0

All the services will stay running. If you want to run service checks that watch slapd and kick off a heartbeat failover when it breaks, you can use the mon project.

-Tait

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
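For context, Tait's IPaddr2 line is the kind of entry that goes in /etc/ha.d/haresources on both nodes. A minimal sketch, assuming the hostname, address, and interface from his example (not a real setup):

```shell
# /etc/ha.d/haresources (identical on both nodes).
# Only the virtual IP is listed as a managed resource; because no service
# follows it, heartbeat never starts or stops slapd on failover.
# IPaddr2 also sends a gratuitous ARP on takeover, shortening failover time.
primarynode.mycompany.com IPaddr2::10.10.10.50/24/eth0
```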
Re: [CentOS] openldap mmr + heartbeat hot standby
On 05/23/2012 03:39 PM, m.r...@5-cent.us wrote:

Wessel van der Aart wrote:

Hi List, I've set up two OpenLDAP servers in N-way multimaster replication mode in a test environment; both run CentOS 6.2. This works well, but now I'm trying to make these two servers fail over using heartbeat. I've got no experience with heartbeat (or setting up clusters in general), but from what I understand heartbeat starts/stops the service depending on whether the server has the virtual IP assigned. That would be fine for httpd, but since replication doesn't work when slapd is stopped, I was wondering if anyone knows of a way to set up heartbeat with a hot standby, so that the service keeps running but the IP does get reassigned when one node goes down.

A slight clarification: what happens on a failover cluster is that you've got heartbeat running, and each machine looks to see if the other is still alive. At this point, one is live and the other is standby. If/when the standby notices it cannot see the other address - not even a ping - or, if it's been configured to look for a service, such as doing a default search (for apache, that might be a wget of ImAlive.html), it tells the system to assert the IP and brings up all the services for which it's been configured. I haven't done it, but I'd say you could easily configure heartbeat to check, and if the IP is visible but the service times out, tell the live node to shut down its services and take over primaryhood. Hope that's clearer.

mark

Thanks Mark, that does make it clearer. I've made a setup, and heartbeat does this by default: when heartbeat shuts down, it stops slapd as well, assigns the IP to machine2, and starts slapd there. What I want is for slapd to already be running on the failover node, with heartbeat still checking that service for availability; that way I won't have an outdated database on the failover LDAP server.
Would you know if there is a way to keep heartbeat from sending the stop command to a particular resource, or do I need to write a script to (not) do this? I wouldn't mind a script, but if that function is already there I'd rather use it.

Thanks,
Wessel
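As a sketch of the kind of check mon (or a cron job) could run: probe slapd, and only trigger a heartbeat failover when the probe fails. The probe command and the hb_standby path are assumptions, not from the thread, and hb_standby is only echoed here (a dry run) rather than executed:

```shell
#!/bin/sh
# Sketch: decide whether to trigger a heartbeat failover based on a
# health probe.  The probe is passed in as a command, so in real use it
# could be e.g.: ldapsearch -x -H ldap://127.0.0.1 -b '' -s base

slapd_healthy() {
    # Run the probe command given as arguments; succeed if it succeeds.
    "$@" >/dev/null 2>&1
}

maybe_failover() {
    # Print the action instead of calling hb_standby, so this stays a dry run.
    if slapd_healthy "$@"; then
        echo "slapd OK - nothing to do"
    else
        echo "slapd unreachable - would run /usr/share/heartbeat/hb_standby"
    fi
}
```

In real use you would replace the echo in the failure branch with a call to heartbeat's standby mechanism, and have mon (or cron) run the script every few seconds.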
[CentOS] openldap mmr + heartbeat hot standby
Hi List,

I've set up two OpenLDAP servers in N-way multimaster replication mode in a test environment; both run CentOS 6.2. This works well, but now I'm trying to make these two servers fail over using heartbeat. I've got no experience with heartbeat (or setting up clusters in general), but from what I understand heartbeat starts/stops the service depending on whether the server has the virtual IP assigned. That would be fine for httpd, but since replication doesn't work when slapd is stopped, I was wondering if anyone knows of a way to set up heartbeat with a hot standby, so that the service keeps running but the IP does get reassigned when one node goes down.

Thanks,
Wessel
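For reference, the N-way multimaster setup mentioned here is configured in OpenLDAP 2.4 with a serverID plus a syncrepl stanza pointing at the other master. A slapd.conf-style sketch, assuming placeholder hostnames, DNs, and credentials (none of these come from the original post):

```shell
# slapd.conf fragment on server 1.  Server 2 mirrors this with
# "serverID 2" and a provider URL pointing back at server 1.
serverID 1
syncrepl rid=001
         provider=ldap://ldap2.example.com
         type=refreshAndPersist
         retry="5 5 300 +"
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret
mirrormode on
```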
Re: [CentOS] hfs with extended attribute support
I figured that if you use the filesystems and protocols most native to Mac OS, you'll get the best results in stability on the client side; that's why I thought of HFS. But ext4 seems to do the job well. I'll definitely check out Samba too. Do you also serve home directories to them? Had any issues?

Thanks,
Wessel

On 03/08/2012 06:07 PM, Lamar Owen wrote:

Sorry it didn't work out for you. Linus, for one, has a pretty poor opinion of HFS in general, and I'm not thrilled with it myself, due to some issues I had with Tiger on a PowerMac G4 and heavily corrupted filesystems, journaled or not. And I have some of the 'rescue' tools like DiskWarrior, and I've still lost some data. Hopefully your experience with ext4 will work out better. Mac OS X does very well with SMB/CIFS shares, too, if AppleTalk doesn't work out for you. (I run Mac OS X here in a few areas, and even Tiger works well with a Samba server, but I haven't tried any ACLs with it.)
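Since the thread settled on ext4: a fstab sketch with the extended-attribute and ACL options that HFS+ would not accept (device and mount point are placeholders, not from the thread):

```shell
# /etc/fstab - ext4 volume for Mac home folders served via netatalk
/dev/sdb1  /srv/machomes  ext4  defaults,user_xattr,acl  0 2
```

After mounting, `getfattr` and `getfacl` on a test file are a quick way to confirm both features actually work on the volume.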
Re: [CentOS] hfs with extended attribute support
Hi Lamar,

I tried their free version today. At first it did look promising, but as soon as I performed actions on files with ACLs on them, the whole system came down hard, leaving my external HDD corrupted. After several hours I decided to give up and go with ext4. But still, thanks!

Wessel

On 03/07/2012 07:40 PM, Lamar Owen wrote:

On Wednesday, March 07, 2012 01:17:15 PM Wessel van der Aart wrote:

So I add user_xattr and acl to my fstab options, but then it fails to mount. Checking the error in dmesg just gives me "hfs: unable to parse mount options". Does anyone know what's going on and what I should do to make this work?

Well, having used the in-kernel HFS+ filesystem driver before, and found it lacking in a number of areas (like massive corruption under heavy load or when unlinking lots of files), I bought the commercially supported Paragon NTFS/HFS drivers. http://www.paragon-software.com/business/ntfs-linux-professional/ I have not tried extended attribute and ACL support, but the Paragon drivers support full read and write on journaled HFS+ filesystems. It's $40 US, but worth every penny in my book for filesystem compatibility.
[CentOS] hfs with extended attribute support
Hi all,

I've got an HFS+ (not journaled) volume connected to my CentOS 6.2 test server. I installed the kmod-hfs(plus) packages and read/write works fine. But since I'm going to use this to serve Mac home folders via netatalk, I would like to mount it with support for extended attributes and ACLs. So I add user_xattr and acl to my fstab options, but then it fails to mount. Checking the error in dmesg just gives me "hfs: unable to parse mount options". Does anyone know what's going on and what I should do to make this work?

Regards,
Wessel
Re: [CentOS] openldap missing modules
Thanks for the tip. Does this dynamic configuration come with OpenLDAP 2.4? The version they use in the book is 2.3, which is also the version on CentOS 5.7, so I guess I'm safe there. But now I'm wondering if this isn't too outdated. Does it make sense to start by learning an older version? I'm basically just looking for a way to familiarise myself with all the terms and tools, as I'm fairly new to all this (I only have experience with Apple's Open Directory). What do you think?

Wessel

On 10/27/2011 05:28 PM, Craig White wrote:

Ubuntu has been using 'dynamic' configuration (aka cn=config and /etc/ldap/slapd.d) for quite some time now, but you're using CentOS 5.x, which includes an old version of OpenLDAP and uses the 'flat file' configuration (/etc/openldap/slapd.conf). There are bound to be issues at each place where the book talks about 'configuration'. My suggestion is to use some type of virtualization product (VMware, VirtualBox, etc.), install Ubuntu 10.04 LTS in a virtual machine, and then you will track with the book.

Craig

On Oct 27, 2011, at 5:01 AM, Wessel van der Aart wrote:

Actually, I'm reading the book 'Mastering OpenLDAP' from Packt Publishing. The book uses Ubuntu as the distro in its examples, and I just assumed the workings of OpenLDAP between distros wouldn't be any different (except for directory paths). However, I removed the moduleload line, ran 'slaptest -v -u -f /etc/openldap/slapd.conf' (the 'database hdb' bit was already there), and now it's fine.

Thanks,
Wessel

On 10/26/2011 11:11 PM, Alexander Dalloz wrote:

Hi, I assume you are following a random tutorial on the net. Don't do that; it simply does not fit. Instead of using a modulepath (the proper one on CentOS would be /usr/lib/openldap, as pre-defined in slapd.conf; but the backends are not available as modules on CentOS), define your database properly: where you see 'database bdb' in the slapd.conf CentOS ships with, just change bdb into hdb.

Alexander
Re: [CentOS] openldap missing modules
Actually, I'm reading the book 'Mastering OpenLDAP' from Packt Publishing. The book uses Ubuntu as the distro in its examples, and I just assumed the workings of OpenLDAP between distros wouldn't be any different (except for directory paths). However, I removed the moduleload line, ran 'slaptest -v -u -f /etc/openldap/slapd.conf' (the 'database hdb' bit was already there), and now it's fine.

Thanks,
Wessel

On 10/26/2011 11:11 PM, Alexander Dalloz wrote:

Hi, I assume you are following a random tutorial on the net. Don't do that; it simply does not fit. Instead of using a modulepath (the proper one on CentOS would be /usr/lib/openldap, as pre-defined in slapd.conf; but the backends are not available as modules on CentOS), define your database properly: where you see 'database bdb' in the slapd.conf CentOS ships with, just change bdb into hdb.

Alexander
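The fix Alexander describes amounts to the following slapd.conf change on CentOS 5 (suffix and rootdn below are placeholders, not values from the thread):

```shell
# /etc/openldap/slapd.conf on CentOS 5: no modulepath/moduleload lines
# are needed, because the backends are compiled in.  Just switch the
# database type from bdb to hdb:
database  hdb
suffix    "dc=example,dc=com"
rootdn    "cn=Manager,dc=example,dc=com"
directory /var/lib/ldap

# Then verify the config without writing anything:
#   slaptest -v -u -f /etc/openldap/slapd.conf
```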
[CentOS] openldap missing modules
Hi List,

I'm currently setting up an OpenLDAP server and included the following lines in my slapd.conf:

modulepath /usr/lib/ldap
moduleload back_hdb

After finishing my config, I ran slaptest on it and got an error saying that the modulepath doesn't exist. I checked, and it indeed isn't there; in fact, I can't find it anywhere on my system (CentOS 5.7). The packages I've installed through yum are openldap, openldap-servers, and openldap-clients. Does anyone know where to find this folder, or do I have to install some package to get this module?

Thanks,
Wessel
[CentOS] bond empty after reboot
Hi all,

I've set up an Ethernet bond on my CentOS 5.6 server. When I reboot, the bond does come up, but all the slaves have been cleared and I have to re-add them manually with ifenslave. Does anyone know a solution to this? Am I missing something? Of course I can add it to my rc.local, but there must be a more elegant way. Please see my configs below.

Thanks,
Wessel

ifcfg-bond0:

DEVICE=bond0
IPADDR=xxx.xx.x.xx
NETMASK=255.255.255.0
NETWORK=xxx.xx.x.xx
BROADCAST=xxx.xx.x.xx
GATEWAY=
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_MODULE_OPTS='mode=802.3ad miimon=80'
TYPE=BOND

ifcfg-eth0 (same for eth1, eth2, eth3):

# Intel Corporation 82576 Gigabit Network Connection
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=YES
TYPE=ethernet
HWADDR=xx:xx:xx:xx:xx:xx

/etc/modprobe.conf:

alias eth0 igb
alias eth1 igb
alias eth2 igb
alias eth3 igb
alias eth4 bnx2
alias eth5 bnx2
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptsas
alias scsi_hostadapter2 ata_piix
alias scsi_hostadapter3 usb-storage
alias net-pf-10 off
alias ipv6 off
options ipv6 disable=1
alias bond0 bonding
options bond0 miimon=80 mode=4
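Two things in the configs above look suspect to me, and both are assumptions worth testing rather than confirmed fixes: the CentOS 5 initscripts compare SLAVE against lowercase "yes" (so SLAVE=YES may be silently ignored, which would explain the empty bond), and they read bonding parameters from BONDING_OPTS, not BONDING_MODULE_OPTS. A sketch of the adjusted files:

```shell
# ifcfg-bond0: bonding parameters moved to BONDING_OPTS
DEVICE=bond0
IPADDR=xxx.xx.x.xx
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=802.3ad miimon=80"

# ifcfg-eth0 (and eth1-eth3): lowercase 'yes' for SLAVE
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes
HWADDR=xx:xx:xx:xx:xx:xx
```

After a `service network restart`, `cat /proc/net/bonding/bond0` should list all four slaves if the enslaving worked.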
Re: [CentOS] connection speeds between nodes
Thanks for all the responses; that really gives me a good idea of where to pay attention. The software we're using to distribute our renders is RoyalRender; I'm not sure if any optimization is possible there, I'll check it out. So far it seems that the option of using NFS stands or falls with the use of sync. Does anyone here use NFS without sync in production? Does data corrupt often? All the data sent from the nodes can be reproduced, so I would think an error is acceptable if it happens once a month or so. Are there any other options more suitable in this situation? I thought about GFS with iSCSI, but I'm not sure if that will work if the filesystem to be shared already exists in production.

Thanks,
Wessel

On Tue, 8 Mar 2011 17:25:03 +0000 (GMT), John Hodrien j.h.hodr...@leeds.ac.uk wrote:

On Tue, 8 Mar 2011, Ross Walker wrote:

Well, on my local disk I don't cache the data of tens or hundreds of clients, and a server can have a memory fault and oops just as easily as any client. Also, I believe it doesn't sync every single write (unless the client mounts with 'sync', which is only for special cases and not what I am talking about), only when the client issues a sync or when the file is closed. The client is free to use async I/O if it wants, but the server SHOULD respect the client's wishes for synchronous I/O. If you set the server 'async', then all I/O is async whether the client wants it or not.

I think you're right that this is how it should work; I'm just not entirely sure that's actually generally the case (whether that's because typical applications try to do sync writes, or for other reasons, I don't know). Figures below are for just changing the server to sync, everything else identical. The client does not have 'sync' set as a mount option.
Both machines are attached to the same gigabit switch (so favouring sync as far as you reasonably could with gigabit):

sync; time (dd if=/dev/zero of=testfile bs=1M count=1; sync)

async: 78.8 MB/sec
sync: 65.4 MB/sec

That seems like a big enough performance hit to me to at least consider the merits of running async. That said, running dd with oflag=direct appears to bring the performance up to async levels:

oflag=direct with sync NFS export: 81.5 MB/s
oflag=direct with async NFS export: 87.4 MB/s

But if you've not got control over how your application writes out to disk, that's no help.

jh
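For anyone reproducing jh's comparison: the only change between the two runs is the export flag on the server. A sketch, assuming placeholder paths and network:

```shell
# /etc/exports - the sync/async flag is the variable under test
/srv/render  10.10.10.0/24(rw,sync,no_root_squash)
# /srv/render 10.10.10.0/24(rw,async,no_root_squash)  # faster, but data in
#   flight can be lost if the server crashes before it hits disk

# After editing, re-export without restarting nfsd:
#   exportfs -ra
```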
[CentOS] connection speeds between nodes
Hi All,

I've been asked to set up a 3D render farm at our office. At the start it will contain about 8 nodes, but it should be built for growth. The setup I had in mind is as follows: all the data is already stored on a StorNext SAN filesystem (Quantum); this is mounted on a CentOS server through fibre optics, which in turn shares the FS over NFS to all the render nodes (also CentOS). We've estimated that the average file sent to each node will be about 90 MB, so that's what I'd like the average connection to handle. I know that gigabit Ethernet should be able to do that (testing with iperf confirms it), but testing the speed to already-existing NFS shares gives me a 55 MB/s maximum. As I'm not familiar with network-share performance tweaking, I was wondering if anybody here is, and could give me some info on this? I also thought of giving all the nodes two 1 Gb Ethernet ports and putting those in a bond; will this do any good, or do I have to look at the NFS server side first?

Thanks,
Wessel
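A common starting point for the client side before reaching for bonded links is to bump the NFS transfer sizes and use TCP. The values and names below are generic starting points, not figures measured for this workload:

```shell
# /etc/fstab on a render node (server name and paths are placeholders)
nfsserver:/export/renderdata  /mnt/renderdata  nfs  tcp,rsize=32768,wsize=32768,hard,intr  0 0

# Check what the mount actually negotiated (the server may cap the sizes):
#   grep renderdata /proc/mounts
```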