Re: High Availability

2015-04-22 Thread Ciro Iriarte
On Apr 22, 2015 12:51 AM, Bron Gondwana br...@fastmail.fm wrote: On Wed, Apr 22, 2015, at 02:27 PM, Ciro Iriarte wrote: Interesting, is the use of several instances needed because cyrus cannot scale with threads in a single instance scenario? There are two interesting reasons: 1) global

Re: High Availability

2015-04-22 Thread Bron Gondwana
On Wed, Apr 22, 2015, at 11:32 PM, Ciro Iriarte wrote: Hi Bron, it makes sense from that perspective although it seems to imply a management nightmare. Do you use any management/automation (webscale if you want) framework? Less than you might imagine :) We have a single file

Re: High Availability

2015-04-21 Thread Ciro Iriarte
of storage each, replicating to different machines. I had only used a simple setup, one imap server with several spools That works fine, but it won't give you high availability. Is there some more information somewhere? Not much unfortunately. We've written about our setup many times, most

Re: High Availability

2015-04-21 Thread Bron Gondwana
On Wed, Apr 22, 2015, at 02:27 PM, Ciro Iriarte wrote: Interesting, is the use of several instances needed because cyrus cannot scale with threads in a single instance scenario? There are two interesting reasons: 1) global locks. There are some - mailboxes.db for example. If you have

Re: High Availability

2015-04-20 Thread Bron Gondwana
, one imap server with several spools That works fine, but it won't give you high availability. Is there some more information somewhere? Not much unfortunately. We've written about our setup many times, most recently here: http://blog.fastmail.com/2014/12/04/standalone-mail-servers/ and more

High Availability

2015-04-20 Thread Lalot Dominique
Hi, We used cyrus for many years and switched to a proprietary system. We are just looking back to cyrus. I would like to know the status of cyrus and HA: This documentation seems to consider that replication is edge.. http://cyrusimap.org/docs/cyrus-imapd/2.4.9/install-replication.php and it has

Re: High Availability

2015-04-20 Thread Niels Dettenbach
Am Montag, 20. April 2015, 08:32:52 schrieb Lalot Dominique: I would like to know the status of cyrus and HA: This documentation seems to consider that replication is edge.. http://cyrusimap.org/docs/cyrus-imapd/2.4.9/install-replication.php and it has been written in 2007 Cyrus is a product

Re: High Availability

2015-04-20 Thread Bron Gondwana
On Mon, Apr 20, 2015, at 04:32 PM, Lalot Dominique wrote: Hi, We used cyrus for many years and switched to a proprietary system. We are just looking back to cyrus. I would like to know the status of cyrus and HA: This documentation seems to consider that replication is edge..

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-29 Thread Andrew Morgan
On Wed, 29 Sep 2010, Tomasz Chmielewski wrote: Hmm - I added this to imapd.conf: annotation_db: skiplist duplicate_db: skiplist mboxlist_db: skiplist ptscache_db: skiplist quota_db: skiplist seenstate_db: skiplist tlscache_db: skiplist When starting cyrus, I have this: Sep 29
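Collected as an imapd.conf fragment, the backend settings quoted above look like this (a sketch only; other required options such as configdirectory and partition-default are omitted):

    annotation_db: skiplist
    duplicate_db: skiplist
    mboxlist_db: skiplist
    ptscache_db: skiplist
    quota_db: skiplist
    seenstate_db: skiplist
    tlscache_db: skiplist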

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Pascal Gienger
On 28 Sep 2010 at 08:50, Tomasz Chmielewski wrote: Sep 28 01:10:10 omega cyrus/ctl_cyrusdb[21728]: DBERROR db4: Program version 4.2 doesn't match environment version Are you sure on each node the _SAME_ Cyrus version linked to the _SAME_ bdb libs is running? And - just a little side

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Sebastian Hagedorn
--On 28. September 2010 08:50:00 +0200 Tomasz Chmielewski man...@wpkg.org wrote: How do you manage your Cyrus installations highly-available? Check the archives. There have been many discussions regarding this. I thought a minimal example could be like below: internet |

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Tomasz Chmielewski
On 28.09.2010 09:13, Pascal Gienger wrote: On 28 Sep 2010 at 08:50, Tomasz Chmielewski wrote: Sep 28 01:10:10 omega cyrus/ctl_cyrusdb[21728]: DBERROR db4: Program version 4.2 doesn't match environment version Are you sure on each node the _SAME_ Cyrus version linked to the _SAME_ bdb

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Michael Menge
Quoting Tomasz Chmielewski man...@wpkg.org: How do you manage your Cyrus installations highly-available? I thought a minimal example could be like below: internet | server1 - server2 There would be Heartbeat/Pacemaker running on both servers. Its role would be: -
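A minimal sketch of that failover role in Pacemaker crm shell syntax: a floating service IP, the shared storage mount, and Cyrus itself, grouped so they start together on one node. The resource names, documentation addresses, device path, and the lsb:cyrus-imapd init script name are assumptions, not taken from the thread:

    primitive p_ip ocf:heartbeat:IPaddr2 params ip=192.0.2.10 cidr_netmask=24
    primitive p_fs ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/var/spool/imap fstype=ext3
    primitive p_cyrus lsb:cyrus-imapd
    group g_mail p_fs p_ip p_cyrus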

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Tomasz Chmielewski
On 28.09.2010 10:56, Michael Menge wrote: Cyrus depends on locks and mmap, so your fs must support them. I had written a summary of the discussions about Cyrus and HA in the old wiki. But the wiki was replaced by the new wiki. I will have a look if I have a copy. I would be grateful. If

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Michael Menge
Quoting Tomasz Chmielewski man...@wpkg.org: On 28.09.2010 10:56, Michael Menge wrote: Cyrus depends on locks and mmap, so your fs must support them. I had written a summary of the discussions about Cyrus and HA in the old wiki. But the wiki was replaced by the new wiki. I will have a look if

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Tomasz Chmielewski
On 28.09.2010 11:55, Michael Menge wrote: Replication was introduced in 2.3.x. There are other features in 2.3.x I don't want to live without (e.g. delayed expunge). There was a discussion on the lists about Debian wanting to upgrade cyrus. The main problem is the upgrade path (update of

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Bron Gondwana
On Tue, Sep 28, 2010 at 12:13:14PM +0200, Tomasz Chmielewski wrote: On 28.09.2010 11:55, Michael Menge wrote: Replication was introduced in 2.3.x. There are other features in 2.3.x I don't want to live without (e.g. delayed expunge). There was a discussion on the lists about Debian

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread John Madden
Still, we need to have Cyrus database, mail storage accessible for both servers. I thought using glusterfs for it would be a good idea (assuming Cyrus only runs on one of the servers at a given time). IMO, don't use glusterfs for this. I found it to not even be sufficient for a PHP session

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Tomasz Chmielewski
On 28.09.2010 15:01, John Madden wrote: Still, we need to have Cyrus database, mail storage accessible for both servers. I thought using glusterfs for it would be a good idea (assuming Cyrus only runs on one of the servers at a given time). IMO, don't use glusterfs for this. I found it to not

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread John Madden
Any other suggestions? There are alternatives like Ceph[1], but it is just too new (and potentially can have some edge cases). DRBD + GFS/OCFS2 just seems too complex for such a setup. If you're doing failover, you don't need a cluster filesystem. You can use just plain DRBD+ext4 if you
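A minimal drbd.conf resource sketch for that kind of plain failover pair; hostnames, disks, and addresses are placeholders. The ext4 filesystem on /dev/drbd0 is created once, and only the current primary mounts it; the standby never mounts it:

    resource r0 {
      protocol C;                  # synchronous replication
      on mail1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.0.2.1:7789;
        meta-disk internal;
      }
      on mail2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.0.2.2:7789;
        meta-disk internal;
      }
    }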

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Michael Menge
Quoting Tomasz Chmielewski man...@wpkg.org: On 28.09.2010 15:01, John Madden wrote: Still, we need to have Cyrus database, mail storage accessible for both servers. I thought using glusterfs for it would be a good idea (assuming Cyrus only runs on one of the servers at a given time). IMO,

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Andre Felipe Machado
Hello. AFAIK, cyrus needs posix file locks and mmap support. GlusterFS needs FUSE and it only supports writable mmap files after kernel 2.6.26 or so. Therefore, you need a recent kernel and recent fuse. Further, you need to fine-tune your configuration extremely carefully, as the most robust clustered

Re: high-availability Cyrus (i.e. glusterfs)?

2010-09-28 Thread Tomasz Chmielewski
On 28.09.2010 12:55, Bron Gondwana wrote: On Tue, Sep 28, 2010 at 12:13:14PM +0200, Tomasz Chmielewski wrote: On 28.09.2010 11:55, Michael Menge wrote: Replication was introduced in 2.3.x. There are other features in 2.3.x I don't want to live without (e.g. delayed expunge). There was a

High Availability approaches for Cyrus

2010-03-15 Thread Simpson, John R
Greetings all, I've spent a good deal of time searching the Info-Cyrus archives (and various googled articles) to identify the recommended ways to improve Cyrus availability and reduce disaster recovery time. The two main approaches appear to be Cyrus replication and file system

Re: High Availability approaches for Cyrus

2010-03-15 Thread John Madden
- We have three Cyrus servers, each with a single large mailstore. Would there be a significant advantage to splitting them into multiple smaller mailstores? We’re using Perdition but not Murder / Aggregator. Murder rocks, IMO, well worth the learning curve of the setup. If you're going

Re: High Availability approaches for Cyrus

2010-03-15 Thread Michael Menge
Quoting Simpson, John R john_simp...@reyrey.com: Greetings all, I've spent a good deal of time searching the Info-Cyrus archives (and various googled articles) to identify the recommended ways to improve Cyrus availability and reduce disaster recovery time. The two main

RE: High Availability approaches for Cyrus

2010-03-15 Thread Simpson, John R
-Original Message- From: John Madden [mailto:jmad...@ivytech.edu] Sent: Monday, March 15, 2010 2:07 PM To: Simpson, John R Cc: info-cyrus@lists.andrew.cmu.edu Subject: Re: High Availability approaches for Cyrus - We have three Cyrus servers, each with a single large mailstore

Re: High availability email server...

2006-08-02 Thread Daniel Eckl
Well, as far as I know, the mailboxes.db and other databases are only opened and modified by the master process. But I'm not sure here. But as your assumption sounds correct and because this seems to work with cluster (and I fully believe you here, no question), your assumption regarding the

Re: High availability email server...

2006-08-01 Thread Daniel Eckl
Hi Scott! Your statements cannot be correct for logical reasons. While on the file locking level you are fully right, cyrus heavily depends on critical database access where you need application-level database locking. As only one master process can lock the database, a second one either cannot

Re: High availability email server...

2006-08-01 Thread Dave McMurtrie
Daniel Eckl wrote: Hi Scott! Your statements cannot be correct for logical reasons. While on the file locking level you are fully right, cyrus heavily depends on critical database access where you need application-level database locking. As only one master process can lock the database, a

Re: High availability email server...

2006-08-01 Thread Daniel Eckl
Well, I don't have cluster knowledge, and so of course I simply believe you that a good cluster system will never have file locking problems. I already stated this below! But how will the cluster affect application level database locking? That was my primary question and you didn't name this

Re: High availability email server...

2006-08-01 Thread Andrew Morgan
On Tue, 1 Aug 2006, Daniel Eckl wrote: Well, I don't have cluster knowledge, and so of course I simply believe you that a good cluster system will never have file locking problems. I already stated this below! But how will the cluster affect application level database locking? That was my

Re: High availability email server...

2006-07-31 Thread Chris St. Pierre
Michael-- One of the major problems you'd run into is /var/lib/imap, the config directory. It contains, among other things, a Berkeley DB of information about the mail store. GFS, Lustre, and other cluster filesystems do file-level locking; in order to properly read and write to the BDB

RE: High availability email server...

2006-07-31 Thread Andrew Laurence
At 11:49 PM +0200 7/28/06, Pascal Gienger wrote: In the Apple case we need to distinguish the Apple XSAN harddisk chassis from the XSAN software. The XSAN software seems to give you a special filesystem for SAN issues (at least I read this on their webpage). Let me dissect this a bit. The Xserve

Re: High availability email server...

2006-07-31 Thread Andrew Laurence
At 4:18 PM -0400 7/28/06, John Madden wrote: Sorry, please bear with my ignorance, I'm not very informed about NFS, but what's wrong with locking against a real block device? NFS is a file sharing protocol that doesn't provide full locking semantics the way block devices do. Has Cyrus

Re: High availability email server...

2006-07-31 Thread Wil Cooley
On Fri, 2006-07-28 at 15:33 -0700, Andrew Morgan wrote: On Fri, 28 Jul 2006, Rich Graves wrote: My question: So is *anyone* here happy with Cyrus on ext3? We're a small site, only 3200 users, 246GB mail. I'd really rather not try anything more exotic for supportability reasons, but I'm

Re: High availability email server...

2006-07-31 Thread Andrew Morgan
On Mon, 31 Jul 2006, Wil Cooley wrote: How big is your journal? I have instructions for determining the size here, because it's non-obvious: http://nakedape.cc/wiki/PlatformNotes_2fLinuxNotes (BTW, you can drop the 'defaults' from the entry in your fstab; 'defaults' exists to fill the column
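For what it's worth, a sketch of checking the journal on an ext3 volume (the device name is a placeholder; the journal conventionally lives in inode 8):

    dumpe2fs -h /dev/sdb1 | grep -i journal
    debugfs -R 'stat <8>' /dev/sdb1 | grep -iw size

And once any real mount option is present, the fstab entry indeed needs no 'defaults', e.g. /dev/sdb1 /var/spool/imap ext3 noatime 0 2.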

Re: High availability email server...

2006-07-31 Thread Andrew Morgan
On Mon, 31 Jul 2006, Wil Cooley wrote: Well, 32MB is small for a write-heavy filesystem. But if you're not seeing any problems with kjournald stalling while it flushes, then it might not be worth the trouble of re-creating the journal as a larger size. It's unlikely to hurt anything, but I

Re: High availability email server...

2006-07-31 Thread Wil Cooley
On Mon, 2006-07-31 at 15:40 -0700, Andrew Morgan wrote: On Mon, 31 Jul 2006, Wil Cooley wrote: How big is your journal? I have instructions for determining the size here, because it's non-obvious: http://nakedape.cc/wiki/PlatformNotes_2fLinuxNotes (BTW, you can drop the 'defaults'

Re: High availability email server...

2006-07-31 Thread rgraves
Kinda surprising, but it DOES have something to do with Cyrus. Caspur did their case study on cluster filesystems with their e-mail environment. It used Cyrus IMAP and some kind of SMTP (I think it was Postfix or Their paper talks about Maildir. If you connect to mailbox.caspur.it:993 you'll

Re: High availability email server...

2006-07-29 Thread Michael Menge
Hi, Quoting Pascal Gienger [EMAIL PROTECTED]: I would NEVER suggest to mount the cyrus mail spool via NFS, locking is important and for these crucial things I like to have a real block device with a real filesystem, so SANs are ok to me. Does someone use lustre as a cyrus mail spool? Would

Re: High availability email server...

2006-07-29 Thread Daniel Eckl
Hi Michael! As already said in this thread: Cyrus cannot share its spool. No two cyrus instances can use the same spool, databases and lockfiles. For load balancing you can use a murder setup and for HA you can use replication. Best, Daniel Michael Menge wrote: Hi, Quoting Pascal Gienger
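A minimal sketch of what the replication side of that advice can look like with the 2.3-era sync code; the host name and credentials are placeholders, and the exact option names should be checked against the version in use:

    # imapd.conf on the master
    sync_log: 1
    sync_host: replica.example.org
    sync_authname: repluser
    sync_password: secret

    # cyrus.conf on the master (START section): rolling replication
    syncclient    cmd="sync_client -r"

    # cyrus.conf on the replica (SERVICES section)
    syncserver    cmd="sync_server" listen="csync"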

Re: High availability email server...

2006-07-29 Thread Tom Samplonius
Pascal Gienger wrote: David Korpiewski [EMAIL PROTECTED] wrote: I spent about 6 months fighting with Apple XSAN and Apple OSX mail to try to create a redundant cyrus mail cluster. First of all, don't try it, it is a waste of time. Apple states that mail on an XSAN is not supported. The

Re: High availability email server...

2006-07-28 Thread Chris St. Pierre
Chad-- We've put /var/lib/imap and /var/spool/imap on a SAN and have two machines -- one active, and one hot backup. If the active server fails, the other mounts the storage and takes over. This is not yet in production, but it's a pretty simple setup and can be done without running any

Re: High availability email server...

2006-07-28 Thread David Korpiewski
I spent about 6 months fighting with Apple XSAN and Apple OSX mail to try to create a redundant cyrus mail cluster. First of all, don't try it, it is a waste of time. Apple states that mail on an XSAN is not supported. The reason is that it simply won't run. The Xsan can't handle the

Re: High availability email server...

2006-07-28 Thread kbaker
David Korpiewski wrote: I spent about 6 months fighting with Apple XSAN and Apple OSX mail to try to create a redundant cyrus mail cluster. First of all, don't try it, it is a waste of time. Apple states that mail on an XSAN is not supported. The reason is that it simply won't run. The

Re: High availability email server...

2006-07-28 Thread rgraves
Chris St. Pierre wrote: We've put /var/lib/imap and /var/spool/imap on a SAN and have two machines -- one active, and one hot backup. If the active server fails, the other mounts the storage and takes over. This is not yet Also consider /var/spool/{mqueue,clientmqueue,postfix}. Depending on

RE: High availability email server...

2006-07-28 Thread Heinzmann, Robert
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Chris St. Pierre Sent: Friday, 28 July 2006 15:12 To: Chad A. Prey Cc: info-cyrus@lists.andrew.cmu.edu Subject: Re: High availability email

Re: High availability email server...

2006-07-28 Thread Fabio Corazza
Pascal Gienger wrote: There are techniques to handle these situations - for xfs (as an example) consider having *MUCH* RAM in your machine and always mount it with logbufs=8. Is XFS so RAM intensive? I would NEVER suggest to mount the cyrus mail spool via NFS, locking is important and for

Re: High availability email server...

2006-07-28 Thread John Madden
for that, even if they are pretty CPU and I/O intensive (I use it for multimedia sharing - a lot of images that need to be shared across 4 nodes with Apache serving them). GFS is a *cluster* filesystem. We're talking about high availability here, not clustering. GFS could certainly be used

Re: High availability email server...

2006-07-28 Thread Michael Johnson
On Jul 28, 2006, at 1:40 PM, Pascal Gienger wrote: So if Apple says that Xsan does not handle many files they admit that their HFS+ file system is crap for many small files. This is completely untrue. Xsan, although branded by Apple, is not completely an Apple product. ADIC makes

Re: High availability email server...

2006-07-28 Thread Rich Graves
Clustered filesystems don't make any sense for Cyrus, since the application itself doesn't allow simultaneous read/write. Just use a normal journaling filesystem and fail over by mounting the FS on the backup server. Consider replication such as DRBD or proprietary SAN replication if you feel

Re: High availability email server...

2006-07-28 Thread Fabio Corazza
Rich Graves wrote: Clustered filesystems don't make any sense for Cyrus, since the application itself doesn't allow simultaneous read/write. Just use a normal journaling filesystem and fail over by mounting the FS on the backup server. Consider replication such as DRBD or proprietary SAN

RE: High availability email server...

2006-07-28 Thread Pascal Gienger
David S. Madole [EMAIL PROTECTED] wrote: That's just not true as a general statement. SAN is a broad term that applies to much more than just farming out block devices. Some of the more sophisticated SANs are filesystem-based, not block-based. This allows them to implement more advanced

Re: High availability email server...

2006-07-28 Thread Rich Graves
Fabio Corazza wrote: Rich Graves wrote: Clustered filesystems don't make any sense for Cyrus, since the application itself doesn't allow simultaneous read/write. Just use a normal journaling filesystem and fail over by mounting the FS on the backup server. Consider replication such as DRBD or

Re: High availability email server...

2006-07-28 Thread Andrew Morgan
On Fri, 28 Jul 2006, Rich Graves wrote: My question: So is *anyone* here happy with Cyrus on ext3? We're a small site, only 3200 users, 246GB mail. I'd really rather not try anything more exotic for supportability reasons, but I'm getting worried that our planned move from Solaris 9/VxFS to

Re: High availability email server...

2006-07-28 Thread Sebastian Hagedorn
-- Rich Graves [EMAIL PROTECTED] is rumored to have mumbled on 28 July 2006 15:52:17 -0500 regarding Re: High availability email server...: My question: So is *anyone* here happy with Cyrus on ext3? Yes. We use it on a SAN with an 800 GB partition for /var/spool/imap. -- Sebastian Hagedorn

Re: High availability email server...

2006-07-28 Thread Matthew Schumacher
Andrew Morgan wrote: On Fri, 28 Jul 2006, Rich Graves wrote: My question: So is *anyone* here happy with Cyrus on ext3? We're a small site, only 3200 users, 246GB mail. I'd really rather not try anything more exotic for supportability reasons, but I'm getting worried that our planned move

High availability email server...

2006-07-27 Thread Chad A. Prey
OK...I'm searching for strategies to have a realtime email backup in the event of backend failure. We've been running cyrus-imap for about a year and a half with incredible success. Our failures have all been due to using junky storage. One idea is to have a continuous rsync of the cyrus
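A sketch of the continuous-rsync idea, assuming the usual /var/spool/imap and /var/lib/imap layout and a reachable standby host (both placeholders); run it from cron or a loop, and note that copying a live spool plus the cyrus databases this way is not crash-consistent:

    rsync -aH --delete /var/spool/imap/ standby:/var/spool/imap/
    rsync -aH --delete /var/lib/imap/   standby:/var/lib/imap/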

Re: High availability email server...

2006-07-27 Thread kbaker
Chad A. Prey wrote: OK...I'm searching for strategies to have a realtime email backup in the event of backend failure. We've been running cyrus-imap for about a year and a half with incredible success. Our failures have all been due to using junky storage. One idea is to have a continuous rsync

Re: High-Availability IMAP server

2005-09-29 Thread Wolfgang Powisch
Scott Adkins wrote: --On Monday, September 26, 2005 6:45 PM +0200 David [EMAIL PROTECTED] wrote: Hello, I have a 'pseudo' High Availability SMTP system consisting of two servers running cyrus 2.2.5. The main problem I have is that only one of the two nodes can access the mailboxes

Re: High-Availability IMAP server

2005-09-28 Thread David Carter
On Tue, 27 Sep 2005, Patrick Radtke wrote: We made great use of it Monday morning when one of our backend machines failed. Switching to the replica was quite simple and relatively fast (maybe 5 to 10 minutes from deciding to switch to the replica before replica was fully in action) We use

Re: High-Availability IMAP server

2005-09-28 Thread brad
On Tue, 2005-09-27 at 08:51 +0800, Ow Mun Heng wrote: On Mon, 2005-09-26 at 10:03 -0700, Aaron Glenn wrote: On 9/26/05, David [EMAIL PROTECTED] wrote: Is there any way to achieve this goal using cyrus? Which is the best approach to this scenario? Run daily imapsync via cron and a Load

Re: High-Availability IMAP server

2005-09-28 Thread João Assad
brad wrote: On Tue, 2005-09-27 at 08:51 +0800, Ow Mun Heng wrote: On Mon, 2005-09-26 at 10:03 -0700, Aaron Glenn wrote: On 9/26/05, David [EMAIL PROTECTED] wrote: Is there any way to achieve this goal using cyrus? Which is the best approach to this scenario? Run daily imapsync

Re: High-Availability IMAP server

2005-09-28 Thread brad
On Wed, 2005-09-28 at 09:02 +0100, David Carter wrote: We use the replication engine all the time to move users back and forth between systems so that we can patch and upgrade operating systems and/or Cyrus without any user visible downtime. I read the documentation on replication and am

Re: High-Availability IMAP server

2005-09-28 Thread brad
On Wed, 2005-09-28 at 12:41 -0300, João Assad wrote: I too am very interested in this replication solution. Where can I get the src and documentation ? Regards, João Assad This is a good start: http://www-uxsup.csx.cam.ac.uk/~dpc22/cyrus/replication.html Thanks, -- Brad

Re: High-Availability IMAP server

2005-09-28 Thread Ken Murchison
João Assad wrote: brad wrote: On Tue, 2005-09-27 at 08:51 +0800, Ow Mun Heng wrote: On Mon, 2005-09-26 at 10:03 -0700, Aaron Glenn wrote: On 9/26/05, David [EMAIL PROTECTED] wrote: Is there any way to achieve this goal using cyrus? Which is the best approach to this

Re: High-Availability IMAP server

2005-09-28 Thread Ken Murchison
David Carter wrote: On Wed, 28 Sep 2005, brad wrote: I read the documentation on replication and am interested in trying it. I have several servers that run a single domain, but are using virtdomain anyway. I would like to have one virtdomain replica server that serves as a hot spare to

Re: High-Availability IMAP server

2005-09-28 Thread brad
On Wed, 2005-09-28 at 13:45 -0400, Ken Murchison wrote: I haven't tried it, but I've done nothing to purposely break replication of virtdomains. I might give it a try and report back then. Thanks, -- Brad Crotchett, RHCE [EMAIL PROTECTED] Cyrus Home Page:

Re: High-Availability IMAP server

2005-09-27 Thread David Carter
On Mon, 26 Sep 2005, Aaron Glenn wrote: There is replication code in the 2.3 branch; though from what I can tell it hasn't been touched in a few months and makes me wonder if it's being actively developed still. Nevertheless, in my exhaustive search for any and all information on IMAP

Re: High-Availability IMAP server

2005-09-27 Thread Ken Murchison
David Carter wrote: The complication is that there doesn't appear to be anyone left at CMU to release new versions of Cyrus at the moment. Poor Jeffrey Eaton seems to be the last man standing there. My own experience of running things single handed is that it doesn't leave much time for

Re: High-Availability IMAP server

2005-09-27 Thread Patrick Radtke
We are running the replication code in production at Columbia. We made great use of it Monday morning when one of our backend machines failed. Switching to the replica was quite simple and relatively fast (maybe 5 to 10 minutes from deciding to switch to the replica before replica was

High-Availability IMAP server

2005-09-26 Thread David
Hello, I have a 'pseudo' High Availability SMTP system consisting of two servers running cyrus 2.2.5. The main problem I have is that only one of the two nodes can access the mailboxes in order to keep the integrity of the cyrus databases, even though the filesystem (GFS) has support to allow

Re: High-Availability IMAP server

2005-09-26 Thread Aaron Glenn
On 9/26/05, David [EMAIL PROTECTED] wrote: Is there any way to achieve this goal using cyrus? Which is the best approach to this scenario? Run daily imapsync via cron and have a Load Balancer forward the requests to the active one? Any help would be appreciated. There is replication code in the

Re: High-Availability IMAP server

2005-09-26 Thread Scott Adkins
--On Monday, September 26, 2005 6:45 PM +0200 David [EMAIL PROTECTED] wrote: Hello, I have a 'pseudo' High Availability SMTP system consisting of two servers running cyrus 2.2.5. The main problem I have is that only one of the two nodes can access the mailboxes in order to keep

Re: High-Availability IMAP server

2005-09-26 Thread John Madden
Is there any way to achieve this goal using cyrus? Which is the best approach to this scenario? Run daily imapsync via cron and a Load Balancer forward the requests to the active one? Here's my approach: setup heartbeat with two ethernet heartbeats, shared storage (SAN), and pray a bunch that

Re: High-Availability IMAP server

2005-09-26 Thread Ow Mun Heng
On Mon, 2005-09-26 at 10:03 -0700, Aaron Glenn wrote: On 9/26/05, David [EMAIL PROTECTED] wrote: Is there any way to achieve this goal using cyrus? Which is the best approach to this scenario? Run daily imapsync via cron and a Load Balancer forward the requests to the active one?

Re: high-availability again

2005-04-15 Thread zorg
-availability for 5000 users. I was wondering what is actually the best solution. Using murder: I don't really understand if it can help me; its purpose is for load balancing. Murder, by itself, does not give you high availability. It does give you scalability, but some people on this list seem to use

Re: high-availability again

2005-04-15 Thread zorg
Hi, could you give me just some more explanation of the part about the stage./ files used during LMTP delivery having unique filenames. So, if I understand what you are saying: if the stage./ files used during LMTP delivery are the same for all the nodes of the cluster sharing the same SAN, then there won't be

Re: high-availability again

2005-04-15 Thread Dave McMurtrie
Amos wrote: So y'all are doing active/active? What version of Cyrus? Yes. We're running 2.1.17. Thanks, Dave --- Cyrus Home Page: http://asg.web.cmu.edu/cyrus Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html

Re: high-availability again

2005-04-15 Thread Ben Carter
zorg wrote: Hi, could you give me just some more explanation of the part about the stage./ files used during LMTP delivery having unique filenames. So, if I understand what you are saying: if the stage./ files used during LMTP delivery are the same for all the nodes of the cluster sharing the same SAN, then

Re: high-availability again

2005-04-15 Thread Amos
Ben Carter wrote: When we get a chance, we're going to talk to Derrick about getting some cluster support into the std. code. That would be most impressive. I wonder how much Ken's work with 2.3 would fit in with this? --- Cyrus Home Page: http://asg.web.cmu.edu/cyrus Cyrus Wiki/FAQ:

Re: high-availability again

2005-04-15 Thread Ken Murchison
Amos wrote: Ben Carter wrote: When we get a chance, we're going to talk to Derrick about getting some cluster support into the std. code. That would be most impressive. I wonder how much Ken's work with 2.3 would fit in with this? My code in 2.3 uses the Murder code to keep local copies of

high-availability again

2005-04-14 Thread zorg
Hi, I've seen on the list a lot of discussion about availability, but none of it seems to give a complete answer. I have been asked to build high availability for 5000 users and I was wondering what is actually the best solution. Using murder: I don't really understand if it can help me; its purpose

Re: high-availability again

2005-04-14 Thread Dave McMurtrie
zorg wrote: Hi, I've seen on the list a lot of discussion about availability, but none of it seems to give a complete answer. I have been asked to build high availability for 5000 users and I was wondering what is actually the best solution. Using murder: I don't really understand if it can help me; its

Re: high-availability again

2005-04-14 Thread Amos
We're doing this. We have a 4-node Veritas cluster with all imap data residing on a SAN. Overall it's working quite well. We had to make some very minor cyrus code changes so it'd get along well with Veritas' cluster filesystem. This setup gives us high availability and scalability. What

Re: high-availability again

2005-04-14 Thread Dave McMurtrie
Amos wrote: What sort of changes did you have to make? We just had to change map_refresh() to call mmap() with MAP_PRIVATE instead of MAP_SHARED. Since mmap() is being called with PROT_READ anyway, this doesn't affect the operation of the application since the mapped region can never be
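The change described amounts to roughly the following; a self-contained sketch, not the actual Cyrus map_refresh() source, and the file name is hypothetical:

    #include <sys/mman.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Map a file read-only.  Per the thread, the mapping is PROT_READ and
         * never written through, so switching MAP_SHARED to MAP_PRIVATE does
         * not change the application's behaviour, but it got along better
         * with the Veritas cluster filesystem. */
        int fd = open("/var/lib/imap/mailboxes.db", O_RDONLY);
        if (fd < 0) return 1;
        off_t len = lseek(fd, 0, SEEK_END);
        void *base = mmap(NULL, (size_t)len, PROT_READ, MAP_PRIVATE, fd, 0);
        if (base == MAP_FAILED) { close(fd); return 1; }
        /* ... read from base as an immutable view of the file ... */
        munmap(base, (size_t)len);
        close(fd);
        return 0;
    }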

Re: high-availability again

2005-04-14 Thread Ben Carter
Dave McMurtrie wrote: Amos wrote: What sort of changes did you have to make? We just had to change map_refresh() to call mmap() with MAP_PRIVATE instead of MAP_SHARED. Since mmap() is being called with PROT_READ anyway, this doesn't affect the operation of the application since the mapped

Re: high-availability again

2005-04-14 Thread Amos
Ben Carter wrote: Actually, the important code change for any active/active cluster configuration is to make sure the stage./ files used during LMTP delivery have unique filenames across the cluster. There are some other setup differences related to this same issue such as symlinking

Re: Funding Cyrus High Availability

2004-09-20 Thread Paul Dekkers
David Carter wrote: 5. Active/Active designate one of the boxes as primary and identify all items in the datastore that absolutely must not be subject to race conditions between the two boxes (message UUID for example). In addition to implementing the replication needed for #1 modify all

Re: Funding Cyrus High Availability

2004-09-20 Thread David Carter
On Sun, 19 Sep 2004, David Lang wrote: here is the problem. you have a new message created on both servers at the same time. how do you allocate the UID without any possibility of stepping on each other? With a new UIDvalidity you can choose any ordering you like. Of course one of the two

Re: Funding Cyrus High Availability

2004-09-20 Thread David Carter
On Sun, 19 Sep 2004, David Lang wrote: assuming that the simplest method would cost ~$3000 to code I would make a wild guess that the ballpark figures would be 1. active/passive without automatic failover $3k 2. active/passive with automatic failover (limited to two nodes or within a murder

Re: Funding Cyrus High Availability

2004-09-20 Thread David Lang
On Mon, 20 Sep 2004, David Carter wrote: On Sun, 19 Sep 2004, David Lang wrote: assuming that the simplest method would cost ~$3000 to code I would make a wild guess that the ballpark figures would be 1. active/passive without automatic failover $3k 2. active/passive with automatic failover

Re: Funding Cyrus High Availability

2004-09-20 Thread David Lang
On Mon, 20 Sep 2004, Paul Dekkers wrote: David Carter wrote: 5. Active/Active designate one of the boxes as primary and identify all items in the datastore that absolutely must not be subject to race conditions between the two boxes (message UUID for example). In addition to implementing the

Re: Funding Cyrus High Availability

2004-09-20 Thread David Carter
On Mon, 20 Sep 2004, David Lang wrote: Thanks, this is exactly the type of feedback that I was hoping to get. So you are saying that #5 is more like $50k-100k and #6 goes up from there. If anyone could implement Active-Active for Cyrus from scratch in 100 to 150 hours it would be Ken, but I

RE: Funding Cyrus High Availability

2004-09-19 Thread David Lang
On Fri, 17 Sep 2004 [EMAIL PROTECTED] wrote: From: David Lang [mailto:[EMAIL PROTECTED] Mike, one of the problems with this is that different databases have different interfaces and capabilities. if you design it to work on Oracle then if you try to make it work on MySQL there are going to be

Re: Funding Cyrus High Availability

2004-09-19 Thread David Lang
There are many ways of doing High Availability. This is an attempt to outline the various methods with their advantages and disadvantages. Ken and David (and anyone else who has thoughts on this), please feel free to add to this. I'm attempting to outline them roughly in order of complexity. 1

Re: Funding Cyrus High Availability

2004-09-19 Thread David Carter
On Sun, 19 Sep 2004, David Lang wrote: 5. Active/Active designate one of the boxes as primary and identify all items in the datastore that absolutely must not be subject to race conditions between the two boxes (message UUID for example). In addition to implementing the replication needed for #1

Re: Funding Cyrus High Availability

2004-09-19 Thread David Lang
On Sun, 19 Sep 2004, David Carter wrote: On Sun, 19 Sep 2004, David Lang wrote: 5. Active/Active designate one of the boxes as primary and identify all items in the datastore that absolutely must not be subject to race conditions between the two boxes (message UUID for example). In addition to

Re: Funding Cyrus High Availability

2004-09-19 Thread Jure Pe_ar
On Sun, 19 Sep 2004 00:52:08 -0700 (PDT) David Lang [EMAIL PROTECTED] wrote: Nice review of replication ABC :) Here are my thoughts: 1. Active-Slave replication with manual failover This is really the simplest way to do it. Rsync (and friends) does 90% of the required job here; the only thing
