On Apr 22, 2015, 12:51 AM, Bron Gondwana br...@fastmail.fm wrote:
On Wed, Apr 22, 2015, at 02:27 PM, Ciro Iriarte wrote:
Interesting, is the use of several instances needed because cyrus cannot
scale with threads in a single instance scenario?
There are two interesting reasons:
1) global
On Wed, Apr 22, 2015, at 11:32 PM, Ciro Iriarte wrote:
Hi Bron, it makes sense from that perspective although it seems to
imply a management nightmare. Do you use any management/automation
(webscale if you want) framework?
Less than you might imagine :)
We have a single file
of storage
each, replicating to different machines.
I had only used a simple setup, one imap server with several spools
That works fine, but it won't give you high availability.
Is there some more information somewhere?
Not much unfortunately. We've written about our setup many times, most
On Wed, Apr 22, 2015, at 02:27 PM, Ciro Iriarte wrote:
Interesting, is the use of several instances needed because cyrus
cannot scale with threads in a single instance scenario?
There are two interesting reasons:
1) global locks. There are some - mailboxes.db for example. If you have
, one imap server with several spools
That works fine, but it won't give you high availability.
Is there some more information somewhere?
Not much unfortunately. We've written about our setup many times, most
recently here:
http://blog.fastmail.com/2014/12/04/standalone-mail-servers/
and more
Hi,
We used cyrus for many years and switched to a proprietary system. We are
just looking back to cyrus.
I would like to know the status of cyrus and HA:
This documentation seems to consider that replication is edge..
http://cyrusimap.org/docs/cyrus-imapd/2.4.9/install-replication.php
and it has
On Monday, 20 April 2015 08:32:52, Lalot Dominique wrote:
I would like to know the status of cyrus and HA:
This documentation seems to consider that replication is edge..
http://cyrusimap.org/docs/cyrus-imapd/2.4.9/install-replication.php
and it has been written in 2007
Cyrus is a product
On Mon, Apr 20, 2015, at 04:32 PM, Lalot Dominique wrote:
Hi,
We used cyrus for many years and switched to a proprietary system. We
are just looking back to cyrus. I would like to know the status of
cyrus and HA: This documentation seems to consider that replication is
edge..
On Wed, 29 Sep 2010, Tomasz Chmielewski wrote:
Hmm - I added this to imapd.conf:
annotation_db: skiplist
duplicate_db: skiplist
mboxlist_db: skiplist
ptscache_db: skiplist
quota_db: skiplist
seenstate_db: skiplist
tlscache_db: skiplist
When starting cyrus, I have this:
Sep 29
On 28 Sept 2010, at 08:50, Tomasz Chmielewski wrote:
Sep 28 01:10:10 omega cyrus/ctl_cyrusdb[21728]: DBERROR db4: Program version
4.2 doesn't match environment version
Are you sure on each node the _SAME_ Cyrus version linked to the _SAME_ bdb
libs is running?
And - just a little side
--On 28. September 2010 08:50:00 +0200 Tomasz Chmielewski man...@wpkg.org
wrote:
How do you manage your Cyrus installations highly-available?
Check the archives. There have been many discussions regarding this.
I thought a minimal example could be like below:
internet
|
On 28.09.2010 09:13, Pascal Gienger wrote:
On 28 Sept 2010, at 08:50, Tomasz Chmielewski wrote:
Sep 28 01:10:10 omega cyrus/ctl_cyrusdb[21728]: DBERROR db4: Program version
4.2 doesn't match environment version
Are you sure on each node the _SAME_ Cyrus version linked to the _SAME_ bdb
Quoting Tomasz Chmielewski man...@wpkg.org:
How do you manage your Cyrus installations highly-available?
I thought a minimal example could be like below:
internet
|
server1 - server2
There would be Heartbeat/Pacemaker running on both servers. Its role
would be:
-
On 28.09.2010 10:56, Michael Menge wrote:
Cyrus depends on locks and mmap, so your fs must support them.
I had written a summary of the discussions about Cyrus and HA in the
old wiki. But the wiki was replaced by the new wiki. I will have a look
if I have a copy.
I would be grateful.
If
Quoting Tomasz Chmielewski man...@wpkg.org:
On 28.09.2010 10:56, Michael Menge wrote:
Cyrus depends on locks and mmap, so your fs must support them.
I had written a summary of the discussions about Cyrus and HA in the
old wiki. But the wiki was replaced by the new wiki. I will have a look
if
On 28.09.2010 11:55, Michael Menge wrote:
Replication was introduced in 2.3.x. There are other features in 2.3.x
I don't want to live without (e.g. delayed expunge). There was a
discussion on the lists about Debian wanting to upgrade cyrus.
The main problem is the upgrade path (update of
On Tue, Sep 28, 2010 at 12:13:14PM +0200, Tomasz Chmielewski wrote:
On 28.09.2010 11:55, Michael Menge wrote:
Replication was introduced in 2.3.x. There are other features in 2.3.x
I don't want to live without (e.g. delayed expunge). There was a
discussion on the lists about Debian
Still, we need to have Cyrus database, mail storage accessible for
both servers. I thought using glusterfs for it would be a good idea
(assuming Cyrus only runs on one of the servers at a given time).
IMO, don't use glusterfs for this. I found it to not even be sufficient
for a PHP session
On 28.09.2010 15:01, John Madden wrote:
Still, we need to have Cyrus database, mail storage accessible for
both servers. I thought using glusterfs for it would be a good idea
(assuming Cyrus only runs on one of the servers at a given time).
IMO, don't use glusterfs for this. I found it to not
Any other suggestions? There is an alternatives like Ceph[1], but it is
just too new (and potentially can have some edge cases).
DRBD + GFS/OCFS2 just seem too complex for such setup.
If you're doing failover, you don't need a cluster filesystem. You can
use just plain DRBD+ext4 if you
Quoting Tomasz Chmielewski man...@wpkg.org:
On 28.09.2010 15:01, John Madden wrote:
Still, we need to have Cyrus database, mail storage accessible for
both servers. I thought using glusterfs for it would be a good idea
(assuming Cyrus only runs on one of the servers at a given time).
IMO,
Hello
AFAIK, cyrus needs posix file locks and mmap support.
GlusterFS needs FUSE and it only supports writable mmap files after kernel
2.6.26 or so.
Therefore, you need a recent kernel and recent FUSE.
Further, you need to fine-tune your configuration extremely carefully, as the most robust
clustered
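The two requirements above (POSIX file locks and writable mmap) can be probed directly before committing a filesystem to Cyrus duty. A minimal sketch in Python, assuming you run it on the candidate mount point; the `probe_fs` helper and its result keys are illustrative, not part of Cyrus:

```python
import fcntl
import mmap
import os
import tempfile

def probe_fs(directory):
    """Check whether `directory`'s filesystem supports the two primitives
    Cyrus relies on: POSIX advisory locking and writable shared mmap."""
    results = {}
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, b"\0" * 4096)
        # 1. Advisory locking; network/cluster filesystems often fail here.
        try:
            fcntl.flock(fd, fcntl.LOCK_EX)
            fcntl.flock(fd, fcntl.LOCK_UN)
            results["flock"] = True
        except OSError:
            results["flock"] = False
        # 2. Writable shared mmap; FUSE filesystems only gained this with
        # relatively recent kernels, as the post above notes.
        try:
            with mmap.mmap(fd, 4096, prot=mmap.PROT_READ | mmap.PROT_WRITE) as m:
                m[0:5] = b"cyrus"
            results["mmap"] = True
        except (OSError, ValueError):
            results["mmap"] = False
    finally:
        os.close(fd)
        os.unlink(path)
    return results
```

On a local ext4/XFS mount both probes should succeed; on an NFS or FUSE mount either can fail, which is exactly the situation the post warns about.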
On 28.09.2010 12:55, Bron Gondwana wrote:
On Tue, Sep 28, 2010 at 12:13:14PM +0200, Tomasz Chmielewski wrote:
On 28.09.2010 11:55, Michael Menge wrote:
Replication was introduced in 2.3.x. There are other features in 2.3.x
I don't want to live without (e.g. delayed expunge). There was a
Greetings all,
I've spent a good deal of time searching the Info-Cyrus archives
(and various googled articles) to identify the recommended ways to improve
Cyrus availability and reduce disaster recovery time. The two main approaches
appear to be Cyrus replication and file system
- We have three Cyrus servers, each with a single large mailstore.
Would there be a significant advantage to splitting them into multiple
smaller mailstores? We’re using Perdition but not Murder / Aggregator.
Murder rocks, IMO, well worth the learning curve of the setup. If
you're going
Quoting Simpson, John R john_simp...@reyrey.com:
Greetings all,
I've spent a good deal of time searching the Info-Cyrus
archives (and various googled articles) to identify the recommended
ways to improve Cyrus availability and reduce disaster recovery
time. The two main
-Original Message-
From: John Madden [mailto:jmad...@ivytech.edu]
Sent: Monday, March 15, 2010 2:07 PM
To: Simpson, John R
Cc: info-cyrus@lists.andrew.cmu.edu
Subject: Re: High Availability approaches for Cyrus
- We have three Cyrus servers, each with a single large mailstore
Well, as far as I know, the mailboxes.db and other databases are only
opened and modified by the master process. But I'm not sure here.
But as your assumption sounds correct and because this seems to work
with a cluster (and I fully believe you here, no question), your
assumption regarding the
Hi Scott!
Your statements cannot be correct, for logical reasons.
While on file locking level you are fully right, cyrus heavily depends
on critical database access where you need application level database
locking.
As only one master process can lock the database, a second one either
cannot
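The database-locking point above can be seen in miniature with POSIX advisory locks: while one process (the "master") holds an exclusive lock on a file, a second open of the same file cannot take it without waiting. A small Python sketch; the temp file merely stands in for a Cyrus database, nothing here is Cyrus code:

```python
import fcntl
import os
import tempfile

# A stand-in for a shared Cyrus database file (hypothetical).
fd, path = tempfile.mkstemp()

# The first "master" takes an exclusive advisory lock.
first = os.open(path, os.O_RDWR)
fcntl.flock(first, fcntl.LOCK_EX)

# A second opener (e.g. a second Cyrus instance on shared storage)
# cannot acquire the same lock without blocking.
second = os.open(path, os.O_RDWR)
try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
    blocked = False
except BlockingIOError:
    blocked = True  # the second instance would have to wait

fcntl.flock(first, fcntl.LOCK_UN)
for f in (fd, first, second):
    os.close(f)
os.unlink(path)
```

This is why two active instances on the same spool serialize on (or corrupt) the databases: application-level locking assumes a single lock holder at a time.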
Daniel Eckl wrote:
Hi Scott!
Your statements cannot be correct, for logical reasons.
While on file locking level you are fully right, cyrus heavily depends
on critical database access where you need application level database
locking.
As only one master process can lock the database, a
Well, I don't have cluster knowledge, and so of course I simply believe
you that a good cluster system will never have file locking problems.
I already stated this below!
But how will the cluster affect application level database locking? That
was my primary question and you didn't name this
On Tue, 1 Aug 2006, Daniel Eckl wrote:
Well, I don't have cluster knowledge, and so of course I simply believe you
that a good cluster system will never have file locking problems.
I already stated this below!
But how will the cluster affect application level database locking? That was
my
Michael--
One of the major problems you'd run into is /var/lib/imap, the config
directory. It contains, among other things, a Berkeley DB of
information about the mail store. GFS, Lustre, and other cluster
filesystems do file-level locking; in order to properly read and write
to the BDB
At 11:49 PM +0200 7/28/06, Pascal Gienger wrote:
In the Apple case we need to distinguish Apple XSAN Harddisk chassis
and the XSAN software. The XSAN software seem to give you a special
filesystem for SAN issues (at least I read this on their webpage).
Let me dissect this a bit.
The Xserve
At 4:18 PM -0400 7/28/06, John Madden wrote:
Sorry, please bear with my ignorance, I'm not very informed about NFS,
but what's wrong with locking against a real block device?
NFS is a file sharing protocol that doesn't provide full locking
semantics the way block devices do.
Has Cyrus
On Fri, 2006-07-28 at 15:33 -0700, Andrew Morgan wrote:
On Fri, 28 Jul 2006, Rich Graves wrote:
My question: So is *anyone* here happy with Cyrus on ext3? We're a small
site, only 3200 users, 246GB mail. I'd really rather not try anything more
exotic for supportability reasons, but I'm
On Mon, 31 Jul 2006, Wil Cooley wrote:
How big is your journal? I have instructions for determining the size
here, because it's non-obvious:
http://nakedape.cc/wiki/PlatformNotes_2fLinuxNotes
(BTW, you can drop the 'defaults' from the entry in your fstab;
'defaults' exists to fill the column
On Mon, 31 Jul 2006, Wil Cooley wrote:
Well, 32MB is small for a write-heavy filesystem. But if you're not
seeing any problems with kjournald stalling while it flushes, then it
might not be worth the trouble of re-creating the journal as a larger
size. It's unlikely to hurt anything, but I
On Mon, 2006-07-31 at 15:40 -0700, Andrew Morgan wrote:
On Mon, 31 Jul 2006, Wil Cooley wrote:
How big is your journal? I have instructions for determining the size
here, because it's non-obvious:
http://nakedape.cc/wiki/PlatformNotes_2fLinuxNotes
(BTW, you can drop the 'defaults'
Kinda surprising, but it DOES have something to do with Cyrus. Caspur
did their case study on cluster filesystems with their e-mail environment.
It used Cyrus IMAP and some kind of SMTP (I think it was Postfix or
Their paper talks about Maildir. If you connect to mailbox.caspur.it:993
you'll
Hi,
Quoting Pascal Gienger [EMAIL PROTECTED]:
I would NEVER suggest to mount the cyrus mail spool via NFS, locking is
important and for these crucial things I like to have a real block
device with a real filesystem, so SANs are ok to me.
does someone use lustre as cyrus mail spool? Would
Hi Michael!
As already said in this thread: Cyrus cannot share its spool.
No two cyrus instances can use the same spool, databases, and lockfiles.
For load balancing you can use a murder setup and for HA you can use
replication.
Best,
Daniel
Michael Menge wrote:
Hi,
Quoting Pascal Gienger
Pascal Gienger wrote:
David Korpiewski [EMAIL PROTECTED] wrote:
I spent about 6 months fighting with Apple XSAN and Apple OSX mail to try
to create a redundant cyrus mail cluster. First of all, don't try it, it
is a waste of time. Apple states that mail on an XSAN is not supported.
The
Chad--
We've put /var/lib/imap and /var/spool/imap on a SAN and have two
machines -- one active, and one hot backup. If the active server
fails, the other mounts the storage and takes over. This is not yet
in production, but it's a pretty simple setup and can be done without
running any
I spent about 6 months fighting with Apple XSAN and Apple OSX mail to
try to create a redundant cyrus mail cluster. First of all, don't try
it, it is a waste of time. Apple states that mail on an XSAN is not
supported. The reason is that it simply won't run. The Xsan can't
handle the
David Korpiewski wrote:
I spent about 6 months fighting with Apple XSAN and Apple OSX mail to
try to create a redundant cyrus mail cluster. First of all, don't try
it, it is a waste of time. Apple states that mail on an XSAN is not
supported. The reason is that it simply won't run. The
Chris St. Pierre wrote:
We've put /var/lib/imap and /var/spool/imap on a SAN and have two
machines -- one active, and one hot backup. If the active server
fails, the other mounts the storage and takes over. This is not yet
Also consider /var/spool/{mqueue,clientmqueue,postfix}.
Depending on
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On behalf
of Chris St. Pierre
Sent: Friday, 28 July 2006 15:12
To: Chad A. Prey
Cc: info-cyrus@lists.andrew.cmu.edu
Subject: Re: High availability email
Pascal Gienger wrote:
There are techniques to handle these situations - for xfs (as an
example) consider having *MUCH* RAM in your machine and always mount it
with logbufs=8.
Is XFS so RAM intensive?
I would NEVER suggest to mount the cyrus mail spool via NFS, locking is
important and for
for that, even if
they are pretty CPU and I/O intensive (I use it for multimedia sharing -
a lot of images that need to be shared across 4 nodes running
Apache to serve them).
GFS is a *cluster* filesystem. We're talking about high availability
here, not clustering. GFS could certainly be used
On Jul 28, 2006, at 1:40 PM, Pascal Gienger wrote:
So if Apple says that Xsan does not handle many files they admit
that their HFS+ file system is crap for many small files.
This is completely untrue. Xsan, although branded by Apple, is not
completely an Apple product. ADIC makes
Clustered filesystems don't make any sense for Cyrus, since the
application itself doesn't allow simultaneous read/write. Just use a
normal journaling filesystem and fail over by mounting the FS on the
backup server. Consider replication such as DRBD or proprietary SAN
replication if you feel
Rich Graves wrote:
Clustered filesystems don't make any sense for Cyrus, since the
application itself doesn't allow simultaneous read/write. Just use a
normal journaling filesystem and fail over by mounting the FS on the
backup server. Consider replication such as DRBD or proprietary SAN
David S. Madole [EMAIL PROTECTED] wrote:
That's just not true as a general statement. SAN is a broad term that
applies to much more than just farming out block devices. Some of the
more sophisticated SANs are filesystem-based, not block-based. This
allows them to implement more advanced
Fabio Corazza wrote:
Rich Graves wrote:
Clustered filesystems don't make any sense for Cyrus, since the
application itself doesn't allow simultaneous read/write. Just use a
normal journaling filesystem and fail over by mounting the FS on the
backup server. Consider replication such as DRBD or
On Fri, 28 Jul 2006, Rich Graves wrote:
My question: So is *anyone* here happy with Cyrus on ext3? We're a small
site, only 3200 users, 246GB mail. I'd really rather not try anything more
exotic for supportability reasons, but I'm getting worried that our planned
move from Solaris 9/VxFS to
-- Rich Graves [EMAIL PROTECTED] is rumored to have mumbled on 28.
July 2006 15:52:17 -0500 regarding Re: High availability email server...:
My question: So is *anyone* here happy with Cyrus on ext3?
Yes. We use it on a SAN with a 800 GB partition for /var/spool/imap.
--
Sebastian Hagedorn
Andrew Morgan wrote:
On Fri, 28 Jul 2006, Rich Graves wrote:
My question: So is *anyone* here happy with Cyrus on ext3? We're a
small site, only 3200 users, 246GB mail. I'd really rather not try
anything more exotic for supportability reasons, but I'm getting
worried that our planned move
OK...I'm searching for strategies to have a realtime email backup in
the event of backend failure. We've been running cyrus-imap for about a
year and a half with incredible success. Our failures have all been due
to using junky storage.
One idea is to have a continuous rsync of the cyrus
Chad A. Prey wrote:
OK...I'm searching for strategies to have a realtime email backup in
the event of backend failure. We've been running cyrus-imap for about a
year and a half with incredible success. Our failures have all been due
to using junky storage.
One idea is to have a continuous rsync
Scott Adkins wrote:
--On Monday, September 26, 2005 6:45 PM +0200 David
[EMAIL PROTECTED] wrote:
Hello,
I have a 'pseudo' High Availability SMTP system consisting of two servers
running cyrus 2.2.5.
The main problem I have is that only one of the two nodes can access the
mailboxes
On Tue, 27 Sep 2005, Patrick Radtke wrote:
We made great use of it Monday morning when one of our backend machines
failed. Switching to the replica was quite simple and relatively fast
(maybe 5 to 10 minutes from deciding to switch to the replica before the
replica was fully in action)
We use
On Tue, 2005-09-27 at 08:51 +0800, Ow Mun Heng wrote:
On Mon, 2005-09-26 at 10:03 -0700, Aaron Glenn wrote:
On 9/26/05, David [EMAIL PROTECTED] wrote:
Is there any way to achieve this goal using cyrus? Which is the best
approach
to this scenario? Run daily imapsync via cron and a Load
brad wrote:
On Tue, 2005-09-27 at 08:51 +0800, Ow Mun Heng wrote:
On Mon, 2005-09-26 at 10:03 -0700, Aaron Glenn wrote:
On 9/26/05, David [EMAIL PROTECTED] wrote:
Is there any way to achieve this goal using cyrus? Which is the best approach
to this scenario? Run daily imapsync
On Wed, 2005-09-28 at 09:02 +0100, David Carter wrote:
We use the replication engine all the time to move users back and forth
between systems so that we can patch and upgrade operating systems and/or
Cyrus without any user visible downtime.
I read the documentation on replication and am
On Wed, 2005-09-28 at 12:41 -0300, João Assad wrote:
I too am very interested in this replication solution. Where can I get
the src and documentation ?
Regards,
João Assad
This is a good start:
http://www-uxsup.csx.cam.ac.uk/~dpc22/cyrus/replication.html
Thanks,
--
Brad
João Assad wrote:
brad wrote:
On Tue, 2005-09-27 at 08:51 +0800, Ow Mun Heng wrote:
On Mon, 2005-09-26 at 10:03 -0700, Aaron Glenn wrote:
On 9/26/05, David [EMAIL PROTECTED] wrote:
Is there any way to achieve this goal using cyrus? Which is the
best approach
to this
David Carter wrote:
On Wed, 28 Sep 2005, brad wrote:
I read the documentation on replication and am interested in trying
it. I have several servers that run a single domain, but are using
virtdomain anyway. I would like to have one virtdomain replica server
that serves as a hot spare to
On Wed, 2005-09-28 at 13:45 -0400, Ken Murchison wrote:
I haven't tried it, but I've done nothing to purposely break replication
of virtdomains.
I might give it a try and report back then.
Thanks,
--
Brad Crotchett, RHCE
[EMAIL PROTECTED]
On Mon, 26 Sep 2005, Aaron Glenn wrote:
There is replication code in the 2.3 branch; though from what I can tell
it hasn't been touched in a few months, which makes me wonder if it's
still being actively developed. Nevertheless, in my exhaustive search for any
and all information on IMAP
David Carter wrote:
The complication is that there doesn't appear to be anyone left at CMU
to release new versions of Cyrus at the moment. Poor Jeffrey Eaton seems
to be the last man standing there. My own experience of running things
single handed is that it doesn't leave much time for
We are running the replication code in production at Columbia.
We made great use of it Monday morning when one of our backend
machines failed.
Switching to the replica was quite simple and relatively fast (maybe
5 to 10 minutes from deciding to switch to the replica before the replica
was
Hello,
I have a 'pseudo' High Availability SMTP system consisting of two servers
running cyrus 2.2.5.
The main problem I have is that only one of the two nodes can access the
mailboxes in order to keep the integrity of the cyrus databases, even though the
filesystem (GFS) has support to allow
On 9/26/05, David [EMAIL PROTECTED] wrote:
Is there any way to achieve this goal using cyrus? Which is the best approach
to this scenario? Run daily imapsync via cron and have a Load Balancer forward
the requests to the active one?
Any help would be appreciated.
There is replication code in the
--On Monday, September 26, 2005 6:45 PM +0200 David [EMAIL PROTECTED] wrote:
Hello,
I have a 'pseudo' High Availability SMTP system consisting of two servers
running cyrus 2.2.5.
The main problem I have is that only one of the two nodes can access the
mailboxes in order to keep
Is there any way to achieve this goal using cyrus? Which is the best approach
to this scenario? Run daily imapsync via cron and have a Load Balancer forward
the requests to the active one?
Here's my approach: set up heartbeat with two ethernet heartbeats, shared storage
(SAN), and pray a bunch that
On Mon, 2005-09-26 at 10:03 -0700, Aaron Glenn wrote:
On 9/26/05, David [EMAIL PROTECTED] wrote:
Is there any way to achieve this goal using cyrus? Which is the best
approach
to this scenario? Run daily imapsync via cron and a Load Balancer forward
the
requests to the active one?
-availability for 5000 users
I was wondering what is actually the best solution
Using murder: I don't really understand if it can help me. Its
purpose is for load balancing.
Murder, by itself does not give you high availability. It does give
you scalability.
but some people on this list seem to use
Hi, could you give me just some more explanation of what "the stage./
files used during LMTP delivery have unique filenames" means?
So if I understand what you are saying: if the stage./ files used during
LMTP delivery are the same for all the nodes of the cluster sharing the same
SAN, then there won't be
Amos wrote:
So y'all are doing active/active? What version of Cyrus?
Yes. We're running 2.1.17.
Thanks,
Dave
---
Cyrus Home Page: http://asg.web.cmu.edu/cyrus
Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
zorg wrote:
Hi, could you give me just some more explanation of what "the stage./
files used during LMTP delivery have unique filenames" means?
So if I understand what you are saying: if the stage./ files used during
LMTP delivery are the same for all the nodes of the cluster sharing the
same SAN, then
Ben Carter wrote:
When we get a chance, we're going to talk to Derrick about getting some
cluster support into the std. code.
That would be most impressive. I wonder how much Ken's work with 2.3
would fit in with this?
Amos wrote:
Ben Carter wrote:
When we get a chance, we're going to talk to Derrick about getting
some cluster support into the std. code.
That would be most impressive. I wonder how much Ken's work with 2.3
would fit in with this?
My code in 2.3 uses the Murder code to keep local copies of
Hi
I've seen on the list a lot of discussion about availability, but none of it
seems to give a complete answer.
I have been asked to build high availability for 5000 users.
I was wondering what is actually the best solution.
Using murder:
I don't really understand if it can help me. Its purpose
zorg wrote:
Hi
I've seen on the list a lot of discussion about availability, but none of it
seems to give a complete answer.
I have been asked to build high availability for 5000 users.
I was wondering what is actually the best solution.
Using murder: I don't really understand if it can help me. Its
We're doing this. We have a 4-node Veritas cluster with all imap data
residing on a SAN. Overall it's working quite well. We had to make
some very minor cyrus code changes so it'd get along well with Veritas'
cluster filesystem. This setup gives us high availability and scalability.
What
Amos wrote:
What sort of changes did you have to make?
We just had to change map_refresh() to call mmap() with MAP_PRIVATE
instead of MAP_SHARED. Since mmap() is being called with PROT_READ
anyway, this doesn't affect the operation of the application since the
mapped region can never be
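The map_refresh() change described above is to Cyrus's C source; the MAP_PRIVATE semantics it relies on (a copy-on-write mapping whose writes never reach the underlying file) can be illustrated with Python's mmap module, where ACCESS_COPY plays the same role:

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"original")
os.fsync(fd)

# ACCESS_COPY maps the file copy-on-write, like mmap(..., MAP_PRIVATE):
# the mapping is writable, but modifications stay private to this process.
with mmap.mmap(fd, 8, access=mmap.ACCESS_COPY) as m:
    m[0:8] = b"modified"
    private_view = bytes(m)

with open(path, "rb") as f:
    on_disk = f.read()

os.close(fd)
os.unlink(path)
```

The mapping sees "modified" while the file still contains "original", which is why a read-only consumer mapped PROT_READ behaves identically under either flag.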
Dave McMurtrie wrote:
Amos wrote:
What sort of changes did you have to make?
We just had to change map_refresh() to call mmap() with MAP_PRIVATE
instead of MAP_SHARED. Since mmap() is being called with PROT_READ
anyway, this doesn't affect the operation of the application since the
mapped
Ben Carter wrote:
Actually, the important code change for any active/active cluster
configuration is to make sure the stage./ files used during LMTP
delivery have unique filenames across the cluster.
There are some other setup differences related to this same issue such
as symlinking
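The thread does not show Cyrus's actual stage./ naming scheme, but the idea of cluster-unique staging filenames can be sketched like this; the field layout (timestamp, pid, counter, hostname) is an illustrative assumption:

```python
import itertools
import os
import socket
import time

_counter = itertools.count()

def stage_filename(hostname=None):
    """Build a delivery staging filename that cannot collide across
    cluster nodes: timestamp, pid, and a counter disambiguate deliveries
    within one host, and the hostname disambiguates between hosts that
    share one SAN filesystem. (Illustrative scheme; the real Cyrus
    format may differ.)"""
    hostname = hostname or socket.gethostname()
    return "%d-%d-%d.%s" % (int(time.time()), os.getpid(),
                            next(_counter), hostname)

# Two nodes staging at the same instant still get distinct names:
a = stage_filename("node1")
b = stage_filename("node2")
```

Without the hostname component, two nodes with coincidentally equal pids delivering in the same second could write to the same stage file, which is exactly the active/active hazard being discussed.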
David Carter wrote:
5. Active/Active
designate one of the boxes as primary and identify all items in the
datastore that absolutely must not be subject to race conditions
between the two boxes (message UUID for example). In addition to
implementing the replication needed for #1 modify all
On Sun, 19 Sep 2004, David Lang wrote:
here is the problem.
you have a new message created on both servers at the same time. how do you
allocate the UID without any possibility of stepping on each other?
With a new UIDvalidity you can choose any ordering you like. Of course one
of the two
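The race being described is easy to reproduce in miniature: if each node independently computes "last UID + 1" from its own snapshot of shared state, simultaneous deliveries collide. A toy sketch (not Cyrus code):

```python
# Naive UID allocation: each node reads the mailbox's last-seen UID and
# increments it locally. Run "simultaneously" on two nodes, both hand
# out the same UID -- the race the thread is describing.
last_uid_on_disk = 41

def naive_next_uid(snapshot):
    # Each node works from its own stale snapshot of the shared counter;
    # nothing coordinates the increment.
    return snapshot + 1

uid_on_node_a = naive_next_uid(last_uid_on_disk)
uid_on_node_b = naive_next_uid(last_uid_on_disk)
collision = uid_on_node_a == uid_on_node_b
```

Avoiding the collision requires either coordination (a shared lock around the increment) or partitioning (per-node UID ranges), and the UIDVALIDITY-bump approach mentioned above is a way to recover ordering after the fact.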
On Sun, 19 Sep 2004, David Lang wrote:
assuming that the simplest method would cost ~$3000 to code I would make a
wild guess that the ballpark figures would be
1. active/passive without automatic failover $3k
2. active/passive with automatic failover (limited to two nodes or within a
murder
On Mon, 20 Sep 2004, David Carter wrote:
On Sun, 19 Sep 2004, David Lang wrote:
assuming that the simplest method would cost ~$3000 to code I would make a
wild guess that the ballpark figures would be
1. active/passive without automatic failover $3k
2. active/passive with automatic failover
On Mon, 20 Sep 2004, Paul Dekkers wrote:
David Carter wrote:
5. Active/Active
designate one of the boxes as primary and identify all items in the
datastore that absolutely must not be subject to race conditions between
the two boxes (message UUID for example). In addition to implementing
the
On Mon, 20 Sep 2004, David Lang wrote:
Thanks, this is exactly the type of feedback that I was hoping to get.
so you are saying that #5 is more like $50k-100k and #6 goes up from
there
If anyone could implement Active-Active for Cyrus from scratch in 100 to
150 hours it would be Ken, but I
On Fri, 17 Sep 2004 [EMAIL PROTECTED] wrote:
From: David Lang [mailto:[EMAIL PROTECTED]
Mike, one of the problems with this is that different databases have
different interfaces and capabilities.
if you design it to work on Oracle then if you try to make it work on
MySQL there are going to be
There are many ways of doing High Availability. This is an attempt to
outline the various methods with the advantages and disadvantages. Ken and
David (and anyone else who has thoughts on this) please feel free to add to
this. I'm attempting to outline them roughly in order of complexity.
1
On Sun, 19 Sep 2004, David Lang wrote:
5. Active/Active
designate one of the boxes as primary and identify all items in the
datastore that absolutely must not be subject to race conditions between
the two boxes (message UUID for example). In addition to implementing
the replication needed for #1
On Sun, 19 Sep 2004, David Carter wrote:
On Sun, 19 Sep 2004, David Lang wrote:
5. Active/Active
designate one of the boxes as primary and identify all items in the
datastore that absolutely must not be subject to race conditions between
the two boxes (message UUID for example). In addition to
On Sun, 19 Sep 2004 00:52:08 -0700 (PDT)
David Lang [EMAIL PROTECTED] wrote:
Nice review of replication ABC :)
Here are my thoughts:
1. Active-Slave replication with manual failover
This is really the simplest way to do it. Rsync (and friends) does 90% of
the required job here; the only thing
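As a rough illustration of the "rsync does 90% of the job" point, a one-way full mirror of a spool directory can be sketched with the standard library; a real deployment would use rsync itself for incremental transfer, and the `mirror()` helper and paths here are illustrative:

```python
import os
import shutil
import tempfile

def mirror(master, replica):
    """Naively mirror `master` onto `replica` (one-way, full copy).
    A stand-in for `rsync -a --delete master/ replica/`; unlike rsync
    it recopies everything instead of transferring only deltas."""
    if os.path.exists(replica):
        shutil.rmtree(replica)
    shutil.copytree(master, replica)

# Tiny demonstration with a throwaway "spool":
master = tempfile.mkdtemp()
replica = os.path.join(tempfile.mkdtemp(), "replica")
os.makedirs(os.path.join(master, "user", "brad"))
with open(os.path.join(master, "user", "brad", "1."), "w") as f:
    f.write("Subject: test\r\n\r\nbody\r\n")

mirror(master, replica)
```

The remaining 10% - consistent snapshots of the Cyrus databases, seen state, and cutover of the service address - is what the manual-failover step has to cover.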