Re: Replication : IMAP_PROTOCOL_ERROR Protocol error

2020-10-01 Thread Jean Charles Delépine
.conversations and .counters files don't help. > > Can I, and how can I, get rid of those conversation indexes in order to have > my mailboxes be "as if there had never been conversations"? After removing the "conversations: 1" option AND restarting the server, ctl_conve

Re: Replication : IMAP_PROTOCOL_ERROR Protocol error

2020-09-26 Thread Jean Charles Delépine
think you'll probably want to upgrade your > 3.0 systems in place as far forward as you can (while staying 3.0), and then > use the replication strategy to upgrade to 3.2 after that. I just did that. My test server is now using 3.0.14 (self-built Debian package) : >1601118406>APPL

Re: Replication : IMAP_PROTOCOL_ERROR Protocol error

2020-09-24 Thread ellie timoney
ace upgrade ought to be safe -- but please check the release notes carefully for extra steps/considerations you may need to make, depending on your deployment. I think you'll probably want to upgrade your 3.0 systems in place as far forward as you can (while staying 3.0), and then use the replicat

Re: Replication : IMAP_PROTOCOL_ERROR Protocol error

2020-09-24 Thread Jean Charles Delépine
Jean Charles Delépine écrivait (wrote) : > Hello, > > I'm on the way to migrate one quite big murder config with Cyrus IMAP > 3.0.8-Debian-3.0.8-6+deb10u4 > to Cyrus IMAP 3.2.3-Debian-3.2.3-1~bpo10+1. > > My plan is to replicate 3.0.8's backends on 3.2.3 ones. This plan has worked > before for

Replication : IMAP_PROTOCOL_ERROR Protocol error

2020-09-23 Thread Jean Charles Delépine
Hello, I'm on the way to migrate one quite big murder config with Cyrus IMAP 3.0.8-Debian-3.0.8-6+deb10u4 to Cyrus IMAP 3.2.3-Debian-3.2.3-1~bpo10+1. My plan is to replicate 3.0.8's backends on 3.2.3 ones. This plan has worked before, for the 2.5 to 3.0 migration. I can replicate empty

Question about replication, split-brains and unclean failover

2020-06-17 Thread ego...@sarenet.es
Hi! I was writing some code for automating server fail-over and was trying to see how I should or could handle sync_client log files that were never run. Well, with a clean shutdown it's pretty easy to know how to manage it, because replication is up to date… so almost no problem
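Unprocessed rolling-replication log files can, in principle, be replayed afterwards with sync_client's log-file mode. A dry-run sketch that only prints the replay commands; the sync directory path is an assumption (check the sync/ directory under your configdirectory), and nothing here touches a real server:

```shell
# Dry-run sketch: print a replay command (sync_client -f) for each sync
# log file left unprocessed after an unclean failover.
# /var/lib/imap/sync is a guessed default; adjust for your deployment.
SYNCDIR=${1:-/var/lib/imap/sync}
for f in "$SYNCDIR"/log-*; do
  [ -e "$f" ] || continue
  echo "sync_client -v -f $f"
done
```

Running the printed commands (as the cyrus user, with rolling replication stopped) would replay each leftover log against the replica.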

Re: Replication and Deleted Files

2020-06-04 Thread Ian Batten via Info-cyrus
On Thu 04 Jun 2020 at 18:57:37, Michael Menge (michael.me...@zdv.uni-tuebingen.de) wrote: > you also need to run cyr_expire on the "new_server" to remove the old expunged mails and deleted folders. Obvious when you try it!    Thanks so much.   Expired 23 and expunged 7617 out of 289060
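For context, cyr_expire is normally scheduled from the EVENTS section of cyrus.conf; a minimal sketch, with illustrative retention periods that are not taken from the thread:

```
EVENTS {
  # run daily at 04:00: -E prunes duplicate-delivery entries, -X removes
  # messages expunged more than N days ago, -D removes folders deleted
  # more than N days ago (the day counts here are examples only)
  delprune cmd="cyr_expire -E 3 -X 7 -D 7" at=0400
}
```

On a migration target you can also run cyr_expire once by hand with the same flags instead of waiting for the scheduled event.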

Re: Replication and Deleted Files

2020-06-04 Thread Michael Menge
Hi, Quoting Ian Batten via Info-cyrus : Hi, long-time Cyrus user (25 years, I think), but stumped on this one… I have an ancient Cyrus 2.5.11 on Solaris 11 installation I am trying to migrate off.  The strategy is to run rolling replication onto the new server (3.0.8-6+deb10u4 on Debian

Replication and Deleted Files

2020-06-04 Thread Ian Batten via Info-cyrus
Hi, long-time Cyrus user (25 years, I think), but stumped on this one… I have an ancient Cyrus 2.5.11 on Solaris 11 installation I am trying to migrate off.  The strategy is to run rolling replication onto the new server (3.0.8-6+deb10u4 on Debian 10.4), and then point the DNS record

Re: Replication failed 3.0.5 -> 3.0.13

2020-04-29 Thread Olaf Frączyk
a hardware entropy generator, e.g. on a PCIe card. They aren't super cheap but also not too expensive. I think someone with good cryptography knowledge could help you with this topic. I suppose the storage I/O can be a bigger issue here. I've also noticed that the replication documentation is a bit

Re: Replication failed 3.0.5 -> 3.0.13

2020-04-29 Thread Andrzej Kwiatkowski
On 22.04.2020 at 10:19, Olaf Frączyk wrote: > On 2020-04-22 09:16, Andrzej Kwiatkowski wrote: >> On 20.04.2020 at 16:11, Olaf Frączyk wrote: >>> Hi, >>> >>> I'm running 3.0.5. >>> >>> I want to migrate to a new machine. I set

Re: Replication failed 3.0.5 -> 3.0.13

2020-04-22 Thread Olaf Frączyk
On 2020-04-22 09:16, Andrzej Kwiatkowski wrote: On 20.04.2020 at 16:11, Olaf Frączyk wrote: Hi, I'm running 3.0.5. I want to migrate to a new machine. I set up cyrus-imapd 3.0.13. The replication started but it didn't transfer all mails. The store isn't big, 44 GB; transferred was about 24

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-22 Thread Olaf Frączyk
this pretty much covered -- you need to disable the rolling replication for now, and then use sync_client -u (or if you're brave, sync_client -A) to get an initial sync of everything. These two options work entire-user-at-a-time, so they should detect and fix the problems introduced by the partial

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-22 Thread Olaf Frączyk
you use replication - are sieve scripts replicated as well? There is an -s option, called sieve mode, but you need to specify which users' files to replicate, and it is documented as being mostly for debugging. Yes, sieve scripts are replicated. The way the rolling replication works

Re: Replication failed 3.0.5 -> 3.0.13

2020-04-22 Thread Andrzej Kwiatkowski
On 20.04.2020 at 16:11, Olaf Frączyk wrote: > Hi, > > I'm running 3.0.5. > > I want to migrate to a new machine. I set up cyrus-imapd 3.0.13. > > The replication started but it didn't transfer all mails. > > The store isn't big, 44 GB; transferred was about 24

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread ellie timoney
I think Michael's got this pretty much covered -- you need to disable the rolling replication for now, and then use sync_client -u (or if you're brave, sync_client -A) to get an initial sync of everything. These two options work entire-user-at-a-time, so they should detect and fix the problems
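The procedure ellie describes (disable the rolling sync_client in cyrus.conf, then seed the replica user by user) can be sketched as a dry-run loop. The usernames are hypothetical placeholders; in practice you would run the printed commands as the cyrus user on the master:

```shell
# Dry-run sketch of the initial full sync, assuming rolling replication
# (sync_client -r) has already been disabled in cyrus.conf.
# alice/bob/carol are placeholder usernames, not real accounts.
for u in alice bob carol; do
  echo "sync_client -v -u $u"
done
# or everything in one pass (harder to resume if it is interrupted):
echo "sync_client -v -A"
```

The per-user form is easier to restart after a failure, since each completed user stays synced.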

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Michael Menge
Quoting Olaf Frączyk : Yes, at the beginning I was also wondering whether an initial sync was necessary, but there was nothing in the docs about it; something started replicating and I simply assumed it did an initial resync. I'll try it this evening. :) Since you use replication - are sieve scripts

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk
On 2020-04-21 16:00, Michael Menge wrote: Hi, Quoting Olaf Frączyk : I managed to get strace on both sides, however it doesn't make me wiser - there is nothing obvious for me. Additionally I see that replication works more or less for new messages, but older are not processed. I have

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Michael Menge
Hi, Quoting Olaf Frączyk : I managed to get strace on both sides, however it doesn't make me wiser - there is nothing obvious for me. Additionally I see that replication works more or less for new messages, but older are not processed. I have several subfolders in my mailbox, some

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk
I managed to get strace on both sides, however it doesn't make me wiser - there is nothing obvious for me. Additionally I see that replication works more or less for new messages, but older are not processed. I have several subfolders in my mailbox, some of them unreplicated. If I change

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Michael Menge
Quoting Olaf Frączyk : Thank you for the telemetry hint :) I don't use the syncserver - the replication is done via IMAP port on the replica side. I have no idea how to have strace spawned by cyrus master process. When I attach later to imapd using strace -p I'm afraid some info already

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk
Thank you for the telemetry hint :) I don't use the syncserver - the replication is done via the IMAP port on the replica side. I have no idea how to have strace spawned by the cyrus master process. When I attach later to imapd using strace -p, I'm afraid some info will already be lost

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Michael Menge
Olaf Frączyk : Hi, I upgraded to 3.0.13 but it didn't help. This time it copied about 18GB in the logs I still see: 1 - inefficient replication 2 - IOERROR: zero length response to MAILBOXES (idle for too long) IOERROR: zero length response to RESTART (idle for too long) Error in do_sync

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk
START section to SERVICES, it seems that it is not automatically restarted On 2020-04-21 08:47, Michael Menge wrote: Hi Olaf Quoting Olaf Frączyk : Hi, I upgraded to 3.0.13 but it didn't help. This time it copied about 18GB in the logs I still see: 1 - inefficient replication 2 - IOERROR

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk
to SERVICES, it seems that it is not automatically restarted On 2020-04-21 08:47, Michael Menge wrote: Hi Olaf Quoting Olaf Frączyk : Hi, I upgraded to 3.0.13 but it didn't help. This time it copied about 18GB in the logs I still see: 1 - inefficient replication 2 - IOERROR: zero length

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk
in the logs I still see: 1 - inefficient replication 2 - IOERROR: zero length response to MAILBOXES (idle for too long) IOERROR: zero length response to RESTART (idle for too long) Error in do_sync(): bailing out! Bad protocol But I have no idea what I can do next and why it fails Apr 21 02:24:46

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Michael Menge
Hi Olaf Quoting Olaf Frączyk : Hi, I upgraded to 3.0.13 but it didn't help. This time it copied about 18GB in the logs I still see: 1 - inefficient replication 2 - IOERROR: zero length response to MAILBOXES (idle for too long) IOERROR: zero length response to RESTART (idle for too long

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-20 Thread Olaf Frączyk
Hi, I upgraded to 3.0.13 but it didn't help. This time it copied about 18GB in the logs I still see: 1 - inefficient replication 2 - IOERROR: zero length response to MAILBOXES (idle for too long) IOERROR: zero length response to RESTART (idle for too long) Error in do_sync(): bailing out

Replication failed 3.0.5 -> 3.0.13

2020-04-20 Thread Olaf Frączyk
Hi, I'm running 3.0.5. I want to migrate to a new machine. I set up cyrus-imapd 3.0.13. The replication started but it didn't transfer all mails. The store isn't big, 44 GB; transferred was about 24 GB. In the logs I see: Apr 20 14:54:03 ifs sync_client[24239]: couldn't authenticate
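For a sync_client authentication failure like the one above, the usual place to look is the replication client settings in the master's imapd.conf. A minimal sketch with placeholder values; the sync user must also be listed as an admin on the replica:

```
# /etc/imapd.conf on the master -- host and credentials are placeholders
sync_log: 1
sync_host: replica.example.com
sync_authname: syncuser
sync_password: secret
```

The matching replica-side check is that syncuser appears in the replica's admins line and that the chosen SASL mechanism actually accepts that user/password pair.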

Re: Replication - current status and how to do failover

2020-04-07 Thread Bron Gondwana
On Sun, Apr 5, 2020, at 00:45, Olaf Frączyk wrote: > Hello, > > 1. Is master-master replication currently possible (maybe in 3.2)? Is it OK > to sync them two-way? No, not really. It'll mostly be fine, but it doesn't (yet) handle folder create/rename/delete safely. > If yes

Replication - current status and how to do failover

2020-04-04 Thread Olaf Frączyk
Hello, 1. Is master-master replication currently possible (maybe in 3.2)? Is it OK to sync them two-way? If yes - how to set up such a config? 2. If master-master is impossible, is there any guide on how to set up failover from master to slave and possibly back? If split-brain happens

Re: Public calendars and addressbooks (was RE: Backup compaction optimization in a block-level replication environment)

2019-11-21 Thread ellie timoney
On Wed, Nov 20, 2019, at 4:41 PM, Deborah Pickett wrote: > > I'm curious how these are working for you, or what sort of configuration > > and workflows leads to having #calendars and #addressbooks as top-level > > shared mailboxes?  I've only very recently started learning how our DAV bits > >

Public calendars and addressbooks (was RE: Backup compaction optimization in a block-level replication environment)

2019-11-19 Thread Deborah Pickett
> I'm curious how these are working for you, or what sort of configuration and workflows leads to having #calendars and #addressbooks as top-level shared mailboxes?  I've only very recently started learning how our DAV bits work (they have previously been black-boxes for me), and so far have only

Re: Backup compaction optimization in a block-level replication environment

2019-11-19 Thread ellie timoney
On Wed, Nov 20, 2019, at 11:06 AM, Deborah Pickett wrote: > On 2019-11-20 10:03, ellie timoney wrote: >>> foo also includes "#calendars" and "#addressbooks" on my server so there are weird characters to deal with. >>> >> Now that's an interesting detail to consider. >> > I should restate my

Re: Backup compaction optimization in a block-level replication environment

2019-11-19 Thread Deborah Pickett
On 2019-11-20 10:03, ellie timoney wrote: foo also includes "#calendars" and "#addressbooks" on my server so there are weird characters to deal with. Now that's an interesting detail to consider. I should restate my original message because I'm being fast and loose with the meaning of

Re: Backup compaction optimization in a block-level replication environment

2019-11-19 Thread ellie timoney
On Tue, Nov 19, 2019, at 9:38 AM, Deborah Pickett wrote: > > Food for thought. Maybe instead of having one "%SHARED" backup, having one > > "%SHARED.foo" backup per top-level shared folder would be a better > > implementation? I haven't seen shared folders used much in practice, so > > it's

Cyrus 3, automation and master-master replication and mailbox movement

2019-11-19 Thread Egoitz Aurrekoetxea via Info-cyrus
ove a mailbox from one partition to another (a rename to a different partition)… we usually do: - stop replication between master/slave (as a safety measure, to keep a very last "fall back" if the rename goes wrong). You know, promoting the slave to master would have the mailbox of the failed renaming

Re: Backup compaction optimization in a block-level replication environment

2019-11-18 Thread Deborah Pickett
Food for thought. Maybe instead of having one "%SHARED" backup, having one "%SHARED.foo" backup per top-level shared folder would be a better implementation? I haven't seen shared folders used much in practice, so it's interesting to hear about it. Looking at your own data, if you had one

Re: Cyrus doesn't preserve hard-links on replication

2019-11-18 Thread Adrien Remillieux
g, please edit your Subject line so it is more specific > than "Re: Contents of Info-cyrus digest..." > > > Today's Topics: > >1. Cyrus doesn't preserve hard-links on replication > (Adrien Remillieux) > > > --

Re: Cyrus doesn't preserve hard-links on replication

2019-11-18 Thread Adrien Remillieux
on replication ! On Sun 17 Nov 2019 at 18:00, wrote: > Send Info-cyrus mailing list submissions to > info-cyrus@lists.andrew.cmu.edu > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus > or, via ema

Re: Backup compaction optimization in a block-level replication environment

2019-11-17 Thread ellie timoney
> Related: I had to apply the patch described in > (https://www.mail-archive.com/info-cyrus@lists.andrew.cmu.edu/msg47320.html), > "backupd IOERROR reading backup files larger than 2GB", because during > initial population of my backup, chunks tended to by multiple GB in size > (my %SHARED user

Cyrus doesn't preserve hard-links on replication

2019-11-17 Thread Adrien Remillieux
Hello, I set up replication between two cyrus servers (master runs 2.5.10 and slave 3.0.8) with plans to decommission the old server once everything is working. I noticed that the mail spool takes 950GB instead of ~300GB on the old server. I suspected the hardlinks for message deduplication
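A quick way to gauge how much hard-link deduplication the old spool relies on is to count message files whose link count is above one. The default path below is an assumption; pass your real spool partition instead:

```shell
# Rough check of hard-link deduplication on a spool: count files that
# share storage via hard links (link count > 1).
# /var/spool/cyrus/mail is a hypothetical default path.
SPOOL=${1:-/var/spool/cyrus/mail}
find "$SPOOL" -type f -links +1 | wc -l
```

A large count here, combined with a much bigger spool on the replica, is consistent with replication writing each copy out separately instead of re-linking.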

Re: Backup compaction optimization in a block-level replication environment

2019-11-15 Thread Deborah Pickett
Further progress report: with small chunks, compaction takes about 15 times longer.  It's almost as if there is an O(n^2) complexity somewhere, looking at the rate that the disk file grows.  (Running perf on a compaction suggests that 90% of the time ctl_backups is doing compression,

Re: Backup compaction optimization in a block-level replication environment

2019-11-14 Thread Deborah Pickett
On 2019-11-11 11:10, ellie timoney wrote: This setting might be helpful: Thanks, I saw that setting but didn't really think through how it would help me.  I'll experiment with it and report back. That would be great, thanks! Progress report: I started with very large chunks (minimum 64 MB,

Re: Backup compaction optimization in a block-level replication environment

2019-11-10 Thread ellie timoney
tition, reconstruct, and it comes back as a new message (unread, no flags, etc)? You would need to be careful of the window between delivery of a message, replication to the replica, and deletion of the message (and replication of the deletion), to ensure you get a backup of the state where the message

Re: Backup compaction optimization in a block-level replication environment

2019-11-07 Thread Deborah Pickett
On 2019-11-08 09:13, ellie timoney wrote: I'm not sure if I'm just not understanding, but if the chunk offsets were to remain the same, then there's no benefit to compaction? A (say) 2gb file full of zeroes between small chunks is still the same 2gb on disk as one that's never been compacted

Re: Backup compaction optimization in a block-level replication environment

2019-11-07 Thread ellie timoney
nc > protocol.  What it does do well is block-level backups: if only a part > of a file has changed, only that part needs to be transferred over the > slow link.  [I haven't decided whether my technology will be the rsync > --checksum protocol, or Synology NAS XFS replication, or Micr

Backup compaction optimization in a block-level replication environment

2019-11-06 Thread Deborah Pickett
has changed, only that part needs to be transferred over the slow link.  [I haven't decided whether my technology will be the rsync --checksum protocol, or Synology NAS XFS replication, or Microsoft Server VFS snapshots.  They all do block-level backups well.] Since Cyrus backup files are append

Re: Possible issue when upgrading to cyrus 3.0.8 using replication ?

2019-09-17 Thread Adrien Remillieux
Thanks ! I'll look into it. On Mon 16 Sep 2019 at 18:01, wrote: > Date: Sun, 15 Sep 2019 13:04:31 -0600 > From: Scott Lambert > To: info-cyrus@lists.andrew.cmu.edu > Subject: Re: Possible issue when upgrading to cyrus 3.0.8 using > replication ? > Message-ID: &g

Re: Possible issue when upgrading to cyrus 3.0.8 using replication ?

2019-09-12 Thread ellie timoney
Hi Adrien, The replication upgrade path should be okay. In-place upgrades (that would use the affected reconstruct to bring mailboxes up to the same version as the server) would get bitten. Whereas if you replicate to a newer version server, the mailboxes on the replica will be created

Possible issue when upgrading to cyrus 3.0.8 using replication ?

2019-09-11 Thread Adrien Remillieux
Hello, I have a server that I can't update, running cyrus 2.5.10, which contains mailboxes that have existed since 2.3 and earlier (around 300 GB total). My plan is to update by enabling replication with a new server running Debian Buster (so cyrus 3.0.8) and then shutting down the old server

Re: Issues with replication and folder/Sieve subscription

2019-07-18 Thread Egoitz Aurrekoetxea
ned> www.sarenet.es Before printing this email, consider whether it is necessary. > On 10 Jul 2019, at 10:03, Egoitz Aurrekoetxea wrote: > > The subject of this email is not properly set… it should be: Issues in > replication

Re: Issues with replication and folder/Sieve subscription

2019-07-10 Thread Egoitz Aurrekoetxea
It would be better to just talk to the daemons, IMHO… That should work… but sometimes other things could be involved… so the cleanest way would be to act as a normal client, feeding all the source scripts to Cyrus with an expect script for instance… using sieveshell… and not copying files

Re: Issues with replication and folder/Sieve subscription

2019-07-10 Thread Egoitz Aurrekoetxea
The subject of this email is not properly set… it should be: Issues in replication with folder subscription and Sieve. As I think I discovered something, I'm reopening a new thread with the proper title. Egoitz Aurrekoetxea Dpto. de sistemas 944 209 470 Parque Tecnológico. Edificio 103 48170

Re: Issues with replication and folder/Sieve subscription

2019-07-10 Thread Albert Shih
On 09/07/2019 at 22:49:01+0200, Egoitz Aurrekoetxea wrote: > By the way, for your case I would recommend doing a script that does a get > from > dovecot and a put to Cyrus instead of copying Sieve files directly… it's a > much > cleaner way… Yes, that is what I did; before I tried the sync I

Re: Issues with replication and folder/Sieve subscription

2019-07-10 Thread Albert Shih
On 09/07/2019 at 22:44:19+0200, Egoitz Aurrekoetxea wrote: Hi, > > If instead of -A you used -u for each of your users, did it work? Or did it If I remember correctly (but I'm not sure) this is how I found the problem: first tried -A, noticed it crashed, so tried -u first_user, noticed it worked

Re: Issues with replication and folder/Sieve subscription

2019-07-09 Thread Egoitz Aurrekoetxea
By the way, for your case I would recommend doing a script that does a get from dovecot and a put to Cyrus instead of copying Sieve files directly… it's a much cleaner way… Cheers! Egoitz Aurrekoetxea Dpto. de sistemas 944 209 470 Parque Tecnológico. Edificio 103 48170 Zamudio (Bizkaia)

Re: Issues with replication and folder/Sieve subscription

2019-07-09 Thread Egoitz Aurrekoetxea
Hi Albert, If instead of -A you used -u for each of your users, did it work? Or did it crash on the same user as with -A? Which Cyrus version were you running? Cheers, Egoitz Aurrekoetxea Dpto. de sistemas 944 209 470 Parque Tecnológico. Edificio 103 48170 Zamudio (Bizkaia)

Re: Issues with replication and folder/Sieve subscription

2019-07-09 Thread Albert Shih
On 09/07/2019 at 14:10:49+0200, Egoitz Aurrekoetxea wrote: > Good morning, > > > After we upgraded to Cyrus 3.0.8, we saw that some users in the replicas > didn't > have some folders (or all) subscribed the same way they had in the previous env on > Cyrus 2.3. The same happened for some users with Sieve

Re: Issues with replication and folder/Sieve subscription

2019-07-09 Thread Egoitz Aurrekoetxea
Could some of this perhaps have something to do with having intermediate folders subscribed/unsubscribed in the middle of the tree? And could that cause something that perhaps does not occur when the mailbox is being accessed by the user instead of being replicated (when the change is applied

Issues with replication and folder/Sieve subscription

2019-07-09 Thread Egoitz Aurrekoetxea
Good morning, After we upgraded to Cyrus 3.0.8, we saw that some users in the replicas didn't have some folders (or all) subscribed the same way they had in the previous env on Cyrus 2.3. The same happened for some users with Sieve scripts. It seemed the content itself was perfectly copied. It was

Question about what is replicated with Cyrus replication and what isn't

2019-02-17 Thread egoitz
Hi! Previously (in 2.3 and older versions), cyr_expire and ipurge actions, for instance, were not replicated to the slave, so you needed to launch them on both the master and the slave. My question is: are they now replicated as mailbox replication commands? What about commands like squatter -F

Re: Big problem with replication

2019-01-16 Thread Albert Shih
On 16/01/2019 at 17:10:30+0100, Egoitz Aurrekoetxea wrote: > Good afternoon, > > > I would try doing it user by user (with -u). This way you would have all > synced > except the problematic mailbox. Hi, thanks for the help. I got some progress in my problem : > [root@imap-mirror-p

Re: Big problem with replication

2019-01-16 Thread Egoitz Aurrekoetxea
On 16-01-2019 16:15, Albert Shih wrote: > Hi everyone. > > I've got some big issue with replication. > > I've > > master --- replica ---> slave_1 --- replica ---> slave_2 > &g

Big problem with replication

2019-01-16 Thread Albert Shih
Hi everyone. I've got some big issue with replication. I've master --- replica ---> slave_1 --- replica ---> slave_2 The replication between master and slave_1 works nicely. Between slave_1 and slave_2 I've got some issue (log too big after network failure and work nagios_super

Re: Question about manual replication (-u )

2019-01-08 Thread Egoitz Aurrekoetxea
Thanks a lot Bron!!! :) :) --- EGOITZ AURREKOETXEA Departamento de sistemas 944 209 470 Parque Tecnológico. Edificio 103 48170 Zamudio (Bizkaia) ego...@sarenet.es www.sarenet.es On 08-01-2019 12:02, Bron

Re: Question about manual replication (-u )

2019-01-08 Thread Bron Gondwana
Yep, that's totally safe. Even doing the same user twice at the same time should be safe, though it may do extra work. Bron. On Tue, Jan 8, 2019, at 05:05, Egoitz Aurrekoetxea wrote: > Good afternoon, > > I know it seems a pretty stupid question, but some time ago, you cannot have > a

Re: Question about manual replication (-u )

2019-01-07 Thread Egoitz Aurrekoetxea
Good afternoon, I know it seems a pretty stupid question, but some time ago you could not have a Cyrus server acting, for instance, as both a master and a slave... it was not supported... it worked... but was not supported... so having multiple sync_client instances... perhaps could damage something (although

Re: Simple replication question

2018-11-15 Thread Nic Bernstein
On 11/15/18 2:16 AM, Zorg wrote: I've one cyrus imap server and I want to create a replicated one. I have read the documentation but nothing explains how to start the first replication. If my slave master is empty how can I synchronise them the first time? Once you've got replication configured
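The moving parts Nic refers to can be sketched in cyrus.conf terms: the replica runs a sync_server service, and the master runs a rolling sync_client once the initial synchronisation is done. Paths and service names vary by distribution, so this is a sketch, not a drop-in config:

```
# replica /etc/cyrus.conf -- accept sync connections
SERVICES {
  syncserver cmd="sync_server" listen="csync"
}

# master /etc/cyrus.conf -- rolling replication after the initial sync
START {
  syncclient cmd="sync_client -r"
}
```

For the very first synchronisation of an empty replica, a one-off `sync_client -A` (or `-u` per user) on the master is the usual seeding step before enabling the rolling client.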

Simple replication question

2018-11-15 Thread Zorg
Hello, I've one cyrus imap server and I want to create a replicated one. I have read the documentation but nothing explains how to start the first replication. If my slave master is empty how can I synchronise them the first time? Thanks Cyrus Home Page: http://www.cyrusimap.org/ List

master-master replication

2018-09-13 Thread Evgeniy Kononov via Info-cyrus
each on >two locations. We have a total of ~44000 accounts, ~457000 Mailboxes, >and 2x6.5 TB Mails > >Each server is running 3-4 instances. One frontend, two backend/replic >and on one of the servers the cyrus mupdate master. Each Server on one >location is paired with one server o

Re: master-master replication

2018-09-13 Thread Michael Menge
of ~44000 accounts, ~457000 Mailboxes, and 2x6.5 TB Mails Each server is running 3-4 instances. One frontend, two backend/replic and on one of the servers the cyrus mupdate master. Each Server on one location is paired with one server on the other location for replication so in normal operation o

Re: master-master replication

2018-09-13 Thread Evgeniy Kononov via Info-cyrus
regards. > Thursday, 13 September 2018, 13:22 +05:00 from Michael Menge: > >Hi, > >This setup is NOT SUPPORTED and WILL BREAK if the replication process >is triggered >from the wrong server (user is active on both servers, user switched >from one server >to the other

Re: Re[3]: master-master replication

2018-09-13 Thread Michael Menge
Hi, This setup is NOT SUPPORTED and WILL BREAK if the replication process is triggered from the wrong server (user is active on both servers, user switched from one server to the other while the sync-log file is still processed, after split brain) and some mailboxes have been subscribed

Re[3]: master-master replication

2018-09-13 Thread Evgeniy Kononov via Info-cyrus
Sorry! Previous message was sent by mistake. For example, I can configure both servers as follows. Server A. - /etc/cyrus.conf
START {
...
syncclient cmd="sync_client -r"
...
}
SERVICES {
...
syncserver cmd="sync_server" listen="csync"
...
}
/etc/imapd.conf ...

Re[2]: master-master replication

2018-09-13 Thread Evgeniy Kononov via Info-cyrus
For example, on server A  -- Evgeniy Kononov

Re[2]: master-master replication

2018-09-12 Thread Evgeniy Kononov via Info-cyrus
wait, but if I create a folder on the master, it perfectly syncs to the replica. Also when I delete the folder on the master, it is also deleted on the replica. It means that information about subscriptions to folders is transmitted when synchronizing. But it works only if the client is the

Re: master-master replication

2018-09-12 Thread Michael Menge
master-master replication. Because of CONDSTORE (https://tools.ietf.org/html/rfc4551) Cyrus is able to handle messages in a master-master setup, but the information about folder operations is not tracked, and so cyrus is unable to distinguish if a folder was subscribed on one server, or if a folder

master-master replication

2018-09-12 Thread Evgeniy Kononov via Info-cyrus
Hello! I have two servers with cyrus-imapd-2.5.8-13.3.el7.centos.kolab_16.x86_64. One server as master and the second as replica. All worked fine when users logged in on the master server, but when I temporarily moved users to the replica I found some trouble. Message synchronisation from replica to

Re: switch master/slave replication

2018-06-27 Thread Antonio Conte
* 27/06/2018, Bron Gondwana wrote : >Yep, that will be enough. The only thing it might not catch is if >there are users on the replica which aren't present on the master (for >whatever reason)... in that case, they will remain on the replica >still. OK, I'll check, but I don't think

Re: switch master/slave replication

2018-06-27 Thread Bron Gondwana
> I have to switch a master/replica in rolling replication. > > How can I be sure the replication is totally done? Is it > enough to run > > /usr/lib/cyrus/sync_client -v -A > > from the master? and then, which operation do I have to do? > > thanks in advance >

switch master/slave replication

2018-06-26 Thread Antonio Conte
hi all, I have to switch a master/replica in rolling replication. How can I be sure the replication is totally done? Is it enough to run /usr/lib/cyrus/sync_client -v -A from the master? And then, which operation do I have to do? thanks in advance -- Never try to teach a pig to sing. It wastes

Re: Backup vs replication

2018-05-20 Thread ellie timoney
Hi Albert, The main logical difference between ordinary replication and the experimental backup system in Cyrus 3.0 is that in a replicated system, the replica is a copy of the account's current state (as of the last replication). The backup system is a historical record, not just a current

Backup vs replication

2018-05-18 Thread Albert Shih
Hi everyone, I'm not sure I really understand what's the benefit of backup (cyrus > 3.x) vs replication? Is the main goal to save disk space with compression? Fewer inodes (with large files)? I believe adding the backup feature to cyrus-imapd was/is still a lot of work. So what's the advant

Re: Annotation replication with sync_client -u seems busted in 2.4.X

2018-03-20 Thread ellie timoney
in the 2.5 code that replicates folder annotations. > > Annotation replication does work in rolling replication mode. > > Or have I busted it with other mods I make? > > Patch attached that fixes it for me. > > John Capo

Annotation replication with sync_client -u seems busted in 2.4.X

2018-03-20 Thread John Capo
Replicating annotations when sync_client -u is used to move mailboxes to a different server does not work in 2.4.20, and probably not in 2.5.X either. At least I can't find any place in the 2.5 code that replicates folder annotations. Annotation replication does work in rolling replication mode

Synchronous Replication reg.

2017-08-21 Thread Anant Athavale
Dear Members, I am already running Cyrus-IMAP for storing mailboxes with quota and sieve filter features on RHLE 7. I have the following requirement. 1. The same mailboxes also should be accessible from another site and that site also should run Cyrus-IMAP ( a kind of replication). 2

Re: What is the state of Master-Master replication?

2017-07-28 Thread Michael Sofka
On 07/26/2017 04:54 PM, Michael Sofka wrote: A while back there was some discussion of supporting Master-Master replication in Cyrus. I'm busy updating from 2.4.17 to 3.0.2. What is the state of Master-Master, as opposed to Master-Replica replications? My current configuration is a Murder

What is the state of Master-Master replication?

2017-07-26 Thread Michael Sofka
A while back there was some discussion of supporting Master-Master replication in Cyrus. I'm busy updating from 2.4.17 to 3.0.2. What is the state of Master-Master, as opposed to Master-Replica replications? My current configuration is a Murder cluster with three front-end servers, two back

Re: Replication failing - IMAP_SYNC_CHECKSUM Checksum Failure

2016-08-22 Thread Bron Gondwana via Info-cyrus
user with a different uniqueid. You shouldn't be creating users on replicas. Bron. On Tue, 23 Aug 2016, at 10:38, Tod A. Sandman via Info-cyrus wrote: > I resorted to deleting the mailbox on the replication slave and trying to > start from scratch, but I get nowhere. > > On slave

Re: Replication failing - IMAP_SYNC_CHECKSUM Checksum Failure

2016-08-22 Thread Tod A. Sandman via Info-cyrus
I resorted to deleting the mailbox on the replication slave and trying to start from scratch, but I get nowhere. On slave: cyrus@cyrus2c:~> cyradm --user mailadmin `hostname` cyrus2c.mail.rice.edu> dm user/lamemm7 cyrus2c.mail.rice.edu> cm --partition cyrus2g use

Re: Replication failing - IMAP_SYNC_CHECKSUM Checksum Failure

2016-08-22 Thread Bron Gondwana via Info-cyrus
What do you see in syslog? (both for the reconstruct and later when the sync_client runs) On Tue, 23 Aug 2016, at 03:49, Tod A. Sandman via Info-cyrus wrote: > I'm using rolling replication with cyrus-imapd-2.5.9. sync_client died and I > am not able to get replication working again.

Replication failing - IMAP_SYNC_CHECKSUM Checksum Failure

2016-08-22 Thread Tod A. Sandman via Info-cyrus
I'm using rolling replication with cyrus-imapd-2.5.9. sync_client died and I am not able to get replication working again. I've narrowed it down to one mailbox, user.lamemm7, and I've successfully reconstructed the mailbox on both replication partners with various options
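A common recovery path for a single mailbox that keeps failing with IMAP_SYNC_CHECKSUM is to rebuild its checksums on the master and then re-push just that user. This is a sketch under assumptions: the mailbox name is taken from the thread, and the flags should be confirmed against the reconstruct and sync_client man pages for 2.5.x.

```
# On the master: re-parse the mailbox and recalculate GUIDs/checksums.
/usr/lib/cyrus/reconstruct -G "user.lamemm7"

# Then replicate only the affected user:
/usr/lib/cyrus/sync_client -v -u lamemm7
```

If the replica copy is already inconsistent, deleting the mailbox there first (as attempted later in this thread) forces a clean re-upload, but note Bron's caveat below about not creating users directly on replicas.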

Re: Replication CRC error

2016-03-06 Thread Bron Gondwana via Info-cyrus
dently.” Source: > https://cyrusimap.org/imap/admin/sop/replication.html > > But how in this case restore replication for shared folders? Run sync > for each folder separately? > > On Wed, Mar 2, 2016, 6:01 PM Konstantin Udalov via Info-cyrus cy...@lists.andrew.cmu.edu> wrote: >>

Re: Replication CRC error

2016-03-02 Thread Artyom Aleksandrov via Info-cyrus
restore replication for shared folders? Run sync for each folder separately? On Wed, Mar 2, 2016, 6:01 PM Konstantin Udalov via Info-cyrus < info-cyrus@lists.andrew.cmu.edu> wrote: > Hello. > > I configured IMAP rolling replication from master to slave as was > described in https://
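On the per-folder question: sync_client has a mailbox mode alongside its per-user mode, so individual shared folders can be pushed one at a time. A sketch, with a placeholder folder name; check the sync_client man page for your version:

```
# Replicate a single (shared) mailbox by name:
/usr/lib/cyrus/sync_client -v -m "shared.announcements"
```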

Replication CRC error

2016-03-02 Thread Konstantin Udalov via Info-cyrus
Hello. I configured IMAP rolling replication from master to slave as was described in https://cyrusimap.org/imap/admin/sop/replication.html And everything was fine. Until some strange records appear into logs. Here is one of them. Mar 2 04:48:55 hostname cyrus/sync_client[725]: MAILBOX
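For context, the rolling replication setup referenced in that SOP boils down to roughly the following master-side configuration. This is a sketch with placeholder hostnames and credentials; the authoritative option list is in imapd.conf(5).

```
# imapd.conf on the master (values are placeholders):
sync_log: 1
sync_host: replica.example.com
sync_authname: repluser
sync_password: secret

# cyrus.conf on the master, START section, to run the rolling sync client:
#   syncclient cmd="sync_client -r"
```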

replication with virtual domains

2016-02-04 Thread Michael Plate via Info-cyrus
Hi, I'm going to sync between a Gentoo (master) and a Debian 8 machine (replica), Gentoo on Cyrus 2.4.17, Debian on "testing" Cyrus 2.4.18 because of broken sync in "stable" (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=799724). I am using virtual domains. I've tried synctest from the

Re: Replication problem do_folders(): failed to rename

2015-12-14 Thread Marcus Schopen via Info-cyrus
Am Montag, den 14.12.2015, 07:31 -0400 schrieb Patrick Boutilier via Info-cyrus: > On 12/14/2015 06:25 AM, Marcus Schopen via Info-cyrus wrote: > > Am Freitag, den 11.12.2015, 19:10 +0100 schrieb Marcus Schopen via > > Info-cyrus: > >> Hi, > >> > >> I have a problem with a single mailbox. The

Re: Replication problem do_folders(): failed to rename

2015-12-14 Thread Michael Menge via Info-cyrus
Hi, Quoting Patrick Boutilier via Info-cyrus : On 12/14/2015 06:25 AM, Marcus Schopen via Info-cyrus wrote: Am Freitag, den 11.12.2015, 19:10 +0100 schrieb Marcus Schopen via Info-cyrus: Hi, I have a problem with a single mailbox. The user's Outlook crashed

Re: Replication problem do_folders(): failed to rename

2015-12-14 Thread Marcus Schopen via Info-cyrus
Hi, Am Montag, den 14.12.2015, 12:53 +0100 schrieb Michael Menge via Info-cyrus: > Hi, > > > Quoting Patrick Boutilier via Info-cyrus : > > > On 12/14/2015 06:25 AM, Marcus Schopen via Info-cyrus wrote: > >> Am Freitag, den 11.12.2015, 19:10 +0100 schrieb
