Re: Replication : IMAP_PROTOCOL_ERROR Protocol error

2020-10-01 Thread Jean Charles Delépine
Jean Charles Delépine wrote:

> Jean Charles Delépine wrote:
> 
> > Hello,
> > 
> > I'm in the process of migrating a quite big murder config from Cyrus IMAP
> > 3.0.8-Debian-3.0.8-6+deb10u4 to Cyrus IMAP 3.2.3-Debian-3.2.3-1~bpo10+1.
> > 
> > My plan is to replicate the 3.0.8 backends onto 3.2.3 ones. This plan has
> > worked before for the 2.5 to 3.0 migration.
> > 
> > I can replicate empty mailboxes, so the conf (attached) seems OK.
> > But if the mailbox isn't empty, here's the result (complete traces
> > attached):
> 
> The conf might not be _that_ ok.
> 
> If the option conversations is set to 1 and I create new mailboxes, and
> send mails to these mailboxes, no sync of these mailboxes is possible.
> 
> If I remove the option 'conversations: 1', any new populated mailbox can
> be synced.
> 
> So, the problem seems to be in this option.
> 
> But even after removing "conversations: 1" and zeroing the conversation
> indexes (ctl_conversationsdb -z), the old "bad" mailboxes can't be synced.
> Even removing their .conversations and .counters files doesn't help.
> 
> Can I, and how can I, get rid of those conversation indexes so that my
> mailboxes behave as if there had never been conversations?

After removing the "conversations: 1" option AND restarting the server,
ctl_conversationsdb -z correctly removed the conversation indexes and
the replication finally works fine.

I did have to revert xapian usage (which needs conversations) and switch
back to squat indexes.
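
For the record, a minimal sketch of the sequence that worked here (the
restart command is the Debian default; the userid is a placeholder):

  # in /etc/imapd.conf: drop the conversations option and switch
  # search back to squat (xapian requires conversations)
  #conversations: 1
  search_engine: squat

  # restart Cyrus so running daemons pick up the change
  systemctl restart cyrus-imapd

  # then zero the conversations DB (shown per user; the man page
  # documents a -r flag for recursing over all users)
  ctl_conversationsdb -z user1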

Jean Charles Delépine


Re: Replication : IMAP_PROTOCOL_ERROR Protocol error

2020-09-26 Thread Jean Charles Delépine
ellie timoney wrote:

> On Thu, 24 Sep 2020, at 1:44 AM, Jean Charles Delépine wrote:
> > Is this a known problem corrected after 3.0.9 ?
> 
> Off the top of my head I no longer remember, but the current release in the 
> 3.0 series is 3.0.14.  I'd suggest, if you haven't already, that you look in 
> the release notes from 3.0.8-3.0.14 and see if anything looks relevant.  
> They're here: 
> https://www.cyrusimap.org/imap/download/release-notes/3.0/index.html

I didn't find anything relevant.

> We don't generally recommend in-place upgrades between series (so, upgrading 
> in-place from 3.0.8 to 3.2.3 is not something we'd recommend).  
> Within-series, an in-place upgrade ought to be safe -- but please check the 
> release notes carefully for extra steps/considerations you may need to make, 
> depending on your deployment.  I think you'll probably want to upgrade your 
> 3.0 systems in place as far forward as you can (while staying 3.0), and then 
> use the replication strategy to upgrade to 3.2 after that.

I just did that. My test server is now running 3.0.14 (self-built Debian package):

>1601118406>APPLY MAILBOX %(UNIQUEID m8lfz12835tr5y3dfucebk95 MBOXNAME 
>user.testes2 SYNC_CRC 2393559122 SYNC_CRC_ANNOT 4164967045 LAST_UID 2 
>HIGHESTMODSEQ 4 RECENTUID 0 RECENTTIME 0 LAST_APPENDDATE 1601117744 
>POP3_LAST_LOGIN 0 POP3_SHOW_AFTER 0 XCONVMODSEQ 4 UIDVALIDITY 1601117689 
>PARTITION default ACL "testes2 lrswipkxtecdan " OPTIONS P RECORD 
>(%(UID 1 MODSEQ 4 LAST_UPDATED 1601118406 FLAGS (\Expunged) INTERNALDATE 
>1601117744 SIZE 2890 GUID 6f160d7026f4adbaebb2d0941c6398272a8692da ANNOTATIONS 
>(%(ENTRY /vendor/cmu/cyrus-imapd/thrid USERID NIL VALUE fee5d4912d9da6a8))) 
>%(UID 2 MODSEQ 3 LAST_UPDATED 1601118406 FLAGS () INTERNALDATE 1601117744 SIZE 
>2890 GUID 6f160d7026f4adbaebb2d0941c6398272a8692da ANNOTATIONS (%(ENTRY 
>/vendor/cmu/cyrus-imapd/thrid USERID NIL VALUE fee5d4912d9da6a8)
<1601118406
1601118406>EXIT

Re: Replication : IMAP_PROTOCOL_ERROR Protocol error

2020-09-24 Thread ellie timoney
On Thu, 24 Sep 2020, at 1:44 AM, Jean Charles Delépine wrote:
> Is this a known problem corrected after 3.0.9 ?

Off the top of my head I no longer remember, but the current release in the 3.0 
series is 3.0.14.  I'd suggest, if you haven't already, that you look in the 
release notes from 3.0.8-3.0.14 and see if anything looks relevant.  They're 
here: https://www.cyrusimap.org/imap/download/release-notes/3.0/index.html

We don't generally recommend in-place upgrades between series (so, upgrading 
in-place from 3.0.8 to 3.2.3 is not something we'd recommend).  Within-series, 
an in-place upgrade ought to be safe -- but please check the release notes 
carefully for extra steps/considerations you may need to make, depending on 
your deployment.  I think you'll probably want to upgrade your 3.0 systems in 
place as far forward as you can (while staying 3.0), and then use the 
replication strategy to upgrade to 3.2 after that.

Re: Replication : IMAP_PROTOCOL_ERROR Protocol error

2020-09-24 Thread Jean Charles Delépine
Jean Charles Delépine wrote:

> Hello,
> 
> I'm in the process of migrating a quite big murder config from Cyrus IMAP
> 3.0.8-Debian-3.0.8-6+deb10u4 to Cyrus IMAP 3.2.3-Debian-3.2.3-1~bpo10+1.
> 
> My plan is to replicate the 3.0.8 backends onto 3.2.3 ones. This plan has
> worked before for the 2.5 to 3.0 migration.
> 
> I can replicate empty mailboxes, so the conf (attached) seems OK.
> But if the mailbox isn't empty, here's the result (complete traces attached):

The conf might not be _that_ ok.

If the option conversations is set to 1 and I create new mailboxes, and
send mails to these mailboxes, no sync of these mailboxes is possible.

If I remove the option 'conversations: 1', any new populated mailbox can
be synced.

So, the problem seems to be in this option.

But even after removing "conversations: 1" and zeroing the conversation
indexes (ctl_conversationsdb -z), the old "bad" mailboxes can't be synced.
Even removing their .conversations and .counters files doesn't help.

Can I, and how can I, get rid of those conversation indexes so that my
mailboxes behave as if there had never been conversations?

Sincerely,
  Jean Charles Delépine


Replication : IMAP_PROTOCOL_ERROR Protocol error

2020-09-23 Thread Jean Charles Delépine

Hello,

I'm in the process of migrating a quite big murder config from Cyrus IMAP
3.0.8-Debian-3.0.8-6+deb10u4 to Cyrus IMAP 3.2.3-Debian-3.2.3-1~bpo10+1.

My plan is to replicate the 3.0.8 backends onto 3.2.3 ones. This plan has
worked before for the 2.5 to 3.0 migration.

I can replicate empty mailboxes, so the conf (attached) seems OK.
But if the mailbox isn't empty, here's the result (complete traces attached):

cyrus/sync_client[3082351]: MAILBOX received NO response:  
IMAP_PROTOCOL_ERROR Protocol error

cyrus/sync_client[3082351]: do_folders(): update failed: user.t 'Bad protocol'
cyrus/sync_client[3082351]: IOERROR: do_user_main: Bad protocol for t  
to [no channel] (test-3.2.3)

Error from sync_do_user(t): bailing out!
cyrus/sync_client[3082351]: Error in sync_do_user(t): bailing out!

The destination server says:
   SYNCERROR: failed to parse uploaded record


If I upgrade this test server to 3.2.3-Debian-3.2.3-1~bpo10+1 with
the same configuration, the same mailbox can be synced, with the problem
being corrected:


APPLY MAILBOX %(UNIQUEID 6kjf8ro4032wfjefcdewaqyl MBOXNAME user.t  
SYNC_CRC 3435668400 SYNC_CRC_ANNOT 1359939586 LAST_UID 1 HIGHESTMODSEQ  
3 RECENTUID 0 RECENTTIME 0 LAST_APPENDDATE 0 POP3_LAST_LOGIN 0  
POP3_SHOW_AFTER 0 XCONVMODSEQ 3 UIDVALIDITY 1600855436 PARTITION  
default ACL "" OPTIONS P FOLDERMODSEQ 3 SINCE_MODSEQ 3 SINCE_CRC 0  
SINCE_CRC_ANNOT 12345678 RECORD ())

<1600872848
cyrus/sync_client[3131948]: MAILBOX received NO response: IMAP_SYNC_CHECKSUM Checksum Failure
cyrus/sync_client[3131948]: SYNC_NOTICE: CRC failure on sync user.t, recalculating counts and trying again


Error but retry.

MAILBOX user.t
1600872848>APPLY MAILBOX %(UNIQUEID 6kjf8ro4032wfjefcdewaqyl  
MBOXNAME user.t SYNC_CRC 3435668400 SYNC_CRC_ANNOT 1370712396  
LAST_UID 1 HIGHESTMODSEQ 3 RECENTUID 0 RECENTTIME 0 LAST_APPENDDATE  
0 POP3_LAST_LOGIN 0 POP3_SHOW_AFTER 0 XCONVMODSEQ 3 UIDVALIDITY  
1600855436 PARTITION default ACL "" OPTIONS P FOLDERMODSEQ 3 RECORD  
())

<1600872848
cyrus/sync_client[3131948]: MAILBOX received NO response: IMAP_SYNC_CHECKSUM Checksum Failure

cyrus/sync_client[3131948]: CRC failure on sync for user.t, trying full update

Error then full update

FULLMAILBOX user.t

1600872848>GET FULLMAILBOX user.t
<1600872848<* MAILBOX %(UNIQUEID 6kjf8ro4032wfjefcdewaqyl MBOXNAME  
user.t SYNC_CRC 0 SYNC_CRC_ANNOT 12345678 LAST_UID 1 HIGHESTMODSEQ 3  
RECENTUID 0 RECENTTIME 0 LAST_APPENDDATE 0 POP3_LAST_LOGIN 0  
POP3_SHOW_AFTER 0 XCONVMODSEQ 3 UIDVALIDITY 1600855436 PARTITION  
default ACL "" OPTIONS P FOLDERMODSEQ 3 RECORD ())

OK success
cyrus/sync_client[3131948]: user.t: same message appears twice 1 2
MAILBOX user.t
1600872849>APPLY MESSAGE (%{default  
f0eeef2cc42f23884089760cf5de1961b358a498 2627}



)
<1600872849
1600872849>APPLY MAILBOX %(UNIQUEID 6kjf8ro4032wfjefcdewaqyl  
MBOXNAME user.t SYNC_CRC 1009437458 SYNC_CRC_ANNOT 455745080  
LAST_UID 2 HIGHESTMODSEQ 5 RECENTUID 0 RECENTTIME 0 LAST_APPENDDATE  
0 POP3_LAST_LOGIN 0 POP3_SHOW_AFTER 0 XCONVMODSEQ 5 UIDVALIDITY  
1600855436 PARTITION default ACL "" OPTIONS P FOLDERMODSEQ 3 RECORD  
(%(UID 1 MODSEQ 5 LAST_UPDATED 1600872848 FLAGS (\Expunged)  
INTERNALDATE 1600855652 SIZE 2627 GUID  
f0eeef2cc42f23884089760cf5de1961b358a498 ANNOTATIONS (%(ENTRY  
/vendor/cmu/cyrus-imapd/thrid USERID "" MODSEQ 0 VALUE  
611d36cdcf46289a))) %(UID 2 MODSEQ 4 LAST_UPDATED 1600872848 FLAGS  
() INTERNALDATE 1600855652 SIZE 2627 GUID  
f0eeef2cc42f23884089760cf5de1961b358a498 ANNOTATIONS (%(ENTRY  
/vendor/cmu/cyrus-imapd/thrid USERID "" MODSEQ 0 VALUE  
611d36cdcf46289a)

<1600872849
1600872849>APPLY MAILBOX %(UNIQUEID nckzm4o710aom3c22o9ugy6a  
MBOXNAME user.t.Drafts SYNC_CRC 0 SYNC_CRC_ANNOT 0 LAST_UID 0  
HIGHESTMODSEQ 2 RECENTUID 0 RECENTTIME 0 LAST_APPENDDATE 0  
POP3_LAST_LOGIN 0 POP3_SHOW_AFTER 0 XCONVMODSEQ 2 UIDVALIDITY  
160087 PARTITION default ACL "" OPTIONS P FOLDERMODSEQ 2 RECORD  
())

<1600872849
1600872849>EXIT

<1600872849
Is there something I can do to successfully replicate my backends to
my new servers?


Sincerely,
Jean Charles Delépine
configdirectory: /var/lib/cyrus
proc_path: /run/cyrus/proc
mboxname_lockpath: /run/cyrus/lock
defaultpartition: default
partition-default: /var/spool/cyrus/mail
partition-news: /var/spool/cyrus/news
newsspool: /var/spool/news
altnamespace: no
postuser: share-post
unixhierarchysep: no
lmtp_downcase_rcpt: yes
allowanonymouslogin: no
popminpoll: 1
autocreate_post: yes
autocreate_quota: 0
umask: 077
sieveusehomedir: false
sievedir: /var/spool/sieve
httpmodules: caldav carddav
hashimapspool: true
sasl_auto_transition: no
tls_client_ca_dir:
tls_session_timeout:
tls_versions:
lmtpsocket: /var/run/cyrus/socket/lmtp
idlesocket: /var/run/cyrus/socket/idle
notifysocket: /var/run/cyrus/socket/notify
syslog_prefix: cyrus
admins: mailadmin
metapartition_files: header index cache expunge squat annotations lock dav archivecache
metapartition-default: /var/lib/cyrus/metas
search_engine: xapian
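
The attached imapd.conf shows no replication-specific settings; for
reference, a hedged sketch of what the sync-related pieces usually look
like on the sending side (hostname and credentials are placeholders):

  # in imapd.conf on the master:
  sync_log: 1
  sync_host: new-backend.example.org
  sync_authname: repluser
  sync_password: secret

  # in the DAEMON section of cyrus.conf, the rolling replication client:
  syncclient cmd="sync_client -r"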

Question about replication, split-brains and unclean failover

2020-06-17 Thread ego...@sarenet.es
Hi!,


I was writing some code for automating server fail-over and was trying to
see how I should, or could, handle not-yet-run log files from sync_client.
With a clean shutdown it's pretty easy to know how to manage, because the
replication is up to date… so almost no problem there, it's pretty fast...
The problem comes with an unclean shutdown, where some delay exists.

I sometimes suffer unclean shutdowns of the kind described under "Unclean
failover" at https://www.fastmail.com/help/technical/architecture.html.
It also says some improvements were going to be made (or perhaps were
already committed) to the replication in order to avoid them. But given
the way replication is handled (as I have seen in the source code
regarding locks, modseq checks and so on) and the way Cyrus writes
replication logs for later rolling replication, I assume that replaying
the logs from the former replica (just failed over to), in order to cover
a split-brain, is not carried out, at least nowadays? Perhaps I am wrong?
I ask this just to confirm and to avoid having wrong ideas…

If it’s undone or at least partially undone, I would love really doing 
something… although unfortunately due to our high work load... I can’t say when 
I could have some time for it… but I’ll try my bests...



Cheers!


Re: Replication and Deleted Files

2020-06-04 Thread Ian Batten via Info-cyrus



On Thu 04 Jun 2020 at 18:57:37, Michael Menge 
(michael.me...@zdv.uni-tuebingen.de) wrote:



you also need to run cyr_expire on the "new_server" to remove the old
expunged mails and deleted folders.


Obvious when you try it!    Thanks so much.  

Expired 23 and expunged 7617 out of 289060 messages from 268 mailboxes

For some reason I had decided that you only ran cyr_expire on the master, and I 
was quite emphatic about it some years ago:

  # expire old stuff: dups 7 days, keep deletions for 3 days
  # XXX XXX XXX expire does not run on replica, does run on master XXX XXX XXX
  # expire        cmd="cyr_expire -E 7 -X 3 -D 3" at=0100

Thank you again. I shall be back in another 25 years with another query :-)

ian


Re: Replication and Deleted Files

2020-06-04 Thread Michael Menge

Hi,

Quoting Ian Batten via Info-cyrus :


Hi, long-time Cyrus user (25 years, I think), but stumped on this one…

I have an ancient Cyrus 2.5.11 on Solaris 11 installation I am  
trying to migrate off.  The strategy is to run rolling replication  
onto the new server (3.0.8-6+deb10u4 on Debian 10.4), and then point  
the DNS record at the new server.  With Covid, this has become more  
protracted than I would like, as I don’t want to accidentally mess  
up users who are isolating, so the replication has been running for  
some weeks.


The replication structure is old-server -> new-server -> (backup1,  
backup2) where backup1 and backup2 are configured as separate  
channels on new-server.  This has been running seemingly correctly  
for about three months now.


Today I decided to check all was well by using rsync -an to confirm  
that the replicas have everything that is on the master.  They do,  
in that using 


rsync -anvO --size-only  --exclude='cyrus.*'  
root@mail:/var/imap/partition1/user/ /var/imap/partition1/user 


where “mail” is the old server shows that there are no messages  
missing (--size-only because there’s some time slew in a few places,  
usually only of a few seconds, but up to a day in others).


However, reversing it:

rsync -anvO --size-only  --exclude='cyrus.*'  
/var/imap/partition1/user/ root@mail:/var/imap/partition1/user


Shows that there are a _lot_ of files on the replicas which are not  
on the master, some of them relating to recent deletions, but some  
of them seemingly quite old.  I am using:


delete_mode: delayed
expunge_mode: delayed

everywhere, running cyr_expire on the master but not on the  
replicas.  I have enough bandwidth that sync_reset and re-sync is  
realistic, but I’d rather not have to do that immediately prior to a  
cut-over.   These old files are a worry because if I ever had to  
reconstruct one of the mailboxes, presumably the deleted (I think)  
messages would all reappear.  Does anyone have any suggestions?




you also need to run cyr_expire on the "new_server" to remove the old
expunged mails and deleted folders.

I haven't used replication for backup, but I suspect that cyr_expire is
also required on your backup servers.
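
As an illustration, a hedged sketch of the cyrus.conf EVENTS entry that
would run the same expiry on the replica as on the master (retention
values taken from the cyr_expire invocation quoted elsewhere in this
thread; adjust to taste):

  EVENTS {
    # dups 7 days, expunged messages 3 days, deleted folders 3 days
    expire cmd="cyr_expire -E 7 -X 3 -D 3" at=0100
  }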






M.Menge    Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail:  
michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen



Replication and Deleted Files

2020-06-04 Thread Ian Batten via Info-cyrus


Hi, long-time Cyrus user (25 years, I think), but stumped on this one…

I have an ancient Cyrus 2.5.11 on Solaris 11 installation I am trying to 
migrate off.  The strategy is to run rolling replication onto the new server 
(3.0.8-6+deb10u4 on Debian 10.4), and then point the DNS record at the new 
server.  With Covid, this has become more protracted than I would like, as I 
don’t want to accidentally mess up users who are isolating, so the replication 
has been running for some weeks.

The replication structure is old-server -> new-server -> (backup1, backup2) 
where backup1 and backup2 are configured as separate channels on new-server.  
This has been running seemingly correctly for about three months now.
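
For readers setting up something similar, a hedged sketch of what the
two-channel fan-out on new-server might look like (channel names are
from this thread; hostnames are placeholders):

  # in imapd.conf on new-server:
  sync_log: 1
  sync_log_channels: backup1 backup2
  backup1_sync_host: backup1.example.org
  backup2_sync_host: backup2.example.org

  # in cyrus.conf on new-server, one rolling sync_client per channel:
  syncb1 cmd="sync_client -r -n backup1"
  syncb2 cmd="sync_client -r -n backup2"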

Today I decided to check all was well by using rsync -an to confirm that the 
replicas have everything that is on the master.  They do, in that using 

rsync -anvO --size-only  --exclude='cyrus.*' 
root@mail:/var/imap/partition1/user/ /var/imap/partition1/user 

where “mail” is the old server shows that there are no messages missing 
(--size-only because there’s some time slew in a few places, usually only of a 
few seconds, but up to a day in others).

However, reversing it:

rsync -anvO --size-only  --exclude='cyrus.*' /var/imap/partition1/user/ 
root@mail:/var/imap/partition1/user

Shows that there are a _lot_ of files on the replicas which are not on the 
master, some of them relating to recent deletions, but some of them seemingly 
quite old.  I am using:

delete_mode: delayed
expunge_mode: delayed

everywhere, running cyr_expire on the master but not on the replicas.  I have 
enough bandwidth that sync_reset and re-sync is realistic, but I’d rather not 
have to do that immediately prior to a cut-over.   These old files are a worry 
because if I ever had to reconstruct one of the mailboxes, presumably the 
deleted (I think) messages would all reappear.  Does anyone have any 
suggestions?

Thanks

ian

Re: Replication failed 3.0.5 -> 3.0.13

2020-04-29 Thread Olaf Frączyk



On 2020-04-29 16:47, Andrzej Kwiatkowski wrote:

Ok.

I was asking because of problems with low entropy on VMs causing
performance issues with big installations.

I didn't have this issue; however, as I said, my installation is really
small. If I needed more entropy I would think of using a hardware
entropy generator, e.g. on a PCIe card. They aren't super cheap but also
not too expensive. I think someone with good cryptography knowledge
could help you with this topic. I suppose the storage I/O can be a
bigger issue here.


I've also noticed that the replication documentation is a bit tricky,
and not every procedure/case is well documented.

Yes, me too ;) But if you hit a bigger problem, you have this list :) I
tried on a copy without active users to be safe.


Regards
AK




Re: Replication failed 3.0.5 -> 3.0.13

2020-04-29 Thread Andrzej Kwiatkowski
On 22.04.2020 at 10:19, Olaf Frączyk wrote:
> On 2020-04-22 09:16, Andrzej Kwiatkowski wrote:
>> On 20.04.2020 at 16:11, Olaf Frączyk wrote:
>>> Hi,
>>>
>>> I'm running 3.0.5.
>>>
>>> I want to migrate to a new machine. I set up cyrus-imapd 3.0.13.
>>>
>>> The replication started but it didn't transfer all mails.
>>>
>>> The store isn't big, 44 GB; about 24 GB was transferred.
>>>
>>> In the logs I see:
>>>
>> Olaf,
>>
>> Did you install 3.0.13 on a VM or on bare metal?
>>
>> Regards
>> AK
> 
> I have it installed in a VM - CentOS 7, now 8. This setup worked OK for
> many years. But I have only a few users, just with big mailboxes.
> 
> On SAS drives it works fine. The only longlock I saw was during the
> replication - I suppose transferring 20,000 big messages at once per
> mailbox could cause it. But in normal usage I don't see it.
> 
> The cause of my problem was my ignorance regarding how rolling 
> replication works in cyrus. It works OK now :)
> 
> Regards,
> 
> Olaf
> 
Ok.

I was asking because of problems with low entropy on VMs causing
performance issues with big installations.

I've also noticed that the replication documentation is a bit tricky,
and not every procedure/case is well documented.

Regards
AK


Re: Replication failed 3.0.5 -> 3.0.13

2020-04-22 Thread Olaf Frączyk

On 2020-04-22 09:16, Andrzej Kwiatkowski wrote:

On 20.04.2020 at 16:11, Olaf Frączyk wrote:

Hi,

I'm running 3.0.5.

I want to migrate to a new machine. I set up cyrus-imapd 3.0.13.

The replication started but it didn't transfer all mails.

The store isn't big, 44 GB; about 24 GB was transferred.

In the logs I see:


Olaf,

Did you install 3.0.13 on a VM or on bare metal?

Regards
AK


I have it installed in a VM - CentOS 7, now 8. This setup worked OK for
many years. But I have only a few users, just with big mailboxes.


On SAS drives it works fine. The only longlock I saw was during the
replication - I suppose transferring 20,000 big messages at once per
mailbox could cause it. But in normal usage I don't see it.


The cause of my problem was my ignorance regarding how rolling 
replication works in cyrus. It works OK now :)


Regards,

Olaf



Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-22 Thread Olaf Frączyk

Yes, Michael was right - it works properly now. (I hope ;).

OK. I'll put it in the DAEMON section - this way I have only one point 
where all stuff related to imap is started.


Thank you for the explanation. Regards,

Olaf

On 2020-04-22 02:36, ellie timoney wrote:

I think Michael's got this pretty much covered -- you need to disable the 
rolling replication for now, and then use sync_client -u (or if you're brave, 
sync_client -A) to get an initial sync of everything.  These two options work 
entire-user-at-a-time, so they should detect and fix the problems introduced by 
the partial rolling sync.

If you have mailboxes in a shared namespace (i.e. that are outside the user/ 
namespace), they won't be replicated by -u or -A.  You'll need to initially 
replicate those individually with sync_client -m.

Once you've got a complete initial sync done, you can use rolling replication 
to keep the replica up to date.  You can put the rolling 'sync_client -r' in 
the DAEMON section, so that Cyrus will restart it if it exits.  Or you could 
manage it from outside Cyrus, e.g. via systemd/initd if you prefer.

You cannot put sync_client in the SERVICES section.  The SERVICES section is 
for service processes (i.e. processes that listen on a socket and service 
client requests).  sync_client is a client, not a service.

Cheers,

ellie

On Wed, Apr 22, 2020, at 4:40 AM, Michael Menge wrote:

Quoting Olaf Frączyk :


Yes, at the beginning I was also wondering if an initial sync was
necessary, but there was nothing in the docs about it; something started
replicating and I simply assumed it does an initial resync. I'll try it
this evening. :)

Since you use replication: are sieve scripts replicated as well?
There is a -s option called sieve mode, but one needs to specify which
users' files are to be replicated, and it is written that it is mostly
for debugging.


Yes, sieve scripts are replicated.

The way rolling replication works is that every time something is changed
on the master, a "hint" is written to the sync log.

"MAILBOX user.foo.bar" indicates that the mailbox bar of the user foo
has changed, and the sync_client will sync this (and only this) folder.
There are other "hints", e.g. for changed subscriptions or a changed
sieve script.

But if the sieve script is not changed, sync_client in rolling replication
will not try to sync it. Using the -A or -u option will sync all/some
users, including all mailboxes, folder subscriptions and sieve scripts.

The -s option is only needed if you change a compiled sieve script, so
that the change is not logged in the replication log.





M.Menge    Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail:
michael.me...@zdv.uni-tuebingen.de
Wächterstraße 76
72074 Tübingen



Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-22 Thread Olaf Frączyk

On 2020-04-21 20:40, Michael Menge wrote:


Quoting Olaf Frączyk :



Yes, at the beginning I was also wondering if an initial sync was
necessary, but there was nothing in the docs about it; something started
replicating and I simply assumed it does an initial resync. I'll try it
this evening. :)


Since you use replication: are sieve scripts replicated as well?
There is a -s option called sieve mode, but one needs to specify which
users' files are to be replicated, and it is written that it is mostly
for debugging.




Yes, sieve scripts are replicated.

The way rolling replication works is that every time something is changed
on the master, a "hint" is written to the sync log.

"MAILBOX user.foo.bar" indicates that the mailbox bar of the user foo
has changed, and the sync_client will sync this (and only this) folder.
There are other "hints", e.g. for changed subscriptions or a changed
sieve script.

But if the sieve script is not changed, sync_client in rolling replication
will not try to sync it. Using the -A or -u option will sync all/some
users, including all mailboxes, folder subscriptions and sieve scripts.

The -s option is only needed if you change a compiled sieve script, so
that the change is not logged in the replication log.


I did the replication with -A. It looks like everything works properly now.
Sieve scripts were also transferred :)


It is good to know that if I change a compiled script manually it needs
additional attention.


Thank you very much for the help :)







 


M.Menge    Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail: 
michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen



Re: Replication failed 3.0.5 -> 3.0.13

2020-04-22 Thread Andrzej Kwiatkowski
On 20.04.2020 at 16:11, Olaf Frączyk wrote:
> Hi,
> 
> I'm running 3.0.5.
> 
> I want to migrate to a new machine. I set up cyrus-imapd 3.0.13.
> 
> The replication started but it didn't transfer all mails.
> 
> The store isn't big, 44 GB; about 24 GB was transferred.
> 
> In the logs I see:
> 
Olaf,

Did you install 3.0.13 on a VM or on bare metal?

Regards
AK


Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread ellie timoney
I think Michael's got this pretty much covered -- you need to disable the 
rolling replication for now, and then use sync_client -u (or if you're brave, 
sync_client -A) to get an initial sync of everything.  These two options work 
entire-user-at-a-time, so they should detect and fix the problems introduced by 
the partial rolling sync.  

If you have mailboxes in a shared namespace (i.e. that are outside the user/ 
namespace), they won't be replicated by -u or -A.  You'll need to initially 
replicate those individually with sync_client -m.

Once you've got a complete initial sync done, you can use rolling replication 
to keep the replica up to date.  You can put the rolling 'sync_client -r' in 
the DAEMON section, so that Cyrus will restart it if it exits.  Or you could 
manage it from outside Cyrus, e.g. via systemd/initd if you prefer.

You cannot put sync_client in the SERVICES section.  The SERVICES section is 
for service processes (i.e. processes that listen on a socket and service 
client requests).  sync_client is a client, not a service.
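
Putting those pieces together, a hedged sketch (user and mailbox names
are placeholders):

  # one-off initial sync, all users (or per user: sync_client -u user1):
  sync_client -A

  # shared mailboxes outside the user/ namespace, individually:
  sync_client -m shared.folder

  # afterwards, rolling replication in the DAEMON section of cyrus.conf:
  syncclient cmd="sync_client -r"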

Cheers,

ellie

On Wed, Apr 22, 2020, at 4:40 AM, Michael Menge wrote:
> 
> Quoting Olaf Frączyk :
> 
> >
> > Yes, at the beginning I was also wondering if an initial sync was
> > necessary, but there was nothing in the docs about it; something started
> > replicating and I simply assumed it does an initial resync. I'll try it
> > this evening. :)
> >
> > Since you use replication: are sieve scripts replicated as well?
> > There is a -s option called sieve mode, but one needs to specify which
> > users' files are to be replicated, and it is written that it is mostly
> > for debugging.
> >
> 
> Yes, sieve scripts are replicated.
> 
> The way rolling replication works is that every time something is changed
> on the master, a "hint" is written to the sync log.
> 
> "MAILBOX user.foo.bar" indicates that the mailbox bar of the user foo
> has changed, and the sync_client will sync this (and only this) folder.
> There are other "hints", e.g. for changed subscriptions or a changed
> sieve script.
> 
> But if the sieve script is not changed, sync_client in rolling replication
> will not try to sync it. Using the -A or -u option will sync all/some
> users, including all mailboxes, folder subscriptions and sieve scripts.
> 
> The -s option is only needed if you change a compiled sieve script, so
> that the change is not logged in the replication log.
> 
> 
> 
> 
> 
> M.Menge    Tel.: (49) 7071/29-70316
> Universität Tübingen   Fax.: (49) 7071/29-5912
> Zentrum für Datenverarbeitung  mail:  
> michael.me...@zdv.uni-tuebingen.de
> Wächterstraße 76
> 72074 Tübingen
> 
> 

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Michael Menge


Quoting Olaf Frączyk :



Yes, at the beginning I was also wondering if an initial sync was
necessary, but there was nothing in the docs about it; something started
replicating and I simply assumed it does an initial resync. I'll try it
this evening. :)


Since you use replication: are sieve scripts replicated as well?
There is a -s option called sieve mode, but one needs to specify which
users' files are to be replicated, and it is written that it is mostly
for debugging.




Yes, sieve scripts are replicated.

The way rolling replication works is that every time something is changed
on the master, a "hint" is written to the sync log.

"MAILBOX user.foo.bar" indicates that the mailbox bar of the user foo
has changed, and the sync_client will sync this (and only this) folder.
There are other "hints", e.g. for changed subscriptions or a changed
sieve script.

But if the sieve script is not changed, sync_client in rolling replication
will not try to sync it. Using the -A or -u option will sync all/some
users, including all mailboxes, folder subscriptions and sieve scripts.

The -s option is only needed if you change a compiled sieve script, so
that the change is not logged in the replication log.
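
As an illustration, a hedged example of what such hint lines look like
in the sync log (user and mailbox names are made up; the exact set of
hint types depends on the Cyrus version):

  MAILBOX user.foo.bar
  SUB foo user.foo.baz
  SIEVE foo
  USER foo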





M.Menge    Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail:  
michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen



Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk


On 2020-04-21 16:00, Michael Menge wrote:

Hi,

Quoting Olaf Frączyk :

I managed to get strace on both sides; however, it doesn't make me any
wiser - there is nothing obvious to me.


Additionally I see that replication works more or less for new
messages, but older ones are not processed.


I have several subfolders in my mailbox, some of them unreplicated.
If I change anything in a subfolder now, that folder is replicated,
but other subfolders remain unreplicated unless I change something
in them.


You are trying to use rolling replication.
For rolling replication, Cyrus logs the location and type of changes as
they occur; e.g. "MAILBOX navi.pl!user.olaf" indicates that something
has changed in the INBOX of the user o...@navi.pl.

Rolling replication will keep the master and replica in sync, but it
requires that there was an initial replication of all users.

You can use sync_client with -A or -u to do this synchronization
(see the manpage for details).


I think using rolling replication without the initial sync may be the
cause of the errors.

Stop the "sync_client -r" and wait for the initial sync to finish.


Yes, at the beginning I was also wondering if an initial sync was
necessary, but there was nothing in the docs about it; something started
replicating and I simply assumed it does an initial resync. I'll try it
this evening. :)


Since you use replication: are sieve scripts replicated as well? There
is a -s option called sieve mode, but one needs to specify which users'
files are to be replicated, and it is written that it is mostly for debugging.







 


M.Menge    Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail: 
michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen



Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Michael Menge

Hi,

Quoting Olaf Frączyk :

I managed to get strace on both sides; however, it doesn't make me any
wiser - there is nothing obvious to me.


Additionally I see that replication works more or less for new
messages, but older ones are not processed.


I have several subfolders in my mailbox, some of them unreplicated.
If I change anything in a subfolder now, that folder is replicated,
but other subfolders remain unreplicated unless I change something
in them.


You are trying to use rolling replication.
For rolling replication, Cyrus logs the location and type of changes as
they occur; e.g. "MAILBOX navi.pl!user.olaf" indicates that something
has changed in the INBOX of the user o...@navi.pl.

Rolling replication will keep the master and replica in sync, but it
requires that there was an initial replication of all users.

You can use sync_client with -A or -u to do this synchronization
(see the manpage for details).


I think using rolling replication without the initial sync may be the
cause of the errors.

Stop the "sync_client -r" and wait for the initial sync to finish.




M.Menge    Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail:  
michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen



Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk
I managed to get strace on both sides; however, it doesn't make me any
wiser - there is nothing obvious to me.


Additionally I see that replication works more or less for new messages,
but older ones are not processed.


I have several subfolders in my mailbox, some of them unreplicated. If I
change anything in a subfolder now, that folder is replicated, but
other subfolders remain unreplicated unless I change something in them.


Below is the strace; maybe someone can find something meaningful in it:

1: On replica

rename("/var/lib/imap/proc/6794.new", "/var/lib/imap/proc/6794") = 0
getpid()    = 6794
openat(AT_FDCWD, "/var/lib/imap/proc/6794.new", O_RDWR|O_CREAT|O_TRUNC, 
0666) = 12

fstat(12, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
write(12, "imap\tifs.local.navi.pl [192.168."..., 53) = 53
close(12)   = 0
rename("/var/lib/imap/proc/6794.new", "/var/lib/imap/proc/6794") = 0
rt_sigprocmask(SIG_BLOCK, [INT QUIT ALRM TERM CHLD], [], 8) = 0
pselect6(1, [0], NULL, NULL, {tv_sec=1920, tv_nsec=0}, {[], 8}) = 1 (in 
[0], left {tv_sec=1919, tv_nsec=99238})

rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, [INT QUIT ALRM TERM CHLD], [], 8) = 0
pselect6(1, [0], NULL, NULL, {tv_sec=1920, tv_nsec=0}, {[], 8}) = 1 (in 
[0], left {tv_sec=1919, tv_nsec=99439})

rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
read(0, "", 5)  = 0
unlink("/var/lib/imap/proc/6794")   = 0
openat(AT_FDCWD, "/dev/null", O_RDWR)   = 12
shutdown(0, SHUT_RD)    = -1 ENOTCONN (Transport 
endpoint is not connected)

dup2(12, 0) = 0
shutdown(1, SHUT_RD)    = -1 ENOTCONN (Transport 
endpoint is not connected)

dup2(12, 1) = 1
shutdown(2, SHUT_RD)    = -1 ENOTCONN (Transport 
endpoint is not connected)

dup2(12, 2) = 2
close(12)   = 0
close(11)   = 0
getpid()    = 6794
write(3, "\1\0\0\0\212\32\0\0", 8)  = 8
rt_sigaction(SIGALRM, {sa_handler=0x7f57e06da710, sa_mask=[], 
sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO, 
sa_restorer=0x7f57dc71a960}, NULL, 8) = 0
rt_sigaction(SIGQUIT, {sa_handler=0x7f57e06da710, sa_mask=[], 
sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO, 
sa_restorer=0x7f57dc71a960}, NULL, 8) = 0
rt_sigaction(SIGINT, {sa_handler=0x7f57e06da710, sa_mask=[], 
sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO, 
sa_restorer=0x7f57dc71a960}, NULL, 8) = 0
rt_sigaction(SIGTERM, {sa_handler=0x7f57e06da710, sa_mask=[], 
sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO, 
sa_restorer=0x7f57dc71a960}, NULL, 8) = 0
rt_sigaction(SIGUSR2, {sa_handler=0x7f57e06da710, sa_mask=[], 
sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO, 
sa_restorer=0x7f57dc71a960}, NULL, 8) = 0
rt_sigaction(SIGHUP, {sa_handler=0x7f57e06da710, sa_mask=[], 
sa_flags=SA_RESTORER|SA_RESTART|SA_SIGINFO, sa_restorer=0x7f57dc71a960}, 
NULL, 8) = 0

alarm(83)   = 0
fcntl(8, F_SETLKW, {l_type=F_WRLCK, l_whence=SEEK_SET, l_start=0, 
l_len=0}) = 0
stat("/usr/local/cyrus-3.0.13/libexec/imapd", {st_mode=S_IFREG|0755, 
st_size=1181120, ...}) = 0
rt_sigaction(SIGHUP, {sa_handler=0x7f57e06da710, sa_mask=[], 
sa_flags=SA_RESTORER|SA_SIGINFO, sa_restorer=0x7f57dc71a960}, NULL, 8) = 0

rt_sigprocmask(SIG_BLOCK, [INT QUIT ALRM TERM CHLD], [], 8) = 0
pselect6(5, [4], NULL, NULL, NULL, {[], 8}) = ? ERESTARTNOHAND (To be 
restarted if no handler)

--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
rt_sigreturn({mask=[INT QUIT ALRM TERM CHLD]}) = -1 EINTR (Interrupted 
system call)

rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigaction(SIGHUP, {sa_handler=0x7f57e06da710, sa_mask=[], 
sa_flags=SA_RESTORER|SA_RESTART|SA_SIGINFO, sa_restorer=0x7f57dc71a960}, 
NULL, 8) = 0
fcntl(8, F_SETLKW, {l_type=F_UNLCK, l_whence=SEEK_SET, l_start=0, 
l_len=0}) = 0

close(5)    = 0
munmap(0x7f57e14c1000, 16384)   = 0
close(7)    = 0
munmap(0x7f57e14bd000, 16384)   = 0
close(6)    = 0
unlink("/var/lib/imap/socket/idle.6794") = 0
close(9)    = 0
munmap(0x7f57e14a9000, 16384)   = 0
exit_group(0)   = ?
+++ exited with 0 +++


2. On master:

nanosleep({1, 0}, 0x7ffe12318060)   = 0
stat("/var/local/imapd_sync_stop", 0x7ffe12318280) = -1 ENOENT (No such 
file or directory)
stat("/var/lib/imap/sync/log-run", 0x7ffe12318190) = -1 ENOENT (No such 
file or directory)

stat("/var/lib/imap/sync/log", {st_mode=S_IFREG|0600, st_size=26, ...}) = 0
rename("/var/lib/imap/sync/log", "/var/lib/imap/sync/log-run") = 0
open("/var/lib/imap/sync/log-run", O

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Michael Menge


Quoting Olaf Frączyk :


Thank you for the telemetry hint :)

I don't use the syncserver - the replication is done via the IMAP port
on the replica side. I have no idea how to have strace spawned by the
cyrus master process. If I attach to imapd later using strace -p, I'm
afraid some info will already be lost.




You can try "/usr/bin/strace  /imapd  
" as cmd

or you can use prefork=1 in the service to prefok one imap process and connect
to this one before you start the sync_client.
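
For the prefork variant, a hedged sketch of the SERVICES entry in
cyrus.conf on the replica (service and port names are just whatever the
replica already uses for sync traffic):

  SERVICES {
    # prefork one imapd so a single long-lived process can be straced
    imap cmd="imapd" listen="imap" prefork=1
  }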



The syncserver is marked as deprecated in the docs, so I went with
the more modern option. Maybe that is where the problem lies ;)




I have been using syncserver since cyrus-imapd 2.3, so I had no reason to change it.


The funny thing is that from time to time the replication progresses  
a little. I don't like non-repetitive behavior ;)


Thanks again for the hints.

On 2020-04-21 14:13, Michael Menge wrote:

Hi,

Quoting Olaf Frączyk :


The current situation is:

1. Replica:

stopped and started the replica

no activity on replica - iotop and top show nothing

the only messages on replica is incoming connection from master

2. Master:

when I run sync_client -r I still get:

Apr 21 12:38:36 ifs sync_client[29518]: Reprocessing sync log file  
/var/lib/imap/sync/log-run
Apr 21 12:43:27 ifs sync_client[29518]: IOERROR: zero length  
response to MAILBOXES (idle for too long)
Apr 21 12:43:27 ifs sync_client[29518]: IOERROR: zero length  
response to RESTART (idle for too long)
Apr 21 12:43:27 ifs sync_client[29518]: Error in do_sync():  
bailing out! Bad protocol
Apr 21 12:43:27 ifs sync_client[29518]: Processing sync log file  
/var/lib/imap/sync/log-run failed: Bad protocol


3. There is 27 GB of about 45 GB replicated and there is no  
further progress


4. How to find out why the replica doesn't respond?



You can enable telemetry logging on the replica by creating a folder
/var/lib/imap/log/<username>, where <username> is the value of
sync_authname. If you give cyrus write permissions for this folder,
it will create log files for each process, recording what it received
from and sent to the sync_client, with timestamps.


Also, you can try to use strace on the syncserver process to figure
out which files it is accessing.




On 2020-04-21 11:18, Olaf Frączyk wrote:

I also found out that when I see on master:

Apr 21 11:12:38 ifs sync_client[27996]: IOERROR: zero length  
response to MAILBOXES (idle for too long)
Apr 21 11:12:38 ifs sync_client[27996]: IOERROR: zero length  
response to RESTART (idle for too long)
Apr 21 11:12:38 ifs sync_client[27996]: Error in do_sync():  
bailing out! Bad protocol
Apr 21 11:12:38 ifs sync_client[27996]: Processing sync log file  
/var/lib/imap/sync/log-run failed: Bad protocol


I also get on the replica:

Apr 21 11:12:38 skink1 imap[5775]: Connection reset by peer,  
closing connection


But I also see that, before it happens, there is no activity on either
the replica or the master for some time.


So maybe the imap server process is not recovering correctly in  
the longlock situation?


On 2020-04-21 11:07, Olaf Frączyk wrote:

Hi,

When I run sync_client -r on the master I see the following on  
the replica:


Apr 21 10:56:15 skink1 imap[5775]: mailbox: longlock  
navi.pl!user.olaf for 1.7 seconds
Apr 21 10:56:20 skink1 imap[5775]: mailbox: longlock  
navi.pl!user.piotr for 2.0 seconds
Apr 21 10:56:23 skink1 imap[5775]: mailbox: longlock  
navi.pl!user.olaf for 2.9 seconds
Apr 21 10:56:26 skink1 imap[5775]: mailbox: longlock  
navi.pl!user.piotr for 3.0 seconds


The mailboxes have several GB in the Inbox folder and several GB in
subfolders. The inbox folders have about 20,000 messages, the
subfolders up to 15,000.


Could it cause problems?


The longlock is normally not the problem. While one process has the lock
no other process can write to the mailbox, but on the replica there normally
is no other process that should access the mailbox.



Maybe I should move the sync_client from the START section to
SERVICES; it seems that it is not automatically restarted.



I haven't tried starting sync_client in the service section.
I start the sync_client via systemd.


On 2020-04-21 08:47, Michael Menge wrote:

Hi Olaf


Quoting Olaf Frączyk :


Hi,

I upgraded to 3.0.13 but it didn't help.

This time it copied about 18 GB.

in the logs I still see:

1 - inefficient replication

2 - IOERROR: zero length response to MAILBOXES (idle for too long)
IOERROR: zero length response to RESTART (idle for too long)
Error in do_sync(): bailing out! Bad protocol

But I have no idea what I can do next or why it fails.

Apr 21 02:24:46 ifs sync_client[12656]: IOERROR: zero length  
response to MAILBOXES (idle for too long)
Apr 21 02:24:46 ifs sync_client[12656]: IOERROR: zero length  
response to RESTART (idle for too long)
Apr 21 02:24:46 ifs sync_client[12656]: Error in do_sync():  
bailing out! Bad protocol


Do you see any errors on the syncserver side? The errors look like the
sync_client is waiting for a reply from the server.

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk

Thank you for the telemetry hint :)

I don't use the syncserver - the replication is done via the IMAP port on
the replica side. I have no idea how to have strace spawned by the cyrus
master process. If I attach to imapd later using strace -p, I'm afraid
some info will already be lost.


The syncserver is marked as deprecated in the docs, so I went with the
more modern option. Maybe that is where the problem lies ;)


The funny thing is that from time to time the replication progresses a 
little. I don't like non-repetitive behavior ;)


Thanks again for the hints.

On 2020-04-21 14:13, Michael Menge wrote:

Hi,

Quoting Olaf Frączyk :


The current situation is:

1. Replica:

stopped and started the replica

no activity on replica - iotop and top show nothing

the only messages on replica is incoming connection from master

2. Master:

when I run sync_client -r I still get:

Apr 21 12:38:36 ifs sync_client[29518]: Reprocessing sync log file 
/var/lib/imap/sync/log-run
Apr 21 12:43:27 ifs sync_client[29518]: IOERROR: zero length response 
to MAILBOXES (idle for too long)
Apr 21 12:43:27 ifs sync_client[29518]: IOERROR: zero length response 
to RESTART (idle for too long)
Apr 21 12:43:27 ifs sync_client[29518]: Error in do_sync(): bailing 
out! Bad protocol
Apr 21 12:43:27 ifs sync_client[29518]: Processing sync log file 
/var/lib/imap/sync/log-run failed: Bad protocol


3. There is 27 GB of about 45 GB replicated and there is no further 
progress


4. How to find out why the replica doesn't respond?



You can enable telemetry logging on the replica by creating a folder
/var/lib/imap/log/<username>, where <username> is the value of
sync_authname. If you give cyrus write permissions for this folder,
it will create log files for each process, recording what it received
from and sent to the sync_client, with timestamps.


Also, you can try to use strace on the syncserver process to figure out
which files it is accessing.




On 2020-04-21 11:18, Olaf Frączyk wrote:

I also found out that when I see on master:

Apr 21 11:12:38 ifs sync_client[27996]: IOERROR: zero length 
response to MAILBOXES (idle for too long)
Apr 21 11:12:38 ifs sync_client[27996]: IOERROR: zero length 
response to RESTART (idle for too long)
Apr 21 11:12:38 ifs sync_client[27996]: Error in do_sync(): bailing 
out! Bad protocol
Apr 21 11:12:38 ifs sync_client[27996]: Processing sync log file 
/var/lib/imap/sync/log-run failed: Bad protocol


I also get on the replica:

Apr 21 11:12:38 skink1 imap[5775]: Connection reset by peer, closing 
connection


But I also see that, before it happens, there is no activity on either
the replica or the master for some time.


So maybe the imap server process is not recovering correctly in the 
longlock situation?


On 2020-04-21 11:07, Olaf Frączyk wrote:

Hi,

When I run sync_client -r on the master I see the following on the 
replica:


Apr 21 10:56:15 skink1 imap[5775]: mailbox: longlock 
navi.pl!user.olaf for 1.7 seconds
Apr 21 10:56:20 skink1 imap[5775]: mailbox: longlock 
navi.pl!user.piotr for 2.0 seconds
Apr 21 10:56:23 skink1 imap[5775]: mailbox: longlock 
navi.pl!user.olaf for 2.9 seconds
Apr 21 10:56:26 skink1 imap[5775]: mailbox: longlock 
navi.pl!user.piotr for 3.0 seconds


The mailboxes have several GB in the Inbox folder and several GB in
subfolders. The inbox folders have about 20,000 messages, the
subfolders up to 15,000.


Could it cause problems?


The longlock is normally not the problem. While one process has the lock
no other process can write to the mailbox, but on the replica there
normally is no other process that should access the mailbox.



Maybe I should move the sync_client from the START section to SERVICES;
it seems that it is not automatically restarted.



I haven't tried starting sync_client in the service section.
I start the sync_client via systemd.


On 2020-04-21 08:47, Michael Menge wrote:

Hi Olaf


Quoting Olaf Frączyk :


Hi,

I upgraded to 3.0.13 but it didn't help.

This time it copied about 18 GB.

in the logs I still see:

1 - inefficient replication

2 - IOERROR: zero length response to MAILBOXES (idle for too long)
IOERROR: zero length response to RESTART (idle for too long)
Error in do_sync(): bailing out! Bad protocol

But I have no idea what I can do next or why it fails.

Apr 21 02:24:46 ifs sync_client[12656]: IOERROR: zero length 
response to MAILBOXES (idle for too long)
Apr 21 02:24:46 ifs sync_client[12656]: IOERROR: zero length 
response to RESTART (idle for too long)
Apr 21 02:24:46 ifs sync_client[12656]: Error in do_sync(): 
bailing out! Bad protocol


Do you see any errors on the syncserver side? The errors look like the
sync_client is waiting for a reply from the server.



 


M.Menge    Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail: 
michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Michael Menge

Hi,

Quoting Olaf Frączyk :


The current situation is:

1. Replica:

stopped and started the replica

no activity on replica - iotop and top show nothing

the only messages on replica is incoming connection from master

2. Master:

when I run sync_client -r I still get:

Apr 21 12:38:36 ifs sync_client[29518]: Reprocessing sync log file  
/var/lib/imap/sync/log-run
Apr 21 12:43:27 ifs sync_client[29518]: IOERROR: zero length  
response to MAILBOXES (idle for too long)
Apr 21 12:43:27 ifs sync_client[29518]: IOERROR: zero length  
response to RESTART (idle for too long)
Apr 21 12:43:27 ifs sync_client[29518]: Error in do_sync(): bailing  
out! Bad protocol
Apr 21 12:43:27 ifs sync_client[29518]: Processing sync log file  
/var/lib/imap/sync/log-run failed: Bad protocol


3. There is 27 GB of about 45 GB replicated and there is no further progress

4. How to find out why the replica doesn't respond?



You can enable telemetry logging on the replica by creating a folder
/var/lib/imap/log/<username>, where <username> is the value of
sync_authname. If you give cyrus write permissions for this folder,
it will create log files for each process, recording what it received
from and sent to the sync_client, with timestamps.
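
A hedged sketch of the steps, assuming the sync_authname is "repluser"
and Cyrus runs as the user cyrus:

  # on the replica:
  mkdir /var/lib/imap/log/repluser
  chown cyrus /var/lib/imap/log/repluser
  # each sync connection then writes a telemetry file named after its
  # pid, e.g. /var/lib/imap/log/repluser/12345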


Also, you can try to use strace on the syncserver process to figure out
which files it is accessing.




On 2020-04-21 11:18, Olaf Frączyk wrote:

I also found out that when I see on master:

Apr 21 11:12:38 ifs sync_client[27996]: IOERROR: zero length  
response to MAILBOXES (idle for too long)
Apr 21 11:12:38 ifs sync_client[27996]: IOERROR: zero length  
response to RESTART (idle for too long)
Apr 21 11:12:38 ifs sync_client[27996]: Error in do_sync(): bailing  
out! Bad protocol
Apr 21 11:12:38 ifs sync_client[27996]: Processing sync log file  
/var/lib/imap/sync/log-run failed: Bad protocol


I also get on the replica:

Apr 21 11:12:38 skink1 imap[5775]: Connection reset by peer,  
closing connection


But I also see that, before it happens, there is no activity on either
the replica or the master for some time.


So maybe the imap server process is not recovering correctly in the  
longlock situation?


On 2020-04-21 11:07, Olaf Frączyk wrote:

Hi,

When I run sync_client -r on the master I see the following on the replica:

Apr 21 10:56:15 skink1 imap[5775]: mailbox: longlock  
navi.pl!user.olaf for 1.7 seconds
Apr 21 10:56:20 skink1 imap[5775]: mailbox: longlock  
navi.pl!user.piotr for 2.0 seconds
Apr 21 10:56:23 skink1 imap[5775]: mailbox: longlock  
navi.pl!user.olaf for 2.9 seconds
Apr 21 10:56:26 skink1 imap[5775]: mailbox: longlock  
navi.pl!user.piotr for 3.0 seconds


The mailboxes have several GB in the Inbox folder and several GB in
subfolders. The inbox folders have about 20,000 messages, the
subfolders up to 15,000.


Could it cause problems?


The longlock is normally not the problem. While one process has the lock
no other process can write to the mailbox, but on the replica there normally
is no other process that should access the mailbox.



Maybe I should move the sync_client from the START section to
SERVICES; it seems that it is not automatically restarted.



I haven't tried starting sync_client in the service section.
I start the sync_client via systemd.


On 2020-04-21 08:47, Michael Menge wrote:

Hi Olaf


Quoting Olaf Frączyk :


Hi,

I upgraded to 3.0.13 but it didn't help.

This time it copied about 18 GB.

in the logs I still see:

1 - inefficient replication

2 - IOERROR: zero length response to MAILBOXES (idle for too long)
IOERROR: zero length response to RESTART (idle for too long)
Error in do_sync(): bailing out! Bad protocol

But I have no idea what I can do next or why it fails.

Apr 21 02:24:46 ifs sync_client[12656]: IOERROR: zero length  
response to MAILBOXES (idle for too long)
Apr 21 02:24:46 ifs sync_client[12656]: IOERROR: zero length  
response to RESTART (idle for too long)
Apr 21 02:24:46 ifs sync_client[12656]: Error in do_sync():  
bailing out! Bad protocol


Do you see any errors on the syncserver side? The errors look like the
sync_client is waiting for a reply from the server.




M.Menge    Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail:  
michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen



Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk

The current situation is:

1. Replica:

stopped and started the replica

no activity on replica - iotop and top show nothing

the only messages on the replica are the incoming connections from the master

2. Master:

when I run sync_client -r I still get:

Apr 21 12:38:36 ifs sync_client[29518]: Reprocessing sync log file 
/var/lib/imap/sync/log-run
Apr 21 12:43:27 ifs sync_client[29518]: IOERROR: zero length response to 
MAILBOXES (idle for too long)
Apr 21 12:43:27 ifs sync_client[29518]: IOERROR: zero length response to 
RESTART (idle for too long)
Apr 21 12:43:27 ifs sync_client[29518]: Error in do_sync(): bailing out! 
Bad protocol
Apr 21 12:43:27 ifs sync_client[29518]: Processing sync log file 
/var/lib/imap/sync/log-run failed: Bad protocol


3. 27 GB of about 45 GB has been replicated and there is no further progress

4. How can I find out why the replica doesn't respond?



Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus



Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk

I also found out that when I see on master:

Apr 21 11:12:38 ifs sync_client[27996]: IOERROR: zero length response to 
MAILBOXES (idle for too long)
Apr 21 11:12:38 ifs sync_client[27996]: IOERROR: zero length response to 
RESTART (idle for too long)
Apr 21 11:12:38 ifs sync_client[27996]: Error in do_sync(): bailing out! 
Bad protocol
Apr 21 11:12:38 ifs sync_client[27996]: Processing sync log file 
/var/lib/imap/sync/log-run failed: Bad protocol


I also get on the replica:

Apr 21 11:12:38 skink1 imap[5775]: Connection reset by peer, closing 
connection


But I also see that, before it happens, there is no activity on either
the replica or the master for some time.


So maybe the imap server process is not recovering correctly in the 
longlock situation?




Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus



Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Olaf Frączyk

Hi,

When I run sync_client -r on the master I see the following on the replica:

Apr 21 10:56:15 skink1 imap[5775]: mailbox: longlock navi.pl!user.olaf 
for 1.7 seconds
Apr 21 10:56:20 skink1 imap[5775]: mailbox: longlock navi.pl!user.piotr 
for 2.0 seconds
Apr 21 10:56:23 skink1 imap[5775]: mailbox: longlock navi.pl!user.olaf 
for 2.9 seconds
Apr 21 10:56:26 skink1 imap[5775]: mailbox: longlock navi.pl!user.piotr 
for 3.0 seconds


The mailboxes have several GB in the Inbox folder and several GB in
subfolders. The Inbox folders have about 20,000 messages, the subfolders
up to 15,000.


Could it cause problems?

Maybe I should move sync_client from the START section to SERVICES; it
seems that it is not automatically restarted.
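
(One possibility, if the Cyrus version supports it: recent cyrus.conf(5)
versions document a DAEMON section whose entries master(8) restarts when
they exit, unlike START; the binary path below is an assumption.)

   DAEMON {
     # master restarts DAEMON entries if they die
     syncclient   cmd="/usr/lib/cyrus/bin/sync_client -r"
   }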




Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus



Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-21 Thread Michael Menge

Hi Olaf


Quoting Olaf Frączyk :


Hi,

I upgraded to 3.0.13 but it didn't help.

This time it copied about 18 GB.

In the logs I still see:

1 - inefficient replication

2 - IOERROR: zero length response to MAILBOXES (idle for too long)
IOERROR: zero length response to RESTART (idle for too long)
Error in do_sync(): bailing out! Bad protocol

But I have no idea what I can do next or why it fails:

Apr 21 02:24:46 ifs sync_client[12656]: IOERROR: zero length  
response to MAILBOXES (idle for too long)
Apr 21 02:24:46 ifs sync_client[12656]: IOERROR: zero length  
response to RESTART (idle for too long)
Apr 21 02:24:46 ifs sync_client[12656]: Error in do_sync(): bailing  
out! Bad protocol


Do you see any errors on the syncserver side? The errors look like the
sync_client is waiting for a reply from the server.






M.Menge                                 Tel.: (49) 7071/29-70316
Universität Tübingen                    Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung           mail: michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Replication failed 3.0.5 -> 3.0.13, now 3.0.13->3.0.13

2020-04-20 Thread Olaf Frączyk

Hi,

I upgraded to 3.0.13 but it didn't help.

This time it copied about 18 GB.

In the logs I still see:

1 - inefficient replication

2 - IOERROR: zero length response to MAILBOXES (idle for too long)
IOERROR: zero length response to RESTART (idle for too long)
Error in do_sync(): bailing out! Bad protocol

But I have no idea what I can do next or why it fails:

Apr 21 02:24:46 ifs sync_client[12656]: IOERROR: zero length response to 
MAILBOXES (idle for too long)
Apr 21 02:24:46 ifs sync_client[12656]: IOERROR: zero length response to 
RESTART (idle for too long)
Apr 21 02:24:46 ifs sync_client[12656]: Error in do_sync(): bailing out! 
Bad protocol
Apr 21 02:24:46 ifs sync_client[12656]: Processing sync log file 
/var/lib/imap/sync/log-run failed: Bad protocol
Apr 21 03:46:10 ifs sync_client[1353]: auditlog: proxy 
sessionid= 
remote=
Apr 21 03:53:15 ifs sync_client[1353]: inefficient replication (32058 > 
56) navi.pl!user.olaf

Apr 21 03:59:41 ifs sync_client[1353]: sync_client RESTART succeeded
Apr 21 03:59:41 ifs sync_client[1353]: auditlog: proxy 
sessionid= 
remote=
Apr 21 04:01:49 ifs sync_client[1353]: inefficient replication (56949 > 
432) navi.pl!user.piotr

Apr 21 04:10:38 ifs sync_client[1353]: sync_client RESTART succeeded
Apr 21 04:10:38 ifs sync_client[1353]: auditlog: proxy 
sessionid= 
remote=

Apr 21 04:20:39 ifs sync_client[1353]: sync_client RESTART succeeded
Apr 21 04:20:39 ifs sync_client[1353]: auditlog: proxy 
sessionid= 
remote=
Apr 21 04:30:34 ifs sync_client[1353]: IOERROR: zero length response to 
MAILBOXES (idle for too long)
Apr 21 04:30:34 ifs sync_client[1353]: IOERROR: zero length response to 
RESTART (idle for too long)
Apr 21 04:30:34 ifs sync_client[1353]: Error in do_sync(): bailing out! 
Bad protocol


Regards,

Olaf




Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus



Replication failed 3.0.5 -> 3.0.13

2020-04-20 Thread Olaf Frączyk

Hi,

I'm running 3.0.5.

I want to migrate to a new machine. I set up cyrus-imapd 3.0.13.

The replication started but it didn't transfer all mails.

The store isn't big (44 GB); about 24 GB was transferred.

In the logs I see:

Apr 20 14:54:03 ifs sync_client[24239]: couldn't authenticate to backend 
server: authentication failure
Apr 20 14:54:13 ifs sync_client[24239]: connect(skink1.navi.pl) failed: 
Connection timed out
Apr 20 14:58:13 ifs sync_client[24239]: auditlog: proxy 
sessionid= 
remote=

Apr 20 15:12:41 ifs sync_client[24239]: sync_client RESTART succeeded
Apr 20 15:12:42 ifs sync_client[24239]: auditlog: proxy 
sessionid= 
remote=
Apr 20 15:12:46 ifs sync_client[24239]: inefficient replication (39865 > 
48) navi.pl!user.info
Apr 20 15:15:31 ifs sync_client[24239]: inefficient replication (32058 > 
56) navi.pl!user.olaf
Apr 20 15:18:50 ifs sync_client[24239]: inefficient replication (15216 > 
13867) navi.pl!user.ania
Apr 20 15:19:18 ifs sync_client[24239]: inefficient replication (56949 > 
432) navi.pl!user.piotr

Apr 20 15:25:11 ifs sync_client[24239]: sync_client RESTART succeeded
Apr 20 15:25:12 ifs sync_client[24239]: auditlog: proxy 
sessionid= 
remote=
Apr 20 15:29:09 ifs sync_client[24239]: IOERROR: zero length response to 
MAILBOXES (idle for too long)
Apr 20 15:29:09 ifs sync_client[24239]: IOERROR: zero length response to 
RESTART (idle for too long)
Apr 20 15:29:09 ifs sync_client[24239]: Error in do_sync(): bailing out! 
Bad protocol
Apr 20 15:29:09 ifs sync_client[24239]: Processing sync log file 
/var/lib/imap/sync/log-run failed: Bad protocol


Thereafter I also ran sync_client manually:

/usr/local/cyrus-3.0.5/sbin/sync_client -r

but nothing changed, in the log appeared:

Apr 20 15:37:23 ifs sync_client[27304]: Reprocessing sync log file 
/var/lib/imap/sync/log-run


The directory /var/lib/imap/sync/ on the master is empty.

The directory /var/spool/imap/sync. on the replica is also empty.

What can I try to get it working? Is there a serious problem with
replication in 3.0.5? I wanted to avoid an upgrade on the old machine.


Best regards,

Olaf Frączyk



Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Replication - current status and how to do failover

2020-04-07 Thread Bron Gondwana


On Sun, Apr 5, 2020, at 00:45, Olaf Frączyk wrote:
> Hello,
> 
> 1. Is master-master replication currently possible (maybe in 3.2)? Is it OK 
> to sync them two-way?

No, not really. It'll mostly be fine, but it doesn't (yet) handle folder 
create/rename/delete safely.

> If yes - how to set up such config?
> 
> 2. If master-master is impossible, is there any guide on how to set up 
> failover from master to slave and possibly back? If split-brain happens, 
> is there an easy recovery from such a state?

The way we do it at Fastmail is with an nginx proxy in front which knows which 
one is the master. For a clean shutdown, we shut down the master, then run 
sync_client -r -f with the log file (if anything was unreplicated) to make sure 
it's up to date, then shut down both and bring them up with the config pointing 
the replication the other way.

For a case where the master crashed hard, we switch the replica to be master by 
changing the config (with a restart again), then bring the old master back up 
and run the sync_client steps again like above to switch back, so all new 
changes from the regular replica are back on the regular master. Then we bring 
up the regular master as master again, and run sync_client -A from there to 
replicate all remaining changes. That mostly works.
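
In command form, the clean switchover amounts to roughly the following (the
paths and the sync log file name are assumptions, depending on sync_log
settings):

   # clean switchover: on the old master, after stopping its cyrus master
   sync_client -r -f /var/lib/imap/sync/log-run   # flush unreplicated events
   # then swap the replication config on both machines, start the new master

   # after a hard crash: once the old master is demoted and caught up again,
   # run on the restored master to replicate all remaining changes
   sync_client -A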

The plan in 3.4+ is to use the mailbox tombstone records to get the 
create/rename/delete to the same level of split-brain safety as the UIDs inside 
the mailbox have.

Cheers,

Bron.
-- 
 Bron Gondwana
 br...@fastmail.fm


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Replication - current status and how to do failover

2020-04-04 Thread Olaf Frączyk

Hello,

1. Is master-master replication currently possible (maybe in 3.2)? Is it OK 
to sync them two-way?


If yes - how to set up such config?

2. If master-master is impossible, is there any guide on how to set up 
failover from master to slave and possibly back? If split-brain happens, 
is there an easy recovery from such a state?


Best regards,

Olaf


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Re: Public calendars and addressbooks (was RE: Backup compaction optimization in a block-level replication environment)

2019-11-21 Thread ellie timoney
On Wed, Nov 20, 2019, at 4:41 PM, Deborah Pickett wrote:
> > I'm curious how these are working for you, or what sort of configuration
> > and workflows leads to having #calendars and #addressbooks as top-level
> > shared mailboxes?  I've only very recently started learning how our DAV bits
> > work (they have previously been black-boxes for me), and so far have only
> > seen these existing in user accounts.  Maybe this is a separate thread
> > though.
> 
> We used to use public calendars in Exchange (they call them Public Folders)
> for, among other things, a read-only catalogue of who in the office is on
> leave on any given day.  Some of our branch offices also had shared contact
> lists for phone numbers likely to be needed by all people at the local site
> (the local takeaway, the local hardware store, the local clinic...).
> Exchange public folders are almost an exact analogue to shared-namespace
> mailboxes in Cyrus.
> 
> Once I learned the undocumented magic for creating public calendars and
> address books in Cyrus (@karagian on Github posted it:
> https://github.com/cyrusimap/cyrus-imapd/issues/2373#issuecomment-415738943)
> it's worked very well.  My Outlook users use the free Caldav Synchronizer
> plugin (https://caldavsynchronizer.org/)  to sync selected address books and
> calendars to their clients.  I have a Perl script that queries our Active
> Directory server over LDAP to set ACLs on the folders.

That's interesting, thanks!

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Public calendars and addressbooks (was RE: Backup compaction optimization in a block-level replication environment)

2019-11-19 Thread Deborah Pickett
> I'm curious how these are working for you, or what sort of configuration
> and workflows leads to having #calendars and #addressbooks as top-level
> shared mailboxes?  I've only very recently started learning how our DAV bits
> work (they have previously been black-boxes for me), and so far have only
> seen these existing in user accounts.  Maybe this is a separate thread
> though.

We used to use public calendars in Exchange (they call them Public Folders)
for, among other things, a read-only catalogue of who in the office is on
leave on any given day.  Some of our branch offices also had shared contact
lists for phone numbers likely to be needed by all people at the local site
(the local takeaway, the local hardware store, the local clinic...).
Exchange public folders are almost an exact analogue to shared-namespace
mailboxes in Cyrus.

Once I learned the undocumented magic for creating public calendars and
address books in Cyrus (@karagian on Github posted it:
https://github.com/cyrusimap/cyrus-imapd/issues/2373#issuecomment-415738943)
it's worked very well.  My Outlook users use the free Caldav Synchronizer
plugin (https://caldavsynchronizer.org/)  to sync selected address books and
calendars to their clients.  I have a Perl script that queries our Active
Directory server over LDAP to set ACLs on the folders.

-- 
Deborah Pickett
System Administrator
Polyfoam Australia Pty Ltd



Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Re: Backup compaction optimization in a block-level replication environment

2019-11-19 Thread ellie timoney
On Wed, Nov 20, 2019, at 11:06 AM, Deborah Pickett wrote:
> On 2019-11-20 10:03, ellie timoney wrote:
>>> foo also includes "#calendars" and "#addressbooks" on my server so there 
>>> are weird characters to deal with.
>>> 
>> Now that's an interesting detail to consider.

>> 
> I should restate my original message because I'm being fast and loose with 
> the meaning of "contains": two of the values for foo on my server are 
> "#calendars" and "#addressbooks". In other words, there are top-level public 
> mailboxes #calendars and #addressbooks which themselves contain sub-calendars 
> and sub-addressbooks. It never occurred to me to have calendar or contacts 
> folders deeper in the normal shared folder namespace, though it has evidently 
> occurred to you.


Oh, I see how I misread that! And... that also complicates things for me, I 
think (well, it's a possibility I hadn't even considered).

I'm curious how these are working for you, or what sort of configuration and 
workflows leads to having #calendars and #addressbooks as top-level shared 
mailboxes? I've only very recently started learning how our DAV bits work (they 
have previously been black-boxes for me), and so far have only seen these 
existing in user accounts. Maybe this is a separate thread though.

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Backup compaction optimization in a block-level replication environment

2019-11-19 Thread Deborah Pickett

On 2019-11-20 10:03, ellie timoney wrote:

>> foo also includes "#calendars" and "#addressbooks" on my server so there
>> are weird characters to deal with.
>
> Now that's an interesting detail to consider.

I should restate my original message because I'm being fast and loose 
with the meaning of "contains": two of the values for foo on my server 
are "#calendars" and "#addressbooks".  In other words, there are 
top-level public mailboxes #calendars and #addressbooks which themselves 
contain sub-calendars and sub-addressbooks.  It never occurred to me to 
have calendar or contacts folders deeper in the normal shared folder 
namespace, though it has evidently occurred to you.


In any case, I would only use hypothetical_depth = 1 so this wouldn't be 
an issue for my backups.


--
Deborah Pickett
System Administrator
Polyfoam Australia Pty Ltd
T: +61 (3) 9794 8320 | F: +61 (3) 9791 1222 | M: +61 408 962 109
E: debb...@polyfoam.com.au | W: www.polyfoam.com.au

Proudly Australian owned and operated for over 30 years

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Backup compaction optimization in a block-level replication environment

2019-11-19 Thread ellie timoney
On Tue, Nov 19, 2019, at 9:38 AM, Deborah Pickett wrote:
> > Food for thought.  Maybe instead of having one "%SHARED" backup, having one 
> > "%SHARED.foo" backup per top-level shared folder would be a better 
> > implementation?  I haven't seen shared folders used much in practice, so 
> > it's interesting to hear about it.
> >
> > Looking at your own data, if you had one "%SHARED.foo" backup per top level 
> > shared folder, would they be roughly user-sized pieces, or still too big?  
> > If too big, how deep would you need to go down the tree until the worst 
> > offenders are a manageable size?  (If I make it split shared folders like 
> > this, maybe "how-deep-to-split-shared-folders" needs to be a configuration 
> > parameter, because I guess it'll vary from installation to installation.)
> >
> For my data, %SHARED.foo would be the perfect granularity level. Each 
> foo is a shared email address like "sales" or "accounts" and it gets 
> about as much traffic as a user account does.  (Two months ago when we 
> were on Exchange, they _were_ user accounts.)

Ah yep, that makes sense!
 
> foo also includes "#calendars" and "#addressbooks" on my server so there 
> are weird characters to deal with.

Now that's an interesting detail to consider.  I think, with a hypothetical 
depth setting, I would treat any level that contains '#directories' as being 
"too deep" for splitting, regardless of the depth setting, because at that 
point we're looking at things that I guess we expect to belong together.  Like, 
if hypothetical_depth is 3, but foo.#calendars exists, then I think we'd want 
to treat the entirety of foo as a single backup (as if hypothetical_depth were 
1), regardless of what else is deeper in there.  I need to think about this 
more.

I'm gonna have a go at implementing this (I've opened 
https://github.com/cyrusimap/cyrus-imapd/issues/2915) but I'll step through it 
one level of complication at a time.

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Cyrus 3, automation and master-master replication and mailbox movement

2019-11-19 Thread Egoitz Aurrekoetxea via Info-cyrus
Good morning,

I have been looking into how we could automate the master -> slave and
slave -> master transition. I thought one possibility could be having both
servers configured in master mode, each replicating to the other. I know this
was unsupported some time ago; I have tried it now in a testing env and it
seems it still fails… Could any Cyrus guru confirm that this really does not
work (just to avoid driving myself crazy trying to find the config issue)?

By the way, I'm also in the process of automating mailbox movements… from
partition to partition, from server to server… I had a question about renaming
mailboxes to move them between Cyrus partitions. As a safety measure, prior to
launching a renm operation (renm user/a...@bb.es user/a...@bb.es
different-partition) we block any kind of access to that mailbox (even mail
delivery), and I was wondering if that is really necessary nowadays… or does
Cyrus hold those locks on its own? I mean, does Cyrus take care on its own of
avoiding mailbox corruption during a renm of a mailbox to a different
partition?
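
For reference, such a rename to a different partition in cyradm looks roughly
like this (the mailbox and partition names here are made-up examples):

   cyradm --user admin imap.example.com
   imap.example.com> renm user/someone@example.com user/someone@example.com fastpart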

Just one more question… when we move a mailbox from one partition to another
(a renm to a different partition), we usually do:

- stop replication between master/slave (as a safety measure, to have a very
last fallback if the renm goes wrong; promoting the slave to master would
still have the mailbox from before the failed rename intact)
- renm on the master
- after a successful rename, delete the mailboxes from the slave
- sync each of the master mailboxes to the slave (the dm is done on the slave
so that the mailboxes are resynced from the master to the slave, into their
new location on the slave)
- start replication again…

Are all these steps really necessary? What do you think about it?

Best regards,
Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Backup compaction optimization in a block-level replication environment

2019-11-18 Thread Deborah Pickett

> Food for thought.  Maybe instead of having one "%SHARED" backup, having one 
> "%SHARED.foo" backup per top-level shared folder would be a better 
> implementation?  I haven't seen shared folders used much in practice, so 
> it's interesting to hear about it.
>
> Looking at your own data, if you had one "%SHARED.foo" backup per top level 
> shared folder, would they be roughly user-sized pieces, or still too big?  
> If too big, how deep would you need to go down the tree until the worst 
> offenders are a manageable size?  (If I make it split shared folders like 
> this, maybe "how-deep-to-split-shared-folders" needs to be a configuration 
> parameter, because I guess it'll vary from installation to installation.)

For my data, %SHARED.foo would be the perfect granularity level. Each 
foo is a shared email address like "sales" or "accounts" and it gets 
about as much traffic as a user account does.  (Two months ago when we 
were on Exchange, they _were_ user accounts.)


foo also includes "#calendars" and "#addressbooks" on my server so there 
are weird characters to deal with.


--
Deborah Pickett
System Administrator
Polyfoam Australia Pty Ltd

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Cyrus doesn't preserve hard-links on replication

2019-11-18 Thread Adrien Remillieux
Sorry for the multiple emails...
The option "provide_uuid=1" I found in my last message seems to be
unrecognized by cyrus now (it's probably on by default). It's probably only
useful for new messages anyway.

So I'm back to square one and my google-fu failed me. If someone knows how
to solve that problem, any help would be greatly appreciated.

Cheers,
Adrien
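
For what it's worth, one sketch of the kind of filtering discussed in this
thread, assuming rdfind's results.txt format and that only the numbered
message files (not the cyrus.* metadata) should be linked; it only prints ln
commands, so the output can be reviewed before anything is changed:

   # fields per rdfind: duptype id depth size device inode priority name
   awk '$1 == "DUPTYPE_FIRST_OCCURRENCE" { keep[$2] = $NF }
        $1 == "DUPTYPE_WITHIN_SAME_TREE" && $NF ~ /\/[0-9]+\.$/ {
            id = -$2
            if (id in keep) printf "ln -f %s %s\n", keep[id], $NF
        }' results.txt > relink.sh   # inspect relink.sh, then: sh relink.sh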


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Cyrus doesn't preserve hard-links on replication

2019-11-18 Thread Adrien Remillieux
By shuffling the keywords in my google searches I was able to find this:
https://lists.andrew.cmu.edu/pipermail/info-cyrus/2006-March/021405.html

Apparently there are a few settings to set to avoid copying the same message
multiple times. This would be a nice addition to the cyrus docs on
replication!


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Backup compaction optimization in a block-level replication environment

2019-11-17 Thread ellie timoney
> Related: I had to apply the patch described in
> (https://www.mail-archive.com/info-cyrus@lists.andrew.cmu.edu/msg47320.html),
>  "backupd IOERROR reading backup files larger than 2GB", because during
> initial population of my backup, chunks tended to be multiple GB in size
> (my %SHARED user backup is 23 GB, compressed).  Will this patch be
> merged to a main line?

Those were on master but I'm not sure why I didn't cherry-pick them back to 
3.0. Anyway, I've done that now; they'll be in the next release.

> Progress report: I started with very large chunks (minimum 64 MB,
> maximum 1024 MB) and a threshold of 8 chunks but I found that compaction
> was running every time, even on a backup file that hardly changed.  Not
> certain why this would be; my current theory is that in chunks that size
> there is almost always some benefit to compacting, so the threshold is
> passed easily.  There were 41 chunks in my %SHARED backup.

Hmm.  Yeah, the threshold is "number of chunks that would benefit from 
compaction", so the larger the chunks, the more likely any given chunk is to 
benefit from compaction, and the more likely you are to trip that threshold.

On Sat, Nov 16, 2019, at 12:10 PM, Deborah Pickett wrote:
> Further progress report: with small chunks, compaction takes about 15 
> times longer.  It's almost as if there is an O(n^2) complexity 
> somewhere, looking at the rate that the disk file grows.  (Running perf 
> on a compaction suggests that 90% of the time ctl_backups is doing 
> compression, decompression, or calculating SHA1 hashes.) So I'm going 
> back to large-ish chunks again.  Current values:
> 
> backup_compact_minsize: 1024
> backup_compact_maxsize: 65536
> backup_compact_work_threshold: 10
> 
> The compression ratio was hardly any different (less than 1%) with many 
> small chunks compared with huge chunks.

That's really interesting to hear.  It sounds like maybe the startup and 
cleanup of a gzip stream are more expensive than the compression/decompression 
parts, so it's cheaper to aim for fewer larger chunks than many smaller ones.  

zlib provides a range of 0-9 (default: 6) for whether to prioritise speed (0) 
or size (9) in its compression algorithm, but the backup system isn't using it 
in a way that exposes this as a tunable option (it's just letting it use the 
default by default).  With enough data it might be interesting to make it 
tunable and see what impact it has, but I don't think we're at a stage of 
needing to care this much yet.

> Setting the work threshold to a number greater than 1 is only helping a 
> bit.  I think that the huge disparity between my smaller and larger user 
> backups is hurting me here.  Whatever I set the threshold to, it is 
> going to be simultaneously too large for most users, and too small for 
> the huge %SHARED user.

Food for thought.  Maybe instead of having one "%SHARED" backup, having one 
"%SHARED.foo" backup per top-level shared folder would be a better 
implementation?  I haven't seen shared folders used much in practice, so it's 
interesting to hear about it.

Looking at your own data, if you had one "%SHARED.foo" backup per top level 
shared folder, would they be roughly user-sized pieces, or still too big?  If 
too big, how deep would you need to go down the tree until the worst offenders 
are a manageable size?  (If I make it split shared folders like this, maybe 
"how-deep-to-split-shared-folders" needs to be a configuration parameter, 
because I guess it'll vary from installation to installation.)

> Confession time: having inspected the source of ctl_backups, I admit to 
> misunderstanding what happens to chunks when compaction is triggered.  I 
> thought that each chunk was examined, and either the chunk is compacted, 
> or it is not (and the bytes in the chunk are copied from old to new 
> unchanged).  But compaction happens to the entire file: every chunk in 
> turn is inflated to /tmp and then deflated again from /tmp, minus any 
> messages that may have expired, so the likelihood of the compressed byte 
> stream being the same is slim.  That will confound the rsync rolling 
> checksum algorithm and the entire backup file will likely have to be 
> transmitted again.

Yeah, these files are append-only even within the backup system's own tooling.  
Compacting a backup file to be smaller is literally re-streaming it to a new 
file, minus bits that aren't needed anymore, and then (if all goes well) 
renaming it back over the original.  It's meant to be atomic -- either it 
works, and you get the updated file, or something goes wrong, and the file is 
unchanged.  It's never modified in place!  (There's a note about this somewhere 
in the documentation, with regard to needing enough free disk space to write 
the second file in order to compact the first.)
 
> With that in mind I've decided that I'll make compaction a weekend-only 
> task, take it out of cyrus.conf EVENTS and put a weekly cron/systemd job 
> in place.  

Cyrus doesn't preserve hard-links on replication

2019-11-17 Thread Adrien Remillieux
Hello,

I set up replication between two cyrus servers (master runs 2.5.10 and
slave 3.0.8) with plans to decommission the old server once everything is
working. I noticed that the mail spool takes 950GB instead of ~300GB on the
old server. I suspected the hardlinks for message deduplication weren't
recreated so I ran rdfind on the mail spool and the tool found many
identical files. Is there a cyrus tool to recreate the hardlinks? I looked
at the admin tools but I didn't find anything. Rdfind should work but it
also matched metadata such as cyrus.annotations for example. So I need to
go through the 600 MB dry-run log file to exclude unwanted files.

99% of the log file looks like this: are those safe to hardlink?

# duptype id depth size device inode priority name
DUPTYPE_FIRST_OCCURRENCE 1030757 3 842 2065 330967752 3
/var/spool/cyrus/mail/c/user/user1/15384.
DUPTYPE_WITHIN_SAME_TREE -1030757 3 842 2065 317750405 3
/var/spool/cyrus/mail/m/user/user2/19262.
DUPTYPE_WITHIN_SAME_TREE -1030757 3 842 2065 323550741 3
/var/spool/cyrus/mail/r/user/user3/96106.
DUPTYPE_WITHIN_SAME_TREE -1030757 3 842 2065 316733834 3
/var/spool/cyrus/mail/m/user/user4/41168.
DUPTYPE_WITHIN_SAME_TREE -1030757 3 842 2065 314623377 3
/var/spool/cyrus/mail/m/user/user5/25377.
DUPTYPE_WITHIN_SAME_TREE -1030757 3 842 2065 316201219 3
/var/spool/cyrus/mail/m/user/user6/49119.
DUPTYPE_WITHIN_SAME_TREE -1030757 3 842 2065 321991878 3
/var/spool/cyrus/mail/q/user/user7/46487.

Cheers,
Adrien

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Backup compaction optimization in a block-level replication environment

2019-11-15 Thread Deborah Pickett
Further progress report: with small chunks, compaction takes about 15 
times longer.  It's almost as if there is an O(n^2) complexity 
somewhere, looking at the rate that the disk file grows.  (Running perf 
on a compaction suggests that 90% of the time ctl_backups is doing 
compression, decompression, or calculating SHA1 hashes.) So I'm going 
back to large-ish chunks again.  Current values:


backup_compact_minsize: 1024
backup_compact_maxsize: 65536
backup_compact_work_threshold: 10

The compression ratio was hardly any different (less than 1%) with many 
small chunks compared with huge chunks.


Setting the work threshold to a number greater than 1 is only helping a 
bit.  I think that the huge disparity between my smaller and larger user 
backups is hurting me here.  Whatever I set the threshold to, it is 
going to be simultaneously too large for most users, and too small for 
the huge %SHARED user.


Confession time: having inspected the source of ctl_backups, I admit to 
misunderstanding what happens to chunks when compaction is triggered.  I 
thought that each chunk was examined, and either the chunk is compacted, 
or it is not (and the bytes in the chunk are copied from old to new 
unchanged).  But compaction happens to the entire file: every chunk in 
turn is inflated to /tmp and then deflated again from /tmp, minus any 
messages that may have expired, so the likelihood of the compressed byte 
stream being the same is slim.  That will confound the rsync rolling 
checksum algorithm and the entire backup file will likely have to be 
transmitted again.


With that in mind I've decided that I'll make compaction a weekend-only 
task, take it out of cyrus.conf EVENTS and put a weekly cron/systemd job 
in place.  During the week backups will be append-only, to keep rsync 
happy.  At weekends, compaction will combine the last week of small 
chunks, and I've got all weekend to transmit the hundred GB of backup 
files offsite.
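
As a sketch, the cron version of that could be as simple as the following
(the binary path and user are assumptions for this install):

   # /etc/cron.d/cyrus-backup-compact -- weekly, early Saturday morning
   30 2 * * 6  cyrus  /usr/lib/cyrus/bin/ctl_backups compact -A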


--
Deborah Pickett
System Administrator
Polyfoam Australia Pty Ltd


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Backup compaction optimization in a block-level replication environment

2019-11-14 Thread Deborah Pickett

On 2019-11-11 11:10, ellie timoney wrote:

>>> This setting might be helpful:
>>
>> Thanks, I saw that setting but didn't really think through how it would
>> help me.  I'll experiment with it and report back.
>
> That would be great, thanks!


Progress report: I started with very large chunks (minimum 64 MB, 
maximum 1024 MB) and a threshold of 8 chunks but I found that compaction 
was running every time, even on a backup file that hardly changed.  Not 
certain why this would be; my current theory is that in chunks that size 
there is almost always some benefit to compacting, so the threshold is 
passed easily.  There were 41 chunks in my %SHARED backup.


I'm now trying very small chunks (no minimum size, maximum 128 kB) with 
varying thresholds.  This is probably _too_ small (smaller than even 
some messages).  I'll bisect the difference and see if there is a sweet 
spot.


I've settled on rsync as the transport protocol for sending the backups 
off site.  Its rolling-checksum algorithm means that even if a chunk 
moves within the file it can still be transmitted efficiently, saving me 
from needing all that fragmentation guff I started this thread with.
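
As a sketch (host and paths invented; rsync's delta-transfer algorithm is the
default over a remote connection, and --partial keeps interrupted transfers
resumable):

   rsync -av --partial /var/lib/imap/backup/ backup@offsite.example.com:/srv/cyrus-backups/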


Related: I had to apply the patch described in 
(https://www.mail-archive.com/info-cyrus@lists.andrew.cmu.edu/msg47320.html), 
"backupd IOERROR reading backup files larger than 2GB", because during 
initial population of my backup, chunks tended to be multiple GB in size 
(my %SHARED user backup is 23 GB, compressed).  Will this patch be 
merged to a main line?


--
Deborah Pickett
System Administrator
Polyfoam Australia Pty Ltd


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Backup compaction optimization in a block-level replication environment

2019-11-10 Thread ellie timoney
On Fri, Nov 8, 2019, at 1:35 PM, Deborah Pickett wrote:
> I didn't know if copying 
> the filesystem of a (paused) Cyrus replica was a supported way of 
> backing up, but now I do.

Yeah, as long as there are no cyrus processes running, the database/index files 
can just be copied about and won't be corrupted along the way.  This kind of 
backup is useful for a full system restore, but the "help I deleted an 
important email and then emptied my trash and then expunged, and now I need it 
back" type of recoveries... I guess you copy the message file back to the 
partition, reconstruct, and it comes back as a new message (unread, no flags, 
etc)?

You would need to be careful of the window between delivery of a message, 
replication to the replica, and deletion of the message (and replication of the 
deletion), to ensure you get a backup of the state where the message existed.  
I *think* delayed_delete and a long cyr_expire -D time takes care of this, but 
I'm not certain, so please test it before you rely on it.  Also (maybe 
obviously) the need to keep point in time snapshots for as long as your 
recovery policy dictates, and not just delete stuff from backup as soon as it's 
deleted from source.
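
Illustratively, something like the following cyr_expire invocation would keep
delayed-delete and delayed-expunge data around for four weeks (the numbers are
made up; see cyr_expire(8)):

   cyr_expire -E 3 -D 28 -X 28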

I'm not sure what others are doing in this space really.  There's a few threads 
on the list archive about various backup strategies, but my focus has mainly 
been the backupd-based system.

>  Is there a list of which database and index 
> files I need to copy apart from the files inside the partition structure?

This kind of covers it, I think:
https://www.cyrusimap.org/imap/reference/admin/locations/configuration-state.html

It would be quite useful to have a "this is what you need to back up" document, 
but at the moment there's a certain amount of reading between the lines of 
adjacent documentation :(

> > This setting might be helpful:
> 
> Thanks, I saw that setting but didn't really think through how it would 
> help me.  I'll experiment with it and report back.

That would be great, thanks!

Cheers,

ellie

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Backup compaction optimization in a block-level replication environment

2019-11-07 Thread Deborah Pickett

On 2019-11-08 09:13, ellie timoney wrote:

> I'm not sure if I'm just not understanding, but if the chunk offsets were to 
> remain the same, then there's no benefit to compaction? A (say) 2gb file full 
> of zeroes between small chunks is still the same 2gb on disk as one that's 
> never been compacted at all!


That's true.  I suppose I'm imagining a threshold, where if the file 
hits, say, 20% wasted space, then I can "defrag" the file and recover 
the lost space, on the understanding that the next sync will have to 
copy the entire file again.


But you mentioned:


> And if you don't use the compaction feature, you might as well skip the backups 
> system entirely, and have your backup server just be a normal replica that 
> doesn't accept client traffic (maybe with a very long cyr_expire -D time?), and 
> then you shut it down on schedule for safe block/file system backups to your 
> offsite location.
... and that seems a more reasonable approach.  I didn't know if copying 
the filesystem of a (paused) Cyrus replica was a supported way of 
backing up, but now I do.  Is there a list of which database and index 
files I need to copy apart from the files inside the partition structure?

> This setting might be helpful:
>
>    backup_compact_work_threshold: 1
>        The number of chunks that must obviously need compaction before the
>        compact tool will go ahead with the compaction.  If set to less than
>        one, the value is treated as being one.

> If you set your backup_compact_min/max_sizes to a size that's 
> comfortable/practical for your block backup algorithm, but then set a very lax 
> backup_compact_work_threshold, you might be able to find a sweet spot where 
> you're getting the benefits of compaction eventually, but are not constantly 
> changing every block in the file (until you do).  The default (1) is basically 
> for compaction to occur as soon as there's something to compact out, just 
> because the default had to be something, and without experiential data any 
> other value would just be a hat rabbit.  But this sounds like a case where a 
> big number would play nicer.
>
> I guess I'd try to target a minimum size of 1 disk block per chunk, and a 
> maximum of (fair dice roll) 4 disk blocks? But you'd need some experimentation 
> to figure out ballpark numbers, and won't be able to tune it to exact block 
> sizes, because the configured thresholds are the uncompressed data size, not 
> the compressed chunk size on disk.


Thanks, I saw that setting but didn't really think through how it would 
help me.  I'll experiment with it and report back.


--
Deborah Pickett
System Administrator
Polyfoam Australia Pty Ltd

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Backup compaction optimization in a block-level replication environment

2019-11-07 Thread ellie timoney
I'm not sure if I'm just not understanding, but if the chunk offsets were to 
remain the same, then there's no benefit to compaction? A (say) 2gb file full 
of zeroes between small chunks is still the same 2gb on disk as one that's 
never been compacted at all!

And if you don't use the compaction feature, you might as well skip the backups 
system entirely, and have your backup server just be a normal replica that 
doesn't accept client traffic (maybe with a very long cyr_expire -D time?), and 
then you shut it down on schedule for safe block/file system backups to your 
offsite location.

> Right now I've set backup_compact_minsize and backup_compact_maxsize to 
> zero but I'm not sure if even that is sufficient to prevent chunk 
> offsets moving.  Perhaps I need to disable the compaction event in 
> cyrus.conf entirely.

I don't have this system entirely in my head at the moment so I'm kinda just 
reading documentation here, but these settings are about optimising the gzip 
algorithm.  Each chunk is compressed separately, and the tradeoff here is that 
bigger chunks compress better, but if the file becomes corrupted somehow you 
lose entire chunks, so smaller chunks are safer.

> A compromise would need to be 
> struck between keeping chunk offsets fixed and wasted fragmented space 
> between chunks as they shrink.

This setting might be helpful:

>   backup_compact_work_threshold: 1
>       The number of chunks that must obviously need compaction before the
>       compact tool will go ahead with the compaction.  If set to less than
>       one, the value is treated as being one.

If you set your backup_compact_min/max_sizes to a size that's 
comfortable/practical for your block backup algorithm, but then set a very lax 
backup_compact_work_threshold, you might be able to find a sweet spot where 
you're getting the benefits of compaction eventually, but are not constantly 
changing every block in the file (until you do).  The default (1) is basically 
for compaction to occur as soon as there's something to compact out, just 
because the default had to be something, and without experiential data any 
other value would just be a hat rabbit.  But this sounds like a case where a 
big number would play nicer.

I guess I'd try to target a minimum size of 1 disk block per chunk, and a 
maximum of (fair dice roll) 4 disk blocks? But you'd need some experimentation 
to figure out ballpark numbers, and won't be able to tune it to exact block 
sizes, because the configured thresholds are the uncompressed data size, not 
the compressed chunk size on disk.
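To make that concrete, the knobs under discussion might be set along these
lines in imapd.conf (a sketch; the values are purely illustrative, and check
imapd.conf(5) for the exact units):

   # target bounds for the uncompressed size of each chunk
   backup_compact_minsize: 4096
   backup_compact_maxsize: 16384
   # don't compact until this many chunks obviously need it
   backup_compact_work_threshold: 32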

On Wed, Nov 6, 2019, at 8:20 PM, Deborah Pickett wrote:
> (Sorry, that's a lot of big words.  I'll try explaining what I want to do.)
> 
> On my LAN I have a Cyrus IMAP server (3.0.11), and a dedicated Cyrus 
> backup server (patched with Ellie's shared-mailbox and 64-bit fseek 
> fixes).  These are connected by a nice fat link so backups happen fast 
> and often.  A scheduled compaction occurs each morning thanks to an 
> event in cyrus.conf.
> 
> I now want to back up the backups to an off-site server over a much 
> slower link.  The off-site server doesn't speak the Cyrus sync 
> protocol.  What it does do well is block-level backups: if only a part 
> of a file has changed, only that part needs to be transferred over the 
> slow link.  [I haven't decided whether my technology will be the rsync 
> --checksum protocol, or Synology NAS XFS replication, or Microsoft 
> Server VFS snapshots.  They all do block-level backups well.]
> 
> Since Cyrus backup files are append-only, they should behave well with 
> block-level backups. But—correct me if I'm wrong—compaction is going to 
> ruin my day because a reduction in the size of chunk (say) 5 moves the 
> start offset of chunk 6 (and so on).  Even if chunk 6 doesn't change 
> it'll have to be retransmitted in its entirety.
> 
> Right now I've set backup_compact_minsize and backup_compact_maxsize to 
> zero but I'm not sure if even that is sufficient to prevent chunk 
> offsets moving.  Perhaps I need to disable the compaction event in 
> cyrus.conf entirely.
> 
> I really want compaction, though, or else my backups are going to get 
> very, very big.
> 
> Which leads me to my idea.  What if compaction could be friendlier 
> towards block-level backups, by deliberately avoiding changing chunk 
> offsets in the backup file, even if that means gaps of unused bytes when 
> (the aforementioned) chunk 5 shrinks?  It won't always work out, for 
> instance when a chunk grows in size. A compromise would need to be 
> struck between keeping chunk offsets fixed and wasted fragmented space 
> between chunks as they shrink.
> 
> I haven't collected enough data to know if I am making the right 
> assumptions about how chunk size evolves over time and how effective 
> compaction is at removing cruft from a backup file.  Has anyone thought 
> about doing something like this with Cyrus backups?

Backup compaction optimization in a block-level replication environment

2019-11-06 Thread Deborah Pickett

(Sorry, that's a lot of big words.  I'll try explaining what I want to do.)

On my LAN I have a Cyrus IMAP server (3.0.11), and a dedicated Cyrus 
backup server (patched with Ellie's shared-mailbox and 64-bit fseek 
fixes).  These are connected by a nice fat link so backups happen fast 
and often.  A scheduled compaction occurs each morning thanks to an 
event in cyrus.conf.


I now want to back up the backups to an off-site server over a much 
slower link.  The off-site server doesn't speak the Cyrus sync 
protocol.  What it does do well is block-level backups: if only a part 
of a file has changed, only that part needs to be transferred over the 
slow link.  [I haven't decided whether my technology will be the rsync 
--checksum protocol, or Synology NAS XFS replication, or Microsoft 
Server VFS snapshots.  They all do block-level backups well.]
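A minimal sketch of the rsync flavour of that (host and paths are
placeholders):

   rsync -av --checksum --inplace --partial \
       /var/spool/backups/ offsite:/srv/cyrus-backup-mirror/

--inplace updates changed regions of existing destination files rather than
rewriting whole files, which is what lets an append-only file transfer only
its new tail.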


Since Cyrus backup files are append-only, they should behave well with 
block-level backups. But—correct me if I'm wrong—compaction is going to 
ruin my day because a reduction in the size of chunk (say) 5 moves the 
start offset of chunk 6 (and so on).  Even if chunk 6 doesn't change 
it'll have to be retransmitted in its entirety.


Right now I've set backup_compact_minsize and backup_compact_maxsize to 
zero but I'm not sure if even that is sufficient to prevent chunk 
offsets moving.  Perhaps I need to disable the compaction event in 
cyrus.conf entirely.
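For reference, the compaction event in question would be something like the
following in cyrus.conf (a sketch; the schedule and the exact ctl_backups
invocation here are assumptions, not taken from my real config):

   EVENTS {
       # daily compaction of all backups; commenting this out disables it
       compact cmd="ctl_backups compact -A" at=0500
   }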


I really want compaction, though, or else my backups are going to get 
very, very big.


Which leads me to my idea.  What if compaction could be friendlier 
towards block-level backups, by deliberately avoiding changing chunk 
offsets in the backup file, even if that means gaps of unused bytes when 
(the aforementioned) chunk 5 shrinks?  It won't always work out, for 
instance when a chunk grows in size. A compromise would need to be 
struck between keeping chunk offsets fixed and wasted fragmented space 
between chunks as they shrink.


I haven't collected enough data to know if I am making the right 
assumptions about how chunk size evolves over time and how effective 
compaction is at removing cruft from a backup file.  Has anyone thought 
about doing something like this with Cyrus backups?


--
Deborah Pickett
System Administrator
Polyfoam Australia Pty Ltd


Re: Possible issue when upgrading to cyrus 3.0.8 using replication ?

2019-09-17 Thread Adrien Remillieux
Thanks! I'll look into it.

On Mon, 16 Sep 2019 at 18:01,  wrote:

> Date: Sun, 15 Sep 2019 13:04:31 -0600
> From: Scott Lambert 
> To: info-cyrus@lists.andrew.cmu.edu
> Subject: Re: Possible issue when upgrading to cyrus 3.0.8 using
> replication ?
> Message-ID: 
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> If you can create the mailboxes on the new server, without replication,
> perhaps it would be safer/less downtime to use IMAPsync to move the data
> to the new server. It will be slow, but I don't mind slow while the
> source server is still online and users are happy.
>
> On 9/14/19 5:12 PM, Adrien Remillieux wrote:
> > Thank you for your answer !
> >
> > Considering what you said I'll try to enable replication on the new
> > server. If it doesn't work I'll just schedule some downtime, copy the
> > /var/spool/cyrus folder to the new server, install cyrus 3.0.11 from
> > the backports and then upgrade the mailboxes in place.
> >
> > We've been using cyrus since 2004 so there's definitely a lot of old
> > mailboxes around and I don't know which versions of cyrus were used.
> >
> > Cheers,
> > Adrien
> >
> > On Sun, 15 Sep 2019 at 00:35, Adrien Remillieux
> > <adrien.remilli...@gmail.com> wrote:
> >
> > Date: Fri, 13 Sep 2019 10:20:17 +1000
> > From: "ellie timoney" <el...@fastmail.com>
> > To: info-cyrus@lists.andrew.cmu.edu
> > Subject: Re: Possible issue when upgrading to cyrus 3.0.8 using
> > replication ?
> > Message-ID: <343a16a2-f5a2-4130-aae0-6a4994ab9...@www.fastmail.com>
> > Content-Type: text/plain; charset="us-ascii"
> >
> > Hi Adrien,
> >
> > The replication upgrade path should be okay. In-place upgrades
> > (that would use the affected reconstruct to bring mailboxes up to
> > the same version as the server) would get bitten. Whereas if you
> > replicate to a newer version server, the mailboxes on the replica
> > will be created at the replica's preferred version already, so you
> > don't need to reconstruct afterwards.
> >
> > If you have messages that would theoretically be affected by this
> > bug in 3.0, you won't be able to replicate them to 3.0 in the
> > first place, because I think replication won't allow the 0 modseq.
> > If this arises, I'm not sure how to recover from it and replicate
> > the affected messages, since 2.4 and 2.5 won't alter the 0 modseq.
> > If it can't replicate them, it will complain about it, so if you
> > plan for the replication needing some handholding/restarting,
> > you'll at least be able to identify which messages are broken in
> > the process, and then figure out how to handle it once you know
> > the size of the problem?
> >
> > Another option, if you want to stick with the Debian packages,
> > would be to skip 3.0.8 and install 3.0.11 from buster-backports
> > (https://packages.debian.org/buster-backports/cyrus-imapd), and
> > then you'll be immune to the problem. Though you still won't be
> > able to replicate the affected messages to the new server, hmm.
> >
> > Cheers,
> >
> > ellie
> >
> > On Thu, Sep 12, 2019, at 6:50 AM, Adrien Remillieux wrote:
> > > Hello,
> > >
> > > I have a server that I can't update running cyrus 2.5.10 which
> > contain mailboxes that have existed from 2.3 and earlier (around
> > 300Gb total). My plan is to update by enabling replication with a
> > new server running Debian Buster (so cyrus 3.0.8) and then
> > shutting down the old server. There was a problem when upgrading
> > to 3.x.x with mailboxes created with cyrus 2.3 or before and that
> > was fixed in 3.0.11 (see
> >
> https://www.cyrusimap.org/imap/download/release-notes/3.0/x/3.0.11.html
> > and https://github.com/cyrusimap/cyrus-imapd/issues/2839 for the
> > bug report)
> > >
> > > Does this upgrade path suffer from the same issue ? I am not
> > familiar with the inner-workings of cyrus. It appears that the
> > Debian maintainers have not backported the patch in 3.0.8 (see
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933163 and I
> > looked at the source code)
> > >
> > > Cheers,
> > > Adrien
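For what it's worth, the IMAPsync route suggested above might look like this
(a sketch; hosts and credentials are placeholders):

   imapsync --host1 oldmail.example.com --user1 jdoe --password1 'xxx' \
            --host2 newmail.example.com --user2 jdoe --password2 'xxx'

Since it copies over plain IMAP, it does not depend on the Cyrus sync
protocol or on the versions involved at all.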

Re: Possible issue when upgrading to cyrus 3.0.8 using replication ?

2019-09-12 Thread ellie timoney
Hi Adrien,

The replication upgrade path should be okay. In-place upgrades (that would use 
the affected reconstruct to bring mailboxes up to the same version as the 
server) would get bitten. Whereas if you replicate to a newer version server, 
the mailboxes on the replica will be created at the replica's preferred version 
already, so you don't need to reconstruct afterwards.

If you have messages that would theoretically be affected by this bug in 3.0, 
you won't be able to replicate them to 3.0 in the first place, because I think 
replication won't allow the 0 modseq. If this arises, I'm not sure how to 
recover from it and replicate the affected messages, since 2.4 and 2.5 won't 
alter the 0 modseq. If it can't replicate them, it will complain about it, so 
if you plan for the replication needing some handholding/restarting, you'll at 
least be able to identify which messages are broken in the process, and then 
figure out how to handle it once you know the size of the problem?

Another option, if you want to stick with the Debian packages, would be to skip 
3.0.8 and install 3.0.11 from buster-backports 
(https://packages.debian.org/buster-backports/cyrus-imapd), and then you'll be 
immune to the problem. Though you still won't be able to replicate the affected 
messages to the new server, hmm.

Cheers,

ellie

On Thu, Sep 12, 2019, at 6:50 AM, Adrien Remillieux wrote:
> Hello,
> 
> I have a server that I can't update running cyrus 2.5.10 which contain 
> mailboxes that have existed from 2.3 and earlier (around 300Gb total). My 
> plan is to update by enabling replication with a new server running Debian 
> Buster (so cyrus 3.0.8) and then shutting down the old server. There was a 
> problem when upgrading to 3.x.x with mailboxes created with cyrus 2.3 or 
> before and that was fixed in 3.0.11 (see 
> https://www.cyrusimap.org/imap/download/release-notes/3.0/x/3.0.11.html and 
> https://github.com/cyrusimap/cyrus-imapd/issues/2839 for the bug report)
> 
> Does this upgrade path suffer from the same issue ? I am not familiar with 
> the inner-workings of cyrus. It appears that the Debian maintainers have not 
> backported the patch in 3.0.8 (see 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933163 and I looked at the 
> source code)
> 
> Cheers,
> Adrien
> 


Possible issue when upgrading to cyrus 3.0.8 using replication ?

2019-09-11 Thread Adrien Remillieux
Hello,

I have a server that I can't update, running cyrus 2.5.10, which contains
mailboxes that have existed since 2.3 and earlier (around 300 GB total). My
plan is to update by enabling replication with a new server running Debian
Buster (so cyrus 3.0.8) and then shutting down the old server. There was a
problem when upgrading to 3.x.x with mailboxes created with cyrus 2.3 or
before and that was fixed in 3.0.11 (see
https://www.cyrusimap.org/imap/download/release-notes/3.0/x/3.0.11.html and
https://github.com/cyrusimap/cyrus-imapd/issues/2839 for the bug report)

Does this upgrade path suffer from the same issue ? I am not familiar with
the inner-workings of cyrus. It appears that the Debian maintainers have
not backported the patch in 3.0.8 (see
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933163 and I looked at
the source code)

Cheers,
Adrien


Re: Issues with replication and folder/Sieve subscription

2019-07-18 Thread Egoitz Aurrekoetxea
That thread is clarified now. It was an issue with a script from Sarenet…. It
was hard to find (as described in the new thread), but Cyrus was just fine.



Egoitz Aurrekoetxea
Dpto. de sistemas
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia)
ego...@sarenet.es
www.sarenet.es

> On 10 Jul 2019, at 10:03, Egoitz Aurrekoetxea wrote:
> 
> The subject of this email is not properly set… it should be: Issues in
> replication with folder subscription and Sieve
> 
> As I think I discovered something, I am reopening a new thread with the
> title properly set
> 
>  
> 
> 
> 
> Egoitz Aurrekoetxea
> Dpto. de sistemas
> 944 209 470
> Parque Tecnológico. Edificio 103
> 48170 Zamudio (Bizkaia)
> ego...@sarenet.es
> www.sarenet.es
> 
>> On 10 Jul 2019, at 9:22, Albert Shih <albert.s...@obspm.fr> wrote:
>> 
>> On 09/07/2019 at 22:49:01+0200, Egoitz Aurrekoetxea wrote:
>>> By the way, for your case I would recommend doing a script that does a get
>>> from dovecot and a put to Cyrus instead of copying Sieve files directly…
>>> it's a much cleaner way…
>> 
>> Yes, that is what I did; before I tried the sync I even ran
>> /usr/local/cyrus/sievec on each file to be absolutely sure each sieve file
>> compiled correctly
>> 
>> Regards.
>> 
>> 
>> --
>> Albert SHIH
>> DIO bâtiment 15
>> Observatoire de Paris
>> xmpp: j...@obspm.fr <mailto:j...@obspm.fr>
>> Heure local/Local time:
>> Wed 10 Jul 2019 09:20:09 AM CEST
> 
> 


Re: Issues with replication and folder/Sieve subscription

2019-07-10 Thread Egoitz Aurrekoetxea
It would be better to just let the daemons talk, IMHO…. That should work… but
sometimes other things may be involved… so the cleanest way is to act as a
normal client (with an expect script, for instance) and put all the source
scripts to Cyrus using sieveshell… not copying anything directly "by hand"
onto the Cyrus partitions…

As I said, IMHO…

Cheers!
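A sketch of that sieveshell route (server, account and script names are
placeholders; sieveshell ships with Cyrus and speaks ManageSieve to
timsieved):

   sieveshell --user=jdoe --authname=cyrus imap.example.com
   > put jdoe.siv
   > activate jdoe.siv
   > quit

Uploading this way means the script is compiled and validated by the server,
instead of being dropped raw into the partition.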



Egoitz Aurrekoetxea
Dpto. de sistemas
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia)
ego...@sarenet.es 
www.sarenet.es 

> On 10 Jul 2019, at 9:22, Albert Shih wrote:
> 
> On 09/07/2019 at 22:49:01+0200, Egoitz Aurrekoetxea wrote:
>> By the way, for your case I would recommend doing a script that does a get
>> from dovecot and a put to Cyrus instead of copying Sieve files directly…
>> it's a much cleaner way…
> 
> Yes, that is what I did; before I tried the sync I even ran
> /usr/local/cyrus/sievec on each file to be absolutely sure each sieve file
> compiled correctly
> 
> Regards.
> 
> 
> --
> Albert SHIH
> DIO bâtiment 15
> Observatoire de Paris
> xmpp: j...@obspm.fr
> Heure local/Local time:
> Wed 10 Jul 2019 09:20:09 AM CEST



Re: Issues with replication and folder/Sieve subscription

2019-07-10 Thread Egoitz Aurrekoetxea
The subject of this email is not properly set… it should be: Issues in
replication with folder subscription and Sieve

As I think I discovered something, I am reopening a new thread with the title
properly set

 



Egoitz Aurrekoetxea
Dpto. de sistemas
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia)
ego...@sarenet.es
www.sarenet.es

> On 10 Jul 2019, at 9:22, Albert Shih wrote:
> 
> On 09/07/2019 at 22:49:01+0200, Egoitz Aurrekoetxea wrote:
>> By the way, for your case I would recommend doing a script that does a get
>> from dovecot and a put to Cyrus instead of copying Sieve files directly…
>> it's a much cleaner way…
> 
> Yes, that is what I did; before I tried the sync I even ran
> /usr/local/cyrus/sievec on each file to be absolutely sure each sieve file
> compiled correctly
> 
> Regards.
> 
> 
> --
> Albert SHIH
> DIO bâtiment 15
> Observatoire de Paris
> xmpp: j...@obspm.fr
> Heure local/Local time:
> Wed 10 Jul 2019 09:20:09 AM CEST



Re: Issues with replication and folder/Sieve subscription

2019-07-10 Thread Albert Shih
On 09/07/2019 at 22:49:01+0200, Egoitz Aurrekoetxea wrote:
> By the way, for your case I would recommend doing a script that does a get
> from dovecot and a put to Cyrus instead of copying Sieve files directly…
> it's a much cleaner way…

Yes, that is what I did; before I tried the sync I even ran
/usr/local/cyrus/sievec on each file to be absolutely sure each sieve file
compiled correctly

Regards.


--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
xmpp: j...@obspm.fr
Heure local/Local time:
Wed 10 Jul 2019 09:20:09 AM CEST


Re: Issues with replication and folder/Sieve subscription

2019-07-10 Thread Albert Shih
On 09/07/2019 at 22:44:19+0200, Egoitz Aurrekoetxea wrote:
Hi,

>
> If you used -u for each of your users instead of -A, did it work? Or did it

If I remember correctly (but I'm not sure), this is how I found out the
problem:

  First try -A, notice it crashes;

then try -u first_user, notice it works;

  try -A again, notice it crashes again (on the second user);

try -u second_user, notice it works;

  try -A, notice it crashes, but now on the nth user (n >> 1);

try -u on the (n+1)th user, notice it works;

  try to find the difference between the nth user (who crashes) and the
  (n-1)th user (who works); find out that the only difference is the
  presence of a sieve script;

  then try the infinite loop with -A.

So in fact I never really tried -u with a non-working user.


> crash on the same user as with -A? Which Cyrus version were you running?

Not sure, but something like 3.0.4 (or 3.0.5).

--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
France
xmpp: j...@obspm.fr
Heure local/Local time:
Wed 10 Jul 2019 09:14:51 AM CEST



Re: Issues with replication and folder/Sieve subscription

2019-07-09 Thread Egoitz Aurrekoetxea
By the way, for your case I would recommend doing a script that does a get
from dovecot and a put to Cyrus instead of copying Sieve files directly… it's
a much cleaner way…

Cheers!



Egoitz Aurrekoetxea
Dpto. de sistemas
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia)
ego...@sarenet.es 
www.sarenet.es 

> On 9 Jul 2019, at 22:44, Egoitz Aurrekoetxea wrote:
> 
> Hi Albert,
> 
> If you used -u for each of your users instead of -A, did it work? Or did it
> crash on the same user as with -A? Which Cyrus version were you running?
> 
> Cheers,
> 
> 
> 
> Egoitz Aurrekoetxea
> Dpto. de sistemas
> 944 209 470
> Parque Tecnológico. Edificio 103
> 48170 Zamudio (Bizkaia)
> ego...@sarenet.es 
> www.sarenet.es 
> 
>> On 9 Jul 2019, at 15:03, Albert Shih wrote:
>> 
>> On 09/07/2019 at 14:10:49+0200, Egoitz Aurrekoetxea wrote:
>>> Good morning,
>>> 
>>> 
>>> After we upgraded to Cyrus 3.0.8, we saw that some users in the replicas 
>>> didn't
>>> have some folders (or all) subscribed the same way they had in previous env 
>>> in
>>> Cyrus 2.3. Same happened for some users with Sieve scripts. It seemed the
>>> content itself was perfectly copied. It was like, if the copy between 
>>> versions
>>> would not had fully succeed. We fixed it easily by creating some scripts for
>>> checking folder subscriptions and Sieve scripts existence. We though it 
>>> could
>>> perhaps had something to do with some issue replicating folders 
>>> subscriptions
>>> and sieve scripts from 2.3 to 3.0.8. After that, as we fixed it easily and 
>>> was
>>> nothing related to content, we just didn’t go in deep in this topic.
>> 
>> After my transfering all mail from my old server (dovecot) to the new one 
>> (under
>> cyrus), I try to initialize the sync, so I launch the synchronization, and
>> find out everytime the user got a sieve, the sync processus crash, 
>> butevent the
>> it crash it still create « something », so the next time I launch the sync
>> it pass (and email a actually synchronize). So for the first synchro I
>> juste launch a infinite loop in bash to synchronize all user.
>> 
>> I known it's not a very satisfying method but it work.
>> 
>> With new user I don't have any problem.
>> 
>> I already send a email here but don't get any solution
>> 
>>https://lists.andrew.cmu.edu/pipermail/info-cyrus/2018-May/040186.html 
>> 
>> 
>> Don't know if that help
>> 
>> Regards
>> --
>> Albert SHIH
>> DIO bâtiment 15
>> Observatoire de Paris
>> 5 Place Jules Janssen
>> 92195 Meudon Cedex
>> France
>> xmpp: j...@obspm.fr 
>> Heure local/Local time:
>> Tue 09 Jul 2019 02:57:46 PM CEST
>> 


Re: Issues with replication and folder/Sieve subscription

2019-07-09 Thread Egoitz Aurrekoetxea
Hi Albert,

If you used -u for each of your users instead of -A, did it work? Or did it
crash on the same user as with -A? Which Cyrus version were you running?

Cheers,



Egoitz Aurrekoetxea
Dpto. de sistemas
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia)
ego...@sarenet.es 
www.sarenet.es 

> On 9 Jul 2019, at 15:03, Albert Shih wrote:
> 
> On 09/07/2019 at 14:10:49+0200, Egoitz Aurrekoetxea wrote:
>> Good morning,
>> 
>> 
>> After we upgraded to Cyrus 3.0.8, we saw that some users in the replicas 
>> didn't
>> have some folders (or all) subscribed the same way they had in previous env 
>> in
>> Cyrus 2.3. Same happened for some users with Sieve scripts. It seemed the
>> content itself was perfectly copied. It was like, if the copy between 
>> versions
>> would not had fully succeed. We fixed it easily by creating some scripts for
>> checking folder subscriptions and Sieve scripts existence. We though it could
>> perhaps had something to do with some issue replicating folders subscriptions
>> and sieve scripts from 2.3 to 3.0.8. After that, as we fixed it easily and 
>> was
>> nothing related to content, we just didn’t go in deep in this topic.
> 
> After my transfering all mail from my old server (dovecot) to the new one 
> (under
> cyrus), I try to initialize the sync, so I launch the synchronization, and
> find out everytime the user got a sieve, the sync processus crash, 
> butevent the
> it crash it still create « something », so the next time I launch the sync
> it pass (and email a actually synchronize). So for the first synchro I
> juste launch a infinite loop in bash to synchronize all user.
> 
> I known it's not a very satisfying method but it work.
> 
> With new user I don't have any problem.
> 
> I already send a email here but don't get any solution
> 
>https://lists.andrew.cmu.edu/pipermail/info-cyrus/2018-May/040186.html
> 
> Don't know if that help
> 
> Regards
> --
> Albert SHIH
> DIO bâtiment 15
> Observatoire de Paris
> 5 Place Jules Janssen
> 92195 Meudon Cedex
> France
> xmpp: j...@obspm.fr
> Heure local/Local time:
> Tue 09 Jul 2019 02:57:46 PM CEST
> 



Re: Issues with replication and folder/Sieve subscription

2019-07-09 Thread Albert Shih
On 09/07/2019 at 14:10:49+0200, Egoitz Aurrekoetxea wrote:
> Good morning,
>
>
> After we upgraded to Cyrus 3.0.8, we saw that some users in the replicas 
> didn't
> have some folders (or all) subscribed the same way they had in previous env in
> Cyrus 2.3. Same happened for some users with Sieve scripts. It seemed the
> content itself was perfectly copied. It was like, if the copy between versions
> would not had fully succeed. We fixed it easily by creating some scripts for
> checking folder subscriptions and Sieve scripts existence. We though it could
> perhaps had something to do with some issue replicating folders subscriptions
> and sieve scripts from 2.3 to 3.0.8. After that, as we fixed it easily and was
> nothing related to content, we just didn’t go in deep in this topic.

After transferring all the mail from my old server (dovecot) to the new one
(under cyrus), I tried to initialize the sync. I launched the synchronization
and found that every time a user has a sieve script, the sync process
crashes; but even though it crashes it still creates « something », so the
next time I launch the sync it passes (and mail actually synchronizes). So
for the first synchronization I just launched an infinite loop in bash to
synchronize all users.

I know it's not a very satisfying method, but it works.

With new users I don't have any problem.

I already sent an email here but didn't get any solution

https://lists.andrew.cmu.edu/pipermail/info-cyrus/2018-May/040186.html

Don't know if that helps

Regards
--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
France
xmpp: j...@obspm.fr
Heure local/Local time:
Tue 09 Jul 2019 02:57:46 PM CEST


Re: Issues with replication and folder/Sieve subscription

2019-07-09 Thread Egoitz Aurrekoetxea
Could some of this perhaps have something to do with having intermediate
folders subscribed/unsubscribed in the middle of the tree? And could that
cause something that does not happen when the mailbox is being accessed by
the user, but does happen when it is being replicated (when the change is
applied by synchronization)?



Egoitz Aurrekoetxea
Dpto. de sistemas
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia)
ego...@sarenet.es 
www.sarenet.es 

> On 9 Jul 2019, at 14:10, Egoitz Aurrekoetxea wrote:
> 
> Good morning,
> 
> 
> After we upgraded to Cyrus 3.0.8, we saw that some users in the replicas 
> didn't have some folders (or all) subscribed the same way they had in 
> previous env in Cyrus 2.3. Same happened for some users with Sieve scripts. 
> It seemed the content itself was perfectly copied. It was like, if the copy 
> between versions would not had fully succeed. We fixed it easily by creating 
> some scripts for checking folder subscriptions and Sieve scripts existence. 
> We though it could perhaps had something to do with some issue replicating 
> folders subscriptions and sieve scripts from 2.3 to 3.0.8. After that, as we 
> fixed it easily and was nothing related to content, we just didn’t go in deep 
> in this topic.
> 
> When we started running couples of servers in Cyrus 3.0.8 (upgraded and 
> apparently with all the change process finished) we have seen in some cases 
> that folder subscription was not properly replicated again. Both members of 
> the couple of servers where running same Cyrus version, the 3.0.8 and this 
> issue, was not seen in all users… just in some of them… Due to this reason I 
> have started checking the git repo logs… for trying to see some perhaps 
> related change or similar… some commit that could very clearly affect to this 
> issue (that could have caused it). I have had no success.
> 
> So after that, and after not being able to reproduce it, have taken a look at 
> the code. By the nature of the things seen, I supposed that perhaps a META 
> operation failed could be involved. I say it because in sync_do_meta() I can 
> read : 
> 
> r = sync_response_parse(sync_be->in, "META", NULL,
> replica_subs, replica_sieve, replica_seen, NULL);
> if (!r) r = sync_do_user_seen(userid, replica_seen, sync_be, flags);
> if (!r) r = sync_do_user_sub(userid, replica_subs, sync_be, flags);
> if (!r) r = sync_do_user_sieve(userid, replica_sieve, sync_be, flags);
> 
> Have been looking for some hours around it but have not been able to see 
> nothing strange… nothing that could have caused this...
> 
> Have you ever heard about this issue?.
> 
> 
> Best regards,
> 
> 
> 
> Egoitz Aurrekoetxea
> Dpto. de sistemas
> 944 209 470
> Parque Tecnológico. Edificio 103
> 48170 Zamudio (Bizkaia)
> ego...@sarenet.es 
> www.sarenet.es 
> 
> 


Issues with replication and folder/Sieve subscription

2019-07-09 Thread Egoitz Aurrekoetxea
Good morning,


After we upgraded to Cyrus 3.0.8, we saw that some users in the replicas
didn't have some folders (or all) subscribed the same way they had in the
previous environment on Cyrus 2.3. The same happened for some users with
Sieve scripts. The content itself seemed to be perfectly copied; it was as if
the copy between versions had not fully succeeded. We fixed it easily by
creating some scripts to check folder subscriptions and Sieve script
existence. We thought it could perhaps have something to do with some issue
replicating folder subscriptions and sieve scripts from 2.3 to 3.0.8. After
that, as we had fixed it easily and it was nothing related to content, we
just didn't go deeper into this topic.

When we started running pairs of servers on Cyrus 3.0.8 (upgraded and
apparently with the whole change process finished), we saw in some cases that
folder subscriptions were again not properly replicated. Both members of the
pair were running the same Cyrus version, 3.0.8, and this issue was not seen
for all users… just some of them… For this reason I started checking the git
repo logs… trying to find some possibly related change… some commit that
could clearly affect this issue (that could have caused it). I have had no
success.

So after that, and after not being able to reproduce it, I took a look at
the code. From the nature of what we saw, I supposed that perhaps a failed
META operation could be involved. I say this because in sync_do_meta() I can
read:

/* Parse the replica's answer to the META command into the replica's
 * current lists of subscriptions, sieve scripts and seen state. */
r = sync_response_parse(sync_be->in, "META", NULL,
                        replica_subs, replica_sieve, replica_seen, NULL);
/* Each step only runs if everything before it succeeded (r == 0):
 * seen state first, then subscriptions, then sieve scripts. */
if (!r) r = sync_do_user_seen(userid, replica_seen, sync_be, flags);
if (!r) r = sync_do_user_sub(userid, replica_subs, sync_be, flags);
if (!r) r = sync_do_user_sieve(userid, replica_sieve, sync_be, flags);

I have been looking around it for some hours but have not been able to see
anything strange… nothing that could have caused this...

Have you ever heard about this issue?


Best regards,



Egoitz Aurrekoetxea
Dpto. de sistemas
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia)
ego...@sarenet.es 
www.sarenet.es 



Question about what is replicated with Cyrus replication and what isn't

2019-02-17 Thread egoitz
Hi! 

Previously (in 2.3 and older versions), cyr_expire and ipurge actions, for
instance, were not replicated to the slave, so you needed to launch them on
both the master and the slave. My question is: are they now replicated as
mailbox replication commands? What about commands like squatter -F, which in
Cyrus 3 removes non-existing mails from the database? Or when you move data
between search tiers in Xapian? Are those actions replicated, or should they
be launched on both the master and the replica?


Cheers!

Re: Big problem with replication

2019-01-16 Thread Albert Shih
On 16/01/2019 at 17:10:30+0100, Egoitz Aurrekoetxea wrote:
> Good afternoon,
> 
> 
> I would try doing it user by user (with -u). This way you would have
> everything synced except the problematic mailbox.

Hi, thanks for the help.

I made some progress on my problem:

>   [root@imap-mirror-p /bals/DELETED]# /usr/local/cyrus/sbin/sync_client -S
> slave_3 -A -v
>   MAILBOXES DELETED.DIO.5AEAD6F9
>   Error from do_user(DELETED.DIO.5AEAD6F9): bailing out!
>   [root@imap-mirror-p /bals/DELETED]#
> 
> and the DIO folder doesn't even exist
> 
>   [root@imap-mirror-p /bals/DELETED]# ls DIO
>   ls: DIO: No such file or directory
>   [root@imap-mirror-p /bals/DELETED]#
> 

For some strange reason I was unable to destroy the mailbox either (with
cyradm), so I copied some junk mailbox onto the filesystem, ran reconstruct,
and was finally able to destroy those mailboxes.
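In case it helps someone searching later, a sketch of that dance (the flags
are from memory, so double-check reconstruct(8)):

   # after creating a junk mailbox directory on the filesystem:
   reconstruct -r -f DELETED.DIO.5AEAD6F9
   # ... and only then did deleting the mailbox with cyradm work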

But that doesn't really solve my problem, because now when I run
sync_client it crashes at the beginning on a shared mailbox. It stops with


[root@imap-mirror-p /usr/home/jas-adm]# /usr/local/cyrus/sbin/sync_client -S 
imap-mirror-m-tmp -A -v
MAILBOXES shared.*
MAILBOX shared.*
Error from do_user(shared.*): bailing out!
[root@imap-mirror-p /usr/home/jas-adm]# 

I've no idea if that's normal or not. I don't think so, because the first
level (master -- replica --> slave_1) works well even with those
shared mailboxes.

Any help would be very welcome.

Regards.
--
Albert SHIH
Heure local/Local time:
Wed Jan 16 18:11:53 CET 2019



Re: Big problem with replication

2019-01-16 Thread Egoitz Aurrekoetxea
Good afternoon, 

I would try doing it user by user (with -u). This way you would have all
synced except the problematic mailbox. 

Cheers!

---

EGOITZ AURREKOETXEA 
Departamento de sistemas 
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia) 
ego...@sarenet.es 
www.sarenet.es

On 16-01-2019 16:15, Albert Shih wrote:

> Hi everyone.
> 
> I've got some big issue with replication.
> 
> I've 
> 
> master --- replica ---> slave_1 --- replica ---> slave_2
> 
> The replication between master and slave_1 work nice. 
> 
> Between slave_1 and slave_2 I've got some issue (log to big after network
> failure and work nagios_supervision). 
> 
> So now I'm trying to build a new slave_3 to replace slave_2. And I'm unable
> to launch sync_client. Each time I try to manually launch I got 
> 
> [root@imap-mirror-p /bals/DELETED]# /usr/local/cyrus/sbin/sync_client -S 
> slave_3 -A -v
> MAILBOXES DELETED.DIO.5AEAD6F9
> Error from do_user(DELETED.DIO.5AEAD6F9): bailing out!
> [root@imap-mirror-p /bals/DELETED]# 
> 
> and the DIO folder don't event exist
> 
> [root@imap-mirror-p /bals/DELETED]# ls DIO
> ls: DIO: No such file or directory
> [root@imap-mirror-p /bals/DELETED]# 
> 
> Any help would be *very* welcome ;-) 
> 
> Regards.
> 
> --
> Albert SHIH
> DIO bâtiment 15
> Observatoire de Paris
> Heure local/Local time:
> Wed Jan 16 16:08:47 CET 2019
> 
 


Big problem with replication

2019-01-16 Thread Albert Shih
Hi everyone.

I've got a big issue with replication.

I've

    master --- replica ---> slave_1 --- replica ---> slave_2

The replication between master and slave_1 works nicely.

Between slave_1 and slave_2 I've got some issues (the sync log grew too big
after a network failure, plus Nagios supervision activity).

So now I'm trying to build a new slave_3 to replace slave_2. And I'm unable
to launch sync_client. Each time I try to launch it manually I get

  [root@imap-mirror-p /bals/DELETED]# /usr/local/cyrus/sbin/sync_client -S 
slave_3 -A -v
  MAILBOXES DELETED.DIO.5AEAD6F9
  Error from do_user(DELETED.DIO.5AEAD6F9): bailing out!
  [root@imap-mirror-p /bals/DELETED]# 

and the DIO folder doesn't even exist

  [root@imap-mirror-p /bals/DELETED]# ls DIO
  ls: DIO: No such file or directory
  [root@imap-mirror-p /bals/DELETED]# 


Any help would be *very* welcome ;-) 

Regards.

--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
Heure local/Local time:
Wed Jan 16 16:08:47 CET 2019



Re: Question about manual replication (-u )

2019-01-08 Thread Egoitz Aurrekoetxea
Thanks a lot Bron!!! :) :)

---

EGOITZ AURREKOETXEA 
Departamento de sistemas 
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia) 
ego...@sarenet.es 
www.sarenet.es

On 08-01-2019 12:02, Bron Gondwana wrote:

> Yep, that's totally safe. Even doing the same user twice at the same time 
> should be safe, though it may do extra work. 
> 
> Bron. 
> 
> On Tue, Jan 8, 2019, at 05:05, Egoitz Aurrekoetxea wrote: 
> 
> Good afternoon, 
> 
> I know it seems a pretty stupid question, but some time ago you could not
> have a Cyrus server acting, for instance, as both a master and a slave... it
> was not supported... it worked... but was not supported... so having multiple
> sync_client instances... could perhaps damage something (although I suppose
> not). I'll probably have the same doubt for multiple parallel
> ctl_conversations -z and -b commands...
> 
> I know it seems ridiculous... but I ask because I prefer to get some
> knowledge from the Cyrus gurus... :) :)
> 
> Cheers!
> 
> --- 
> 
> EGOITZ AURREKOETXEA 
> Departamento de sistemas 
> 
> 944 209 470 
> Parque Tecnológico. Edificio 103 
> 48170 Zamudio (Bizkaia) 
> ego...@sarenet.es
> 
> www.sarenet.es
> 
> On 03-01-2019 16:32, Egoitz Aurrekoetxea wrote:
> 
> Good afternoon, 
> 
> Is it possible to launch several instances of
> "/usr/local/cyrus/bin/sync_client -S DEST-HOST -v -u EMAIL" in parallel?
> Doing it one mailbox at a time takes ages. It would help me a lot to
> parallelize and avoid disk bottleneck issues.
> 
> I think it should be possible... isn't it? Perhaps it is only allowed between
> the same version on source and dest? Or can it be done, for instance, with a
> 2.4 master and a 3.0 slave?
> 
> Cheers!!
> 
> -- 
> 
> EGOITZ AURREKOETXEA 
> Departamento de sistemas 
> 
> 944 209 470 
> Parque Tecnológico. Edificio 103 
> 48170 Zamudio (Bizkaia) 
> ego...@sarenet.es
> 
> www.sarenet.es
>  

-- 
  Bron Gondwana, CEO, FastMail Pty Ltd 
  br...@fastmailteam.com 



Re: Question about manual replication (-u )

2019-01-08 Thread Bron Gondwana
Yep, that's totally safe. Even doing the same user twice at the same time 
should be safe, though it may do extra work.

Bron.
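A sketch of one way to fan that out from a list of users, one per line (the
list file and the binary path are placeholders):

   # run four per-user syncs at a time
   xargs -P 4 -n 1 /usr/local/cyrus/bin/sync_client -S DEST-HOST -v -u < users.txt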

On Tue, Jan 8, 2019, at 05:05, Egoitz Aurrekoetxea wrote:
> Good afternoon,
> 
> I know it seems a pretty stupid question, but some time ago you could not
> have a Cyrus server acting, for instance, as both a master and a slave... it
> was not supported... it worked... but was not supported... so having multiple
> sync_client instances... could perhaps damage something (although I suppose
> not). I'll probably have the same doubt for multiple parallel
> ctl_conversations -z and -b commands...
> 
> I know it seems ridiculous... but I ask because I prefer to get some
> knowledge from the Cyrus gurus... :) :)
> 
> Cheers!


> ---
> 
> Egoitz Aurrekoetxea
> Departamento de sistemas
> 944 209 470
> Parque Tecnológico. Edificio 103
> 48170 Zamudio (Bizkaia)
> ego...@sarenet.es
> www.sarenet.es
> 


> On 03-01-2019 16:32, Egoitz Aurrekoetxea wrote:
> 
>> Good afternoon,
>> 
>> Is it possible to launch several instances of
>> "/usr/local/cyrus/bin/sync_client -S DEST-HOST -v -u EMAIL" in parallel?
>> Doing it one mailbox at a time takes ages. It would help me a lot to
>> parallelize and avoid disk bottleneck issues.
>> 
>> I think it should be possible... isn't it? Perhaps it is only allowed
>> between the same version on source and dest? Or can it be done, for
>> instance, with a 2.4 master and a 3.0 slave?
>> 
>> Cheers!!
>> 
>> -- 
>> Egoitz Aurrekoetxea
>> Departamento de sistemas
>> 944 209 470
>> Parque Tecnológico. Edificio 103
>> 48170 Zamudio (Bizkaia)
>> ego...@sarenet.es
>> www.sarenet.es
> 

--
 Bron Gondwana, CEO, FastMail Pty Ltd
 br...@fastmailteam.com


Re: Question about manual replication (-u )

2019-01-07 Thread Egoitz Aurrekoetxea
Good afternoon,

I know it seems a pretty stupid question, but some time ago you could not
have a Cyrus server acting, for instance, as both a master and a slave... it
was not supported... it worked... but was not supported... so having multiple
sync_client instances... could perhaps damage something (although I suppose
not). I'll probably have the same doubt for multiple parallel
ctl_conversations -z and -b commands...

I know it seems ridiculous... but I ask because I prefer to get some
knowledge from the Cyrus gurus... :) :)

Cheers!

---

EGOITZ AURREKOETXEA 
Departamento de sistemas 
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia) 
ego...@sarenet.es 
www.sarenet.es

On 03-01-2019 16:32, Egoitz Aurrekoetxea wrote:

> Good afternoon, 
> 
> Is it possible to launch several instances of
> "/usr/local/cyrus/bin/sync_client -S DEST-HOST -v -u EMAIL" in parallel?
> Doing it one mailbox at a time takes ages. It would help me a lot to
> parallelize and avoid disk bottleneck issues.
> 
> I think it should be possible... isn't it? Perhaps it is only allowed between
> the same version on source and dest? Or can it be done, for instance, with a
> 2.4 master and a 3.0 slave?
> 
> Cheers!!
> 
> -- 
> 
> EGOITZ AURREKOETXEA 
> Departamento de sistemas 
> 944 209 470
> Parque Tecnológico. Edificio 103
> 48170 Zamudio (Bizkaia) 
> ego...@sarenet.es 
> www.sarenet.es
 


Re: Simple replication question

2018-11-15 Thread Nic Bernstein

On 11/15/18 2:16 AM, Zorg wrote:

I've one Cyrus IMAP server and I want to create a replicated one.

I have read the documentation but nothing explains how to start the
first replication.

If my slave (replica) is empty, how can I synchronise them the first time?


Once you've got replication configured, simply follow the instructions 
in the Standard Operating Procedures for "Manual Replication" here:

https://cyrusimap.org/imap/reference/admin/sop/replication.html?highlight=replication#manual-replication

To be clear, the "sync_client" command is run on the replication master.
The "sync_server" on the replica will be automatically started up by the
'cyr_master' process (may be called 'cyrmaster' or simply 'master',
depending on version and distro).  The arguments and options in the
sample command will sync a given user, but you may use any of the
various options to sync the entire mail store, or parts of it.
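For example, a first full sync of a single user, run on the master, might
look like this (the binary path and hostname vary by distro; treat this as a
sketch of the SOP sample, not an exact command):

   /usr/lib/cyrus/bin/sync_client -S replica.example.com -v -u jdoe

Repeat per user (or use -A for all users) until the empty replica is
populated; after that, rolling replication keeps it current.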


Cheers,
    -nic

--
Nic Bernstein n...@onlight.com
Onlight Inc.  www.onlight.com
6525 W Bluemound Rd., Ste 24  v. 414.272.4477
Milwaukee, Wisconsin  53213-4073  f. 414.290.0335



Simple replication question

2018-11-15 Thread Zorg

Hello

I've one Cyrus IMAP server and I want to create a replicated one.

I have read the documentation but nothing explains how to start the
first replication.

If my slave (replica) is empty, how can I synchronise them the first time?

Thanks



master-master replication

2018-09-13 Thread Evgeniy Kononov via Info-cyrus
Thank you for your experience.

Best regards!


>Thursday, 13 September 2018, 19:26 +05:00 from Michael Menge:
>
>Quoting Evgeniy Kononov < egen...@inbox.ru >:
>
>> Hi!
>>
>> Thank you for reply.
>> Users can connect to only one server at a time. I move the master 
>> server to another hardware and at this time it is necessary for 
>> users to use the mail.
>> If this is not a secure configuration, then can I just run 
>> "sync_client -A" from the master server, and then switch users to a 
>> replica?
>> After that, swap the roles of master-replica between the servers? I'm right ?
>>
>>> We use cyrus aggregator aka cyrus murder, and AFAIK fastmail also uses 
>>> multiple
>>> instances on one server with nginx frontends
>>
>> Can you give an example of the configuration?
>
>Sure,
>
>first of some background Infos:
>
>We recently switched from Cyrus 2.4.20 on SLES 11 SP4 to Cyrus 3.0.8 
>on RHEL 7.5 so consult
>the man pages for your version.
>
>Our Mailserver are running as 6 KVM VMs (RHEV) with 20 GB Ram, 8 Cores each on
>two locations. We have a total of ~44000 accounts, ~457000 Mailboxes, 
>and 2x6.5 TB Mails
>
>Each server is running 3-4 instances. One frontend, two backend/replic
>and on one of the servers the cyrus mupdate master. Each Server on one
>location is paired with one server on the other location for replication
>so in normal operation one backend on server A replicates to a replic on
>server B and the backend on server B replicates to the replica on server A.
>
>Keepalived and ipvs loadbalancer distribute the the load to the 
>frontend servers.
>We use a private subnet for our backend and replic und mupdate instances and a
>service ip address for the frontends.
>
>We move the ip address with the role, so that ma01.mail.localhost on server A
>replicate to sl01.mail.localhost on server B. But if we need to switch 
>to the replic
>we will start it with ma01.mail.localhost on server B
>
>Keeping the master instance for mailbox on the same IP is important, 
>because updating the
>location for all mailboxes in the mupdate master would take to long. 
>(the mupdate protocol
>knows nothing about replication)
>
>
>The main trick to run multiple instances on one server is to use 
>different cyrus.conf
>and imapd.conf files for each instance. We use cyrus_INSTANCE.conf and 
>imapd_INSTANCE.conf
>where INSTANCE is replaced by mu for mupdate, fe for the frontend, be 
>for the first
>backend/replic and re of the second backend/replic
>
>The choosing of "be" and "re" was not the best as it is easily 
>confused with the role
>in wich each of these instances can run.
>
>The masterproces is started with "master -C /etc/imapd_INSTANCE.conf 
>-M /etc/cyrus_INSTANCE.conf -p /var/run/cyrus_instance.pid"
>and in the cyrus_INSTANCE.conf you must also use "-C 
>/etc/imapd_INSTANCE.conf" service, start and event
>"cmd" so that the correct conf file is used. For services you also 
>have to configure "listen="
>so that each instance has its own ip to listen on as only one process 
>can listen on 0.0.0.0 for each port.
>In the imapd_INSTANC.conf many directories must be configured.
>
>We generate the conf files from templates. Where TYPE = INSTANCES
>Here are the main parts of our templates
>
>
>== Cyrus Master 
># cyrus_@@TYPE@@.conf
># Template MD5SUM: @@MD5SUM@@
>
>START {
>@@TYPE@@recover cmd="ctl_cyrusdb -r -C /etc/imapd_@@TYPE@@.conf"
>@@TYPE@@mupdatepush cmd="ctl_mboxlist -m -a -C /etc/imapd_@@TYPE@@.conf"
>@@TYPE@@idled cmd="idled -C /etc/imapd_@@TYPE@@.conf"
>}
>
>SERVICES {
>@@TYPE@@imapcmd="imapd -U 50 -C /etc/imapd_@@TYPE@@.conf" 
>listen="@@HOSTNAME@@:imap" prefork=1 maxfds=1024
>@@TYPE@@imaps   cmd="imapd -U 50 -s -C 
>/etc/imapd_@@TYPE@@.conf" listen="@@HOSTNAME@@:imaps" prefork=1 
>maxfds=1024
>@@TYPE@@pop3cmd="pop3d -C /etc/imapd_@@TYPE@@.conf" 
>listen="@@HOSTNAME@@:pop3" prefork=1 maxfds=1024
>@@TYPE@@pop3s   cmd="pop3d -s -C /etc/imapd_@@TYPE@@.conf" 
>listen="@@HOSTNAME@@:pop3s" prefork=1 maxfds=1024
>@@TYPE@@sieve   cmd="timsieved -C /etc/imapd_@@TYPE@@.conf" 
>listen="@@HOSTNAME@@:sieve" prefork=0 maxfds=1024
>@@TYPE@@lmtpcmd="lmtpd -U 5 -C /etc/imapd_@@TYPE@@.conf" 
>listen="@@HOSTNAME@@:lmtp" prefork=1 maxfds=1024
>@@TYPE@@lmtpunixcmd="lmtpd -U 5 -C /etc/imapd_@@TYPE@@.conf" 
>listen="/srv/cyrus-@@TYPE@@/socket/lmtp" prefork=1 maxfds=1024

Re: master-master replication

2018-09-13 Thread Michael Menge

Quoting Evgeniy Kononov :


Hi!

Thank you for reply.
Users can connect to only one server at a time. I move the master  
server to another hardware and at this time it is necessary for  
users to use the mail.
If this is not a secure configuration, then can I just run  
"sync_client -A" from the master server, and then switch users to a  
replica?

After that, swap the roles of master-replica between the servers? I'm right ?


We use the cyrus aggregator aka cyrus murder, and AFAIK fastmail also runs
multiple instances on one server with nginx frontends


Can you give an example of the configuration?


Sure,

First, some background info:

We recently switched from Cyrus 2.4.20 on SLES 11 SP4 to Cyrus 3.0.8
on RHEL 7.5, so consult the man pages for your version.

Our mail servers run as 6 KVM VMs (RHEV) with 20 GB RAM and 8 cores each,
across two locations. We have a total of ~44000 accounts, ~457000 mailboxes,
and 2x6.5 TB of mail.


Each server runs 3-4 instances: one frontend, two backend/replica,
and on one of the servers the cyrus mupdate master. Each server in one
location is paired with one server in the other location for replication,
so in normal operation one backend on server A replicates to a replica on
server B and the backend on server B replicates to the replica on server A.

Keepalived and an IPVS load balancer distribute the load to the
frontend servers.

We use a private subnet for our backend, replica and mupdate instances, and a
service IP address for the frontends.

We move the IP address with the role, so that ma01.mail.localhost on server A
replicates to sl01.mail.localhost on server B. But if we need to switch
to the replica, we will start it with ma01.mail.localhost on server B.

Keeping the master instance for a mailbox on the same IP is important,
because updating the location for all mailboxes in the mupdate master would
take too long (the mupdate protocol knows nothing about replication).


The main trick to running multiple instances on one server is to use 
different cyrus.conf
and imapd.conf files for each instance. We use cyrus_INSTANCE.conf and 
imapd_INSTANCE.conf,
where INSTANCE is replaced by mu for mupdate, fe for the frontend, be 
for the first
backend/replica and re for the second backend/replica.

Choosing "be" and "re" was not the best idea, as they are easily 
confused with the roles
in which each of these instances can run.

The master process is started with "master -C /etc/imapd_INSTANCE.conf 
-M /etc/cyrus_INSTANCE.conf -p /var/run/cyrus_INSTANCE.pid"
and in cyrus_INSTANCE.conf you must also add "-C 
/etc/imapd_INSTANCE.conf" to every service, start and event
"cmd" so that the correct conf file is used. For services you also 
have to configure "listen="
so that each instance has its own IP to listen on, as only one process 
can listen on 0.0.0.0 for each port.

In the imapd_INSTANCE.conf many directories must be configured.
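(A minimal sketch of the kind of per-instance directory settings meant
here -- the option names are standard imapd.conf options, but the paths
are illustrative, not the poster's actual file:)

# /etc/imapd_be.conf
configdirectory: /srv/cyrus-be/config
partition-default: /srv/cyrus-be/spool
lmtpsocket: /srv/cyrus-be/socket/lmtp
idlesocket: /srv/cyrus-be/socket/idle
notifysocket: /srv/cyrus-be/socket/notify
syslog_prefix: cyrus-be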

We generate the conf files from templates, where @@TYPE@@ is replaced by
the instance name. Here are the main parts of our templates:


== Cyrus Master 
# cyrus_@@TYPE@@.conf
# Template MD5SUM: @@MD5SUM@@

START {
   @@TYPE@@recover cmd="ctl_cyrusdb -r -C /etc/imapd_@@TYPE@@.conf"
   @@TYPE@@mupdatepush cmd="ctl_mboxlist -m -a -C /etc/imapd_@@TYPE@@.conf"
   @@TYPE@@idled cmd="idled -C /etc/imapd_@@TYPE@@.conf"
}

SERVICES {
   @@TYPE@@imap    cmd="imapd -U 50 -C /etc/imapd_@@TYPE@@.conf" 
listen="@@HOSTNAME@@:imap" prefork=1 maxfds=1024
   @@TYPE@@imaps   cmd="imapd -U 50 -s -C 
/etc/imapd_@@TYPE@@.conf" listen="@@HOSTNAME@@:imaps" prefork=1 
maxfds=1024
   @@TYPE@@pop3    cmd="pop3d -C /etc/imapd_@@TYPE@@.conf" 
listen="@@HOSTNAME@@:pop3" prefork=1 maxfds=1024
   @@TYPE@@pop3s   cmd="pop3d -s -C /etc/imapd_@@TYPE@@.conf" 
listen="@@HOSTNAME@@:pop3s" prefork=1 maxfds=1024
   @@TYPE@@sieve   cmd="timsieved -C /etc/imapd_@@TYPE@@.conf" 
listen="@@HOSTNAME@@:sieve" prefork=0 maxfds=1024
   @@TYPE@@lmtp    cmd="lmtpd -U 5 -C /etc/imapd_@@TYPE@@.conf" 
listen="@@HOSTNAME@@:lmtp" prefork=1 maxfds=1024
   @@TYPE@@lmtpunix cmd="lmtpd -U 5 -C /etc/imapd_@@TYPE@@.conf" 
listen="/srv/cyrus-@@TYPE@@/socket/lmtp" prefork=1 maxfds=1024

}

EVENTS {
   @@TYPE@@checkpoint cmd="ctl_cyrusdb -c -C 
/etc/imapd_@@TYPE@@.conf" period=30
   @@TYPE@@delprune  cmd="cyr_expire -E 3 -X 60 -D 60 -C  
/etc/imapd_@@TYPE@@.conf" at=0100

   @@TYPE@@tlsprune  cmd="tls_prune -C /etc/imapd_@@TYPE@@.conf" at=0430
   @@TYPE@@squatter  cmd="squatter -C /etc/imapd_@@TYPE@@.conf -i" at=2200
}

=== Cyrus Replica ==
# cyrus_@@TYPE@@.conf
# Template MD5SUM: @@MD5SUM@@

START {
   @@TYPE@@recover cmd="ctl_cyrusdb -r -C /etc/imapd_@@TYPE@@.conf"
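(The archived template breaks off above; a minimal sketch of how a
replica-side template typically continues, modelled on the syncserver
entries quoted later in this archive -- illustrative, not the poster's
actual file:)

}

SERVICES {
   @@TYPE@@syncserver cmd="sync_server -C /etc/imapd_@@TYPE@@.conf" listen="@@HOSTNAME@@:csync" prefork=0 maxfds=1024
}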

Re: master-master replication

2018-09-13 Thread Evgeniy Kononov via Info-cyrus
Hi!

Thank you for the reply.
Users can connect to only one server at a time. I am moving the master server to 
other hardware, and during that time users still need access to their mail.
If this is not a secure configuration, then can I just run "sync_client -A" 
from the master server, and then switch users to the replica? 
After that, swap the roles of master and replica between the servers? Am I right?

>We use cyrus aggregator aka cyrus murder, and AFAIK fastmail also uses 
>multiple
>instances on one server with nginx frontends

Can you give an example of the configuration?

Best regards.

>Thursday, 13 September 2018, 13:22 +05:00 from Michael Menge 
>:
>
>Hi,
>
>This setup is NOT SUPPORTED and WILL BREAK if the replication process 
>is triggered
>from the wrong server (user is active on both servers, user switched 
>from one server
>to the other while the sync-log file is still being processed, after split 
>brain) and
>some mailboxes have been subscribed, renamed, created or deleted.
>
>Also there is the risk of a race condition with subscriptions: if a 
>user subscribes
>to multiple folders, the first will trigger a sync from A to B, but as 
>the folder
>is subscribed on B it will trigger a sync from B to A, which can undo the next
>folder subscription.
>
>These are only some cases that came to my mind. There will be more 
>cases and it
>will be hard to debug. So DON'T DO IT!
>
>What we do is distribute our users between multiple
>instances, and each server is running one instance as master and one other
>as replica. In case of failure or maintenance we stop the master instance, and
>promote the corresponding replica and configure them so that they sync
>back. Once the old master is up to date we switch them back.
>
>We use cyrus aggregator aka cyrus murder, and AFAIK fastmail also uses 
>multiple
>instances on one server with nginx frontends
>
>Regards,
>
>Michael
>
>
>
>
>
>Quoting Evgeniy Kononov via Info-cyrus < info-cyrus@lists.andrew.cmu.edu >:
>
>> Sorry! Previous message was sent by mistake.
>>
>> For example, I can configure both servers as follows.
>>
>> Server A.
>> -
>> /etc/cyrus.conf
>> START {
>> ...
>> syncclient       cmd="sync_client -r"
>> ...
>> }
>> SERVICES {
>> ...
>> syncserver       cmd="sync_server" listen="csync"
>> ...
>> }
>>
>> /etc/imapd.conf
>> ...
>> sync_host: SERVER-B
>> sync_authname: admin
>> sync_password: password
>> sync_log: 1
>> sync_repeat_interval: 30
>> sync_timeout: 600
>> sync_shutdown_file: /var/lib/imap/syncstop
>>
>> And the same on server B.
>> -
>> /etc/cyrus.conf
>> START {
>> ...
>> syncclient       cmd="sync_client -r"
>> ...
>> }
>> SERVICES {
>> ...
>> syncserver       cmd="sync_server" listen="csync"
>> ...
>> }
>>
>> /etc/imapd.conf
>> ...
>> sync_host: SERVER-A
>> sync_authname: admin
>> sync_password: password
>> sync_log: 1
>> sync_repeat_interval: 30
>> sync_timeout: 600
>> sync_shutdown_file: /var/lib/imap/syncstop
>> Both servers will act as master and as slave at the same time.
>>
>> Will there be any problems with this configuration?
>> Thank you. --
>> Evgeniy Kononov
>
>
>
>
>M.Menge   Tel.: (49) 7071/29-70316
>Universität Tübingen   Fax.: (49) 7071/29-5912
>Zentrum für Datenverarbeitung  mail: 
>michael.me...@zdv.uni-tuebingen.de
>Wächterstraße 76
>72074 Tübingen
>
>
>Cyrus Home Page:  http://www.cyrusimap.org/
>List Archives/Info:  http://lists.andrew.cmu.edu/pipermail/info-cyrus/
>To Unsubscribe:
>https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

-- 
Evgeniy Kononov

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Re[3]: master-master replication

2018-09-13 Thread Michael Menge

Hi,

This setup is NOT SUPPORTED and WILL BREAK if the replication process 
is triggered
from the wrong server (user is active on both servers, user switched 
from one server
to the other while the sync-log file is still being processed, after split 
brain) and

some mailboxes have been subscribed, renamed, created or deleted.

Also there is the risk of a race condition with subscriptions: if a 
user subscribes
to multiple folders, the first will trigger a sync from A to B, but as 
the folder

is subscribed on B it will trigger a sync from B to A, which can undo the next
folder subscription.

These are only some cases that came to my mind. There will be more  
cases and it

will be hard to debug. So DON'T DO IT!

What we do is distribute our users between multiple
instances, and each server is running one instance as master and one other
as replica. In case of failure or maintenance we stop the master instance, and
promote the corresponding replica and configure them so that they sync
back. Once the old master is up to date we switch them back.
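(A minimal sketch of that promotion, assuming the sync_shutdown_file and
tool paths shown elsewhere in this thread -- the exact steps depend on
the deployment:)

  # on the retiring master: stop rolling replication, then flush
  touch /var/lib/imap/syncstop                        # sync_shutdown_file
  su cyrus -c "/usr/lib/cyrus/bin/sync_client -v -A"  # push pending changes
  # then stop the master instance, start the former replica in the master
  # role (with the master's service IP), and reverse sync_host so changes
  # flow back before switching users over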

We use cyrus aggregator aka cyrus murder, and AFAIK fastmail also uses  
multiple

instances on one server with nginx frontends

Regards,

   Michael





Quoting Evgeniy Kononov via Info-cyrus :


Sorry! Previous message was sent by mistake.

For example, I can configure both servers as follows.

Server A.
-
/etc/cyrus.conf
START {
...
syncclient       cmd="sync_client -r"
...
}
SERVICES {
...
syncserver       cmd="sync_server" listen="csync"
...
}

/etc/imapd.conf
...
sync_host: SERVER-B
sync_authname: admin
sync_password: password
sync_log: 1
sync_repeat_interval: 30
sync_timeout: 600
sync_shutdown_file: /var/lib/imap/syncstop

And the same on server B.
-
/etc/cyrus.conf
START {
...
syncclient       cmd="sync_client -r"
...
}
SERVICES {
...
syncserver       cmd="sync_server" listen="csync"
...
}

/etc/imapd.conf
...
sync_host: SERVER-A
sync_authname: admin
sync_password: password
sync_log: 1
sync_repeat_interval: 30
sync_timeout: 600
sync_shutdown_file: /var/lib/imap/syncstop
Both servers will act as master and as slave at the same time.

Will there be any problems with this configuration?
Thank you. --
Evgeniy Kononov





M.Menge   Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail:  
michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re[3]: master-master replication

2018-09-13 Thread Evgeniy Kononov via Info-cyrus
Sorry! Previous message was sent by mistake.

For example, I can configure both servers as follows.

Server A.
-
/etc/cyrus.conf
START {
...
syncclient       cmd="sync_client -r"
...
}
SERVICES {
...
syncserver       cmd="sync_server" listen="csync"
...
}

/etc/imapd.conf
...
sync_host: SERVER-B
sync_authname: admin
sync_password: password
sync_log: 1
sync_repeat_interval: 30
sync_timeout: 600
sync_shutdown_file: /var/lib/imap/syncstop

And the same on server B.
-
/etc/cyrus.conf
START {
...
syncclient       cmd="sync_client -r"
...
}
SERVICES {
...
syncserver       cmd="sync_server" listen="csync"
...
}

/etc/imapd.conf
...
sync_host: SERVER-A
sync_authname: admin
sync_password: password
sync_log: 1
sync_repeat_interval: 30
sync_timeout: 600
sync_shutdown_file: /var/lib/imap/syncstop
Both servers will act as master and as slave at the same time.

Will there be any problems with this configuration?
Thank you. -- 
Evgeniy Kononov

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re[2]: master-master replication

2018-09-13 Thread Evgeniy Kononov via Info-cyrus
For example, on server A 
-- 
Evgeniy Kononov

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re[2]: master-master replication

2018-09-12 Thread Evgeniy Kononov via Info-cyrus

Wait, but if I create a folder on the master, it syncs perfectly to the 
replica. Also, when I delete the folder on the master, it is deleted on the 
replica as well. That means information about folder subscriptions is 
transmitted during synchronisation, but it only works if the sync client is 
the server where the folder was created. Then this should work in the opposite 
direction too. Nothing prevents me from configuring both servers as sync master 
and sync client at the same time. The only question is whether there will be a 
loop under such a scheme?

Wednesday, 12 September 2018, 16:56 +05:00 from Bron Gondwana :
>Yes!  This is on our roadmap, and I really hope to land it before we release 
>3.2.
>
>The subscriptions are a particularly tricky part of it, because there's 
>currently no change information in the subscriptions database, but I'll make 
>sure that gets added so we can tell if it's a subscription add or subscription 
>remove!
>
>I'm really looking forward to proper master/master safety too :)
>
>Cheers,
>
>Bron.
>
>On Wed, Sep 12, 2018, at 20:10, Evgeniy Kononov via Info-cyrus wrote:
>>Hello!
>>
>>I have two servers with cyrus-imapd 
>>cyrus-imapd-2.5.8-13.3.el7.centos.kolab_16.x86_64
>>One server as master and second as replica.
>>All worked fine while users logged in on the master server, but when I 
>>temporarily moved users to the replica I found some trouble:
>>message synchronisation from replica to master goes fine if sync_client sees 
>>a mismatch on the master, but if a user creates a folder on the replica it 
>>isn't synced to the master.
>>Instead, the folder is unsubscribed on the master server and removed from 
>>both servers.
>>
>>grep UNSUB maillog
>>Sep 10 13:31:41 master sync_client[1456]: UNSUB  u...@example.com 
>>example.com!user.user.foldername
>>
>>Why does this happen?
>>When I tried a manual sync from replica to master, the folder was subscribed.
>>Is it possible for both servers to be master and replica at the same time?
>>
>>Thank you.
>>
>>-- 
>>Evgeniy Kononov 
>>
>>Cyrus Home Page:  http://www.cyrusimap.org/
>>List Archives/Info:  http://lists.andrew.cmu.edu/pipermail/info-cyrus/
>>To Unsubscribe:
>>https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus
>
>--
>  Bron Gondwana, CEO, FastMail Pty Ltd
>   br...@fastmailteam.com
>
>
>
>Cyrus Home Page:  http://www.cyrusimap.org/
>List Archives/Info:  http://lists.andrew.cmu.edu/pipermail/info-cyrus/
>To Unsubscribe:
>https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

-- 
Evgeniy Kononov

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: master-master replication

2018-09-12 Thread Michael Menge

Hi,



Quoting Evgeniy Kononov via Info-cyrus :


Hello!

I have two servers with  
cyrus-imapd cyrus-imapd-2.5.8-13.3.el7.centos.kolab_16.x86_64

One server as master and the second as replica.
All worked fine while users logged in on the master server, but when I 
temporarily moved users to the replica I found some trouble:
message synchronisation from replica to master goes fine if 
sync_client sees a mismatch on the master, but if a user creates a folder 
on the replica it isn't synced to the master.
Instead, the folder is unsubscribed on the master server and 
removed from both servers.


grep UNSUB maillog
Sep 10 13:31:41 master sync_client[1456]: UNSUB u...@example.com  
example.com!user.user.foldername


Why does this happen?
When I tried a manual sync from replica to master, the folder was 
subscribed. Is it possible for both servers to be master and 
replica at the same time?




Cyrus does not support master-master replication. Because of
CONDSTORE (https://tools.ietf.org/html/rfc4551) Cyrus is able
to handle messages in a master-master setup, but the information
about folder operations is not tracked, so cyrus is unable to
distinguish whether a folder was subscribed on one server or
unsubscribed on the other. The same is true for folder
creation/rename/deletion.

Master-master replication is a wanted feature, but even the 3.0.x
series does not support it. I am not sure about the upcoming 3.1.x or master
branch.


Regards,

   Michael




M.Menge   Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail:  
michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

master-master replication

2018-09-12 Thread Evgeniy Kononov via Info-cyrus
Hello!

I have two servers with cyrus-imapd 
cyrus-imapd-2.5.8-13.3.el7.centos.kolab_16.x86_64
One server as master and the second as replica.
All worked fine while users logged in on the master server, but when I temporarily 
moved users to the replica I found some trouble:
message synchronisation from replica to master goes fine if sync_client sees a 
mismatch on the master, but if a user creates a folder on the replica it isn't 
synced to the master.
Instead, the folder is unsubscribed on the master server and removed from 
both servers.

grep UNSUB maillog
Sep 10 13:31:41 master sync_client[1456]: UNSUB u...@example.com 
example.com!user.user.foldername

Why does this happen?
When I tried a manual sync from replica to master, the folder was subscribed. 
Is it possible for both servers to be master and replica at the same time?

Thank you.

-- 
Evgeniy Kononov
Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: switch master/slave replication

2018-06-27 Thread Antonio Conte
* 27/06/2018, Bron Gondwana wrote :
>Yep, that will be enough.  The only thing it might not catch is if
>there are users on the replica which aren't present on the master (for
>whatever reason)... in that case, they will remain on the replica
>still.

OK, I'll check, but I don't think I have other users on the replica.

thanks a lot for reply

Antonio

-- 
Never try to teach a pig to sing.
It wastes your time and annoys the pig.

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Re: switch master/slave replication

2018-06-27 Thread Bron Gondwana
Yep, that will be enough.  The only thing it might not catch is if there
are users on the replica which aren't present on the master (for
whatever reason)... in that case, they will remain on the replica still.
Bron.
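(A minimal sketch of that final pass plus a check for replica-only users,
assuming ctl_mboxlist is available on both machines -- paths and the ssh
step are illustrative:)

  # final one-shot run from the master
  su cyrus -c "/usr/lib/cyrus/bin/sync_client -v -A"
  # dump the mailbox lists on both sides and show entries only on the
  # replica (trim ctl_mboxlist -d output to the name column if needed)
  su cyrus -c "ctl_mboxlist -d" | sort > /tmp/master.list
  ssh replica 'su cyrus -c "ctl_mboxlist -d"' | sort > /tmp/replica.list
  comm -13 /tmp/master.list /tmp/replica.list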


On Wed, Jun 27, 2018, at 13:26, Antonio Conte wrote:
> hi all,
> 
> I have to switch a master/replica in rolling replication.
> 
> How can I be sure the replication is completely done? Is it
> enough to run
> 
> /usr/lib/cyrus/sync_client -v -A
> 
> from the master? And then, which operations do I have to do?
> 
> thanks in advance
> 
> --
> Never try to teach a pig to sing.
> It wastes your time and annoys the pig.
> 
> Cyrus Home Page: http://www.cyrusimap.org/
> List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/> To 
> Unsubscribe:
> https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

--
  Bron Gondwana, CEO, FastMail Pty Ltd
  br...@fastmailteam.com



Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

switch master/slave replication

2018-06-26 Thread Antonio Conte
hi all,

I have to switch a master/replica in rolling replication.

How can I be sure the replication is completely done? Is it
enough to run

/usr/lib/cyrus/sync_client -v -A

from the master? And then, which operations do I have to do?

thanks in advance

-- 
Never try to teach a pig to sing.
It wastes your time and annoys the pig.

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Re: Backup vs replication

2018-05-20 Thread ellie timoney
Hi Albert,

The main logical difference between ordinary replication and the experimental 
backup system in Cyrus 3.0 is that in a replicated system, the replica is a 
copy of the account's current state (as of the last replication).  The backup 
system is a historical record, not just a current state (with configurable 
expiry period for deleted content).

The main storage difference is that the backup system is two-files-per-user, 
whereas replication with normal mailboxes is one-file-per-message.  So the 
backup system has radically lower inode usage, and the gzip compression can 
benefit from seeing the entire user contents (depending on your users' usage 
patterns).

The backup system is mainly intended for organisations who keep their backups 
on online HDD storage, and who want to condense backups for a relatively large 
number of mail servers down to a relatively small number of backup servers 
(hence the massive inode reduction).  If you plan to use tape or other offline 
storage for backups, just use a normal replica so you can stop cyrus during 
backups without causing downtime, and dump its filesystem(s) to tape.
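(A minimal sketch of that replica-dump approach -- the service name and
paths are illustrative and vary by distribution:)

  # on the replica only; the master keeps serving users
  systemctl stop cyrus-imapd
  tar -czf /backup/imap-$(date +%F).tar.gz /var/spool/imap /var/lib/imap
  systemctl start cyrus-imapd
  # the master's rolling sync_client retries and catches up afterwards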

The backup system is also experimental.  Please do not use it as your sole 
backup mechanism.

Cheers,

ellie

On Sat, May 19, 2018, at 6:46 AM, Albert Shih wrote:
> Hi everyone,
> 
> I'm not sure I really understand what's the benefit of backup (cyrus>3.x) vs
> replication ?
> 
> Is the main goal to save disk space with compression ? Fewer inodes (with
> large files) ?
> 
> I believe adding the backup feature to cyrus-imapd was/is still a lot of
> work. So what's the advantage of backup vs replication over a compressed fs ?
> 
> Regards
> --
> Albert SHIH
> DIO bâtiment 15
> Observatoire de Paris
> xmpp: j...@obspm.fr
> Heure local/Local time:
> Fri May 18 22:42:34 CEST 2018
> 
> Cyrus Home Page: http://www.cyrusimap.org/
> List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
> To Unsubscribe:
> https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Backup vs replication

2018-05-18 Thread Albert Shih
Hi everyone,

I'm not sure I really understand what's the benefit of backup (cyrus>3.x) vs
replication ?

Is the main goal to save disk space with compression ? Fewer inodes (with
large files) ?

I believe adding the backup feature to cyrus-imapd was/is still a lot of
work. So what's the advantage of backup vs replication over a compressed fs ?

Regards
--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
xmpp: j...@obspm.fr
Heure local/Local time:
Fri May 18 22:42:34 CEST 2018

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Re: Annotation replication with sync_client -u seems busted in 2.4.X

2018-03-20 Thread ellie timoney
Hi John,

This was already fixed in 2.5.0 and later [1], but a lot changed to enable it, 
so that fix was too invasive to backport.  

Your patch looks like a good solution for 2.4!  It's now on the cyrus-imapd-2.4 
branch :)

Cheers,

ellie

[1] 
https://github.com/cyrusimap/cyrus-imapd/commit/f0aa1b38c46722c0203bd0d9630968872a3cda4c

On Wed, Mar 21, 2018, at 5:11 AM, John Capo wrote:
> Replicating annotations when sync_client -u is used to move mailboxes to 
> a different
> server does not work in 2.4.20 and probably not in 2.5.X either.  At 
> least I can't find
> any place in the 2.5 code that replicates folder annotations.
> 
> Annotation replication does work in rolling replication mode.
> 
> Or have I busted it with other mods I make?
> 
> Patch attached that fixes it for me.
> 
> John Capo
> 
> Cyrus Home Page: http://www.cyrusimap.org/
> List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
> To Unsubscribe:
> https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus
> Email had 1 attachment:
> + sync_client.c-patch
>   1k (application/octet-stream)

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Annotation replication with sync_client -u seems busted in 2.4.X

2018-03-20 Thread John Capo
Replicating annotations when sync_client -u is used to move mailboxes to a 
different
server does not work in 2.4.20 and probably not in 2.5.X either.  At least I 
can't find
any place in the 2.5 code that replicates folder annotations.

Annotation replication does work in rolling replication mode.

Or have I busted it with other mods I make?

Patch attached that fixes it for me.

John Capo


sync_client.c-patch
Description: Binary data

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Synchronous Replication reg.

2017-08-21 Thread Anant Athavale
Dear Members,

I am already running Cyrus-IMAP for storing mailboxes with quota and sieve
filter features on RHEL 7.

I have the following requirement.

1. The same mailboxes also should be accessible from another site and that
site also should run Cyrus-IMAP ( a kind of replication).
2. The mailbox, quota and sieve filters should get updated on both the
sites irrespective of which Cyrus-IMAP server a client is pointing to.
3. Mail delivery will happen on one server only.

Does Cyrus-IMAP support this feature? If yes, from which version on is this
feature supported? Which feature of Cyrus-IMAP needs to be enabled to
make it work? Is it already being used in production sites?

Your guidance / pointers required.
-- 
anant athavale.

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: What is the state of Master-Master replication?

2017-07-28 Thread Michael Sofka

On 07/26/2017 04:54 PM, Michael Sofka wrote:
A while back there was some discussion of supporting Master-Master 
replication in Cyrus.  I'm busy updating from 2.4.17 to 3.0.2.  What is 
the state of Master-Master, as opposed to Master-Replica replications?


My current configuration is a Murder cluster with three front-end 
servers, two back-end servers, each with their own replication server. 
One pair of Master-Replica are empty of mailboxes at the moment, prior 
to upgrade.


Continuing my question, what I am thinking is:

   Create imapd.conf files on each server:

 Server be1: partition-default /var/spool/imap1
 Server be2: partition-default /var/spool/imap2

  Create replication.conf files on each server:

 Server be1: partition-default /var/spool/imap2
 Server be2: partition-default /var/spool/imap1

replication.conf would be used for the syncserver process on each be server.

The idea is that the imap processes know of one set of partitions, and 
the replication process knows of the other.  Each mupdatepush process 
will only inform the Cyrus front-end master of its own master partition. 
 The IMAP processes will only place their own partitions in the 
synchronization log.
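(A minimal sketch of the two-config idea for be1, borrowing the syncserver
entry style used elsewhere in this archive -- the service line is
illustrative, not Mike's actual configuration:)

  # /etc/imapd.conf on be1 -- the partition the IMAP processes serve
  partition-default: /var/spool/imap1

  # /etc/replication.conf on be1 -- the partition only syncserver sees
  partition-default: /var/spool/imap2

  # cyrus.conf on be1: run the sync server against the alternate config
  syncserver cmd="sync_server -C /etc/replication.conf" listen="csync"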


Thoughts?

Mike

--
Michael D. Sofka   sof...@rpi.edu
ITI Sr. Systems Programmer,   Email, TeX, Epistemology
Rensselaer Polytechnic Institute, Troy, NY.  http://www.rpi.edu/~sofkam/

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


What is the state of Master-Master replication?

2017-07-26 Thread Michael Sofka
A while back there was some discussion of supporting Master-Master 
replication in Cyrus.  I'm busy updating from 2.4.17 to 3.0.2.  What is 
the state of Master-Master, as opposed to Master-Replica replications?


My current configuration is a Murder cluster with three front-end 
servers, two back-end servers, each with their own replication server. 
One pair of Master-Replica are empty of mailboxes at the moment, prior 
to upgrade.


Thank You,
Mike

--
Michael D. Sofka   sof...@rpi.edu
ITI Sr. Systems Programmer,   Email, TeX, Epistemology
Rensselaer Polytechnic Institute, Troy, NY.  http://www.rpi.edu/~sofkam/

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Re: Replication failing - IMAP_SYNC_CHECKSUM Checksum Failure

2016-08-22 Thread Bron Gondwana via Info-cyrus
Sounds like you have been fiddling with mailboxes.db entries and/or didn't clean 
them up properly?

I would recommend sync_reset on the user on the replica for a better cleanup 
than delete.

Oh yeah, don't create the mailbox on the replica before replicating it in, 
that's crazy talk - you'll get a new user with a different uniqueid.  You 
shouldn't be creating users on replicas.

Bron.
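(A minimal sketch of the cleanup Bron describes, assuming 2.5-era tool
paths and that sync_reset needs -f to actually delete -- the username is
the one from this thread:)

  # on the replica: remove every trace of the user
  su cyrus -c "/usr/lib/cyrus/bin/sync_reset -f lamemm7"
  # on the master: replicate the user in fresh
  su cyrus -c "/usr/lib/cyrus/bin/sync_client -v -u lamemm7"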

On Tue, 23 Aug 2016, at 10:38, Tod A. Sandman via Info-cyrus wrote:
> I resorted to deleting the mailbox on the replication slave and trying to 
> start from scratch, but I get nowhere.
> 
> On slave:
> 
> cyrus@cyrus2c:~> cyradm --user mailadmin `hostname`
> 
>   cyrus2c.mail.rice.edu> dm user/lamemm7
>   cyrus2c.mail.rice.edu> cm --partition cyrus2g user/lamemm7
>   cyrus2c.mail.rice.edu> setacl user/lamemm7 mailadmin lrswipkxtecda
> 
> 
> On master:
> 
> cyrus@cyrus2a:~> sync_client -v -v -l -m "user/lamemm7"
> .
> <1471911949 imap/sync_client[4386]: MAILBOX received NO response: Mailbox has been moved 
> to another server
> imap/sync_client[4386]: do_folders(): update failed: user.lamemm7 'The remote 
> Server(s) denied the operation'
> Error from do_mailboxes(): bailing out!
> imap/sync_client[4386]: Error in do_mailboxes(): bailing out!
> >1471911949>EXIT
> <1471911949 
> 
> What the ??
> 
> Syslog on the master shows
> 
> Aug 22 19:25:49 cyrus2a imap/sync_client[4386]: MAILBOX received NO response: 
> Mailbox has been moved to another server
> Aug 22 19:25:49 cyrus2a imap/sync_client[4386]: do_folders(): update failed: 
> user.lamemm7 'The remote Server(s) denied the operation'
> Aug 22 19:25:49 cyrus2a imap/sync_client[4386]: Error in do_mailboxes(): 
> bailing out!
> 
> syslog on the slave shows only
> 
>   Aug 22 19:25:49 cyrus2c imap/syncserver[15637]: Mailbox uniqueid changed 
> user.lamemm7 (351531f5-4353-4976-b1ef-a8a0bdc243bb => 705dd252531801ab) - 
> retry
> 
> 
> 
> 
> Tod Sandman
> Sr. Systems Administrator
> Office of Information Technology
> Rice University
> Voice: 713.348.5816
> Email: sandm...@rice.edu
> 
> Cyrus Home Page: http://www.cyrusimap.org/
> List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
> To Unsubscribe:
> https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


-- 
  Bron Gondwana
  br...@fastmail.fm

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Re: Replication failing - IMAP_SYNC_CHECKSUM Checksum Failure

2016-08-22 Thread Tod A. Sandman via Info-cyrus
I resorted to deleting the mailbox on the replication slave and trying to start 
from scratch, but I get nowhere.

On slave:

cyrus@cyrus2c:~> cyradm --user mailadmin `hostname`

  cyrus2c.mail.rice.edu> dm user/lamemm7
  cyrus2c.mail.rice.edu> cm --partition cyrus2g user/lamemm7
  cyrus2c.mail.rice.edu> setacl user/lamemm7 mailadmin lrswipkxtecda


On master:

cyrus@cyrus2a:~> sync_client -v -v -l -m "user/lamemm7"
.
<1471911949 imap/sync_client[4386]: MAILBOX received NO response: Mailbox has been 
moved to another server
imap/sync_client[4386]: do_folders(): update failed: user.lamemm7 'The remote 
Server(s) denied the operation'
Error from do_mailboxes(): bailing out!
imap/sync_client[4386]: Error in do_mailboxes(): bailing out!
>1471911949>EXIT
<1471911949 

What the ??

Syslog on the master shows

Aug 22 19:25:49 cyrus2a imap/sync_client[4386]: MAILBOX received NO response: 
Mailbox has been moved to another server
Aug 22 19:25:49 cyrus2a imap/sync_client[4386]: do_folders(): update failed: 
user.lamemm7 'The remote Server(s) denied the operation'
Aug 22 19:25:49 cyrus2a imap/sync_client[4386]: Error in do_mailboxes(): 
bailing out!

syslog on the slave shows only

  Aug 22 19:25:49 cyrus2c imap/syncserver[15637]: Mailbox uniqueid changed 
user.lamemm7 (351531f5-4353-4976-b1ef-a8a0bdc243bb => 705dd252531801ab) - 
retry




Tod Sandman
Sr. Systems Administrator
Office of Information Technology
Rice University
Voice: 713.348.5816
Email: sandm...@rice.edu

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Re: Replication failing - IMAP_SYNC_CHECKSUM Checksum Failure

2016-08-22 Thread Bron Gondwana via Info-cyrus
What do you see in syslog?  (both for the reconstruct and later when the 
sync_client runs)

On Tue, 23 Aug 2016, at 03:49, Tod A. Sandman via Info-cyrus wrote:
> I'm using rolling replication with cyrus-imapd-2.5.9.  sync_client died and I 
> am not able to get replication working again.  I've narrowed it down to one 
> mailbox, user.lamemm7, and I've successfully reconstructed the mailbox on 
> both replication partners with various options, such as
> 
> reconstruct -r user/lamemm7
> reconstruct -r -G user/lamemm7
> reconstruct -r -R user/lamemm7
> reconstruct -r -V max user/lamemm7
> 
> These all seem to run fine, but still sync_client fails:
> 
> % sync_client -v -v -l -m user/lamemm7
> .
> <1471887110 imap/sync_client[21652]: MAILBOX received NO response: IMAP_SYNC_CHECKSUM 
> Checksum Failure
> imap/sync_client[21652]: do_folders(): update failed: user.lamemm7 
> 'Replication inconsistency detected'
> Error from do_mailboxes(): bailing out!
> imap/sync_client[21652]: Error in do_mailboxes(): bailing out!
> >1471887110>EXIT
> <1471887110 
> and the sync_client log shows:
> 
> CRC failure on sync for user.lamemm7, trying full update
> do_folders(): update failed: user.lamemm7 'Replication inconsistency detected'
> Error in do_mailboxes(): bailing out!
> 
> Anyone with ideas on what to try next?
> 
> Cyrus Home Page: http://www.cyrusimap.org/
> List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
> To Unsubscribe:
> https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


-- 
  Bron Gondwana
  br...@fastmail.fm

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Replication failing - IMAP_SYNC_CHECKSUM Checksum Failure

2016-08-22 Thread Tod A. Sandman via Info-cyrus
I'm using rolling replication with cyrus-imapd-2.5.9.  sync_client died and I 
am not able to get replication working again.  I've narrowed it down to one 
mailbox, user.lamemm7, and I've successfully reconstructed the mailbox on both 
replication partners with various options, such as

reconstruct -r user/lamemm7
reconstruct -r -G user/lamemm7
reconstruct -r -R user/lamemm7
reconstruct -r -V max user/lamemm7

These all seem to run fine, but still sync_client fails:

% sync_client -v -v -l -m user/lamemm7
.
<1471887110 imap/sync_client[21652]: MAILBOX received NO response: IMAP_SYNC_CHECKSUM 
Checksum Failure
imap/sync_client[21652]: do_folders(): update failed: user.lamemm7 
'Replication inconsistency detected'
Error from do_mailboxes(): bailing out!
imap/sync_client[21652]: Error in do_mailboxes(): bailing out!
>1471887110>EXIT
<1471887110 

and the sync_client log shows:

CRC failure on sync for user.lamemm7, trying full update
do_folders(): update failed: user.lamemm7 'Replication inconsistency detected'
Error in do_mailboxes(): bailing out!

Anyone with ideas on what to try next?

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Re: Replication CRC error

2016-03-06 Thread Bron Gondwana via Info-cyrus
Master has:

commit 6ef874319ecd98700e682ef30fcad5245ddfdb32
Author: Bron Gondwana <br...@fastmail.fm>
Date:   Wed Oct 22 14:51:24 2014 -0400

sync_client: do ALL mailboxes, not just all users, for -A flag


It should be pretty cherry-pickable back to 2.5 I would imagine.

Bron.
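(A minimal sketch of that backport, assuming the 2.5 branch follows the
naming of the cyrus-imapd-2.4 branch mentioned earlier in this archive:)

  git clone https://github.com/cyrusimap/cyrus-imapd.git
  cd cyrus-imapd
  git checkout cyrus-imapd-2.5
  git cherry-pick 6ef874319ecd98700e682ef30fcad5245ddfdb32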

On Thu, Mar 3, 2016, at 09:17, Artyom Aleksandrov via Info-cyrus wrote:
> Hi,
>
> When using the “-A” flag (sync all users) no non-user mailboxes are
> synced. As the man page imapd.conf(5) notes, ”... this could be
> considered a bug and maybe it should do those mailboxes
> independently.” Source:
> https://cyrusimap.org/imap/admin/sop/replication.html
>
> But how, in this case, do I restore replication for shared folders? Run a
> sync for each folder separately?
>
> On Wed, Mar 2, 2016, 6:01 PM Konstantin Udalov via Info-cyrus 
> <info-cy...@lists.andrew.cmu.edu> wrote:
>> Hello.
>>
>> I configured IMAP rolling replication from master to slave as was
>> described in https://cyrusimap.org/imap/admin/sop/replication.html And
>> everything was fine. Until some strange records appeared in the logs. Here
>> is one of them.
>>
>> Mar  2 04:48:55 hostname cyrus/sync_client[725]: MAILBOX received NO
>> response: IMAP_SYNC_CHECKSUM Checksum Failure
>> Mar  2 04:48:55 hostname cyrus/sync_client[725]: CRC failure on sync for
>> shared.folder.name, trying full update
>> Mar  2 04:49:08 hostname cyrus/sync_client[725]: SYNCERROR: only exists
>> on master shared.folder.name 286227
>> (d41e384300c3f13f42a546ce14ec3d23bc89f1a6)
>> Mar  2 04:49:13 hostname cyrus/sync_client[725]: SYNCERROR: only exists
>> on master shared.folder.name 286227
>> (d41e384300c3f13f42a546ce14ec3d23bc89f1a6)
>> Mar  2 04:49:13 hostname cyrus/sync_client[725]: mailbox: longlock
>> shared.folder.name for 11.4 seconds
>>
>> I stopped rolling replication and launched a full (-A) replication. No
>> more related records in the logs, but message 286227 still wasn't replicated
>> and is missing on the replica.
>> If somebody has faced the same problem, help would be much appreciated.
>>
>> cyrus-imap 2.5.3
>> 
>> Cyrus Home Page: http://www.cyrusimap.org/
>> List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
>> To Unsubscribe:
>> https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus
> 
> Cyrus Home Page: http://www.cyrusimap.org/
> List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
> To Unsubscribe:
> https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

-- 
  Bron Gondwana
  br...@fastmail.fm
 
 

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Replication CRC error

2016-03-02 Thread Artyom Aleksandrov via Info-cyrus
Hi,

When using the “-A” flag (sync all users) no non-user mailboxes are synced.
As the man page imapd.conf(5) notes, ”... this could be considered a bug
and maybe it should do those mailboxes independently.”
Source: https://cyrusimap.org/imap/admin/sop/replication.html

But how, in this case, do I restore replication for shared folders? Run a sync for
each folder separately?
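(A minimal sketch of that per-folder fallback, using the -m form shown
elsewhere in this thread -- the folder name is the placeholder from the
logs above:)

  su cyrus -c "sync_client -v -m shared.folder.name"
  # or feed every non-user top-level mailbox from "ctl_mboxlist -d" into -m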

On Wed, Mar 2, 2016, 6:01 PM Konstantin Udalov via Info-cyrus <
info-cyrus@lists.andrew.cmu.edu> wrote:

> Hello.
>
> I configured IMAP rolling replication from master to slave as was
> described in https://cyrusimap.org/imap/admin/sop/replication.html And
> everything was fine. Until some strange records appeared in the logs. Here
> is one of them.
>
> Mar  2 04:48:55 hostname cyrus/sync_client[725]: MAILBOX received NO
> response: IMAP_SYNC_CHECKSUM Checksum Failure
> Mar  2 04:48:55 hostname cyrus/sync_client[725]: CRC failure on sync for
> shared.folder.name, trying full update
> Mar  2 04:49:08 hostname cyrus/sync_client[725]: SYNCERROR: only exists
> on master shared.folder.name 286227
> (d41e384300c3f13f42a546ce14ec3d23bc89f1a6)
> Mar  2 04:49:13 hostname cyrus/sync_client[725]: SYNCERROR: only exists
> on master shared.folder.name 286227
> (d41e384300c3f13f42a546ce14ec3d23bc89f1a6)
> Mar  2 04:49:13 hostname cyrus/sync_client[725]: mailbox: longlock
> shared.folder.name for 11.4 seconds
>
> I stopped rolling replication and launched a full (-A) replication. No
> more related records in the logs, but message 286227 still wasn't replicated
> and is missing on the replica.
> If somebody has faced the same problem, help would be much appreciated.
>
> cyrus-imap 2.5.3
> 
> Cyrus Home Page: http://www.cyrusimap.org/
> List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
> To Unsubscribe:
> https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus
>

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Replication CRC error

2016-03-02 Thread Konstantin Udalov via Info-cyrus

Hello.

I configured IMAP rolling replication from master to slave as was 
described in https://cyrusimap.org/imap/admin/sop/replication.html And 
everything was fine. Until some strange records appeared in the logs. Here 
is one of them.


Mar  2 04:48:55 hostname cyrus/sync_client[725]: MAILBOX received NO 
response: IMAP_SYNC_CHECKSUM Checksum Failure
Mar  2 04:48:55 hostname cyrus/sync_client[725]: CRC failure on sync for 
shared.folder.name, trying full update
Mar  2 04:49:08 hostname cyrus/sync_client[725]: SYNCERROR: only exists 
on master shared.folder.name 286227 
(d41e384300c3f13f42a546ce14ec3d23bc89f1a6)
Mar  2 04:49:13 hostname cyrus/sync_client[725]: SYNCERROR: only exists 
on master shared.folder.name 286227 
(d41e384300c3f13f42a546ce14ec3d23bc89f1a6)
Mar  2 04:49:13 hostname cyrus/sync_client[725]: mailbox: longlock 
shared.folder.name for 11.4 seconds


I stopped rolling replication and launched a full (-A) replication. No 
more related records in the logs, but message 286227 still wasn't replicated 
and is missing on the replica.

If somebody has faced the same problem, help would be much appreciated.

cyrus-imap 2.5.3

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


replication with virtual domains

2016-02-04 Thread Michael Plate via Info-cyrus

Hi,

I'm going to sync between a Gentoo (master) and a Debian 8 machine 
(replica), Gentoo on Cyrus 2.4.17, Debian on "testing" Cyrus 2.4.18 
because of broken sync in "stable" 
(https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=799724).

I am using virtual domains.

I've tried synctest from the master and it works perfectly.

Things get awful when I try to use sync_client -u or -m, I believe I 
have a mistake in the format of the user or mailbox in the command.


user:

sync_client -v -l -u forname.lastn...@domain.top

gives

USER forname^lastn...@domain.top
Error from do_user(forname^lastn...@domain.top): bailing out!


with mailbox:

sync_client -v -l -m user/forname.lastn...@domain.top

MAILBOXES domain.top!user.forname^lastname
Error from do_mailboxes(): bailing out!


sync_client -v -l -m forname.lastn...@domain.top
MAILBOXES domain.top!forname^lastname
Error from do_mailboxes(): bailing out!

I can see the connect and successful login on the replica.

Where is the mistake?

CU

Michael




Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Replication problem do_folders(): failed to rename

2015-12-14 Thread Marcus Schopen via Info-cyrus
Am Montag, den 14.12.2015, 07:31 -0400 schrieb Patrick Boutilier via
Info-cyrus:
> On 12/14/2015 06:25 AM, Marcus Schopen via Info-cyrus wrote:
> > Am Freitag, den 11.12.2015, 19:10 +0100 schrieb Marcus Schopen via
> > Info-cyrus:
> >> Hi,
> >>
> >> I have a problem with a single mailbox. The user's Outlook crashed and
> >> since then the sync_client is running wild on this user account and
> >> produces high load on the master. I stopped sync_client on master side
> >> for the moment.
> >>
> >> When I try to sync the user by hand
> >>
> >> /bin/su - cyrus -c "/usr/lib/cyrus/bin/sync_client -S replicaserver -v
> >> -u testuser
> >>
> >> I do get the following error.
> >>
> >> Dec 11 17:54:48 master cyrus/sync_client[22727]: RENAME received NO
> >> response: Operation is not supported on mailbox
> >> Dec 11 17:54:48 master cyrus/sync_client[22727]: do_folders(): failed to
> >> rename: user.elsa-secgen -> user.testuser.Archives.Gesch
> >> 2014-15.25-Jahr-Feier
> >> Dec 11 17:54:48 master cyrus/sync_client[22727]: IOERROR: The remote
> >> Server(s) denied the operation
> >> Dec 11 17:54:48 master cyrus/sync_client[22727]: Error in
> >> do_user(testuser): bailing out!
> >>
> >> Comparing master and slave on filesystem I do see the subfolder
> >> "25-Jahr-Feier" in "user.testuser.Archives.Gesch 2014-15.",
> >> but only on master but not on slave side. And why does sync_client want
> >> to rename and where does it get this order from?
> >>
> >> I can login into the users' mailbox on master side and new message are
> >> shown in the INBOX.
> >>
> >> How can I fix it?
> >>
> >> Should I try a "reconstruct -r user.testuser" on master and slave or
> >> just on slave? (do I have to shutdown cyrus for a reconstruct -r on a
> >> user box?)
> >>
> >> Or can I delete the complete mailbox on slave side start an "sync_client
> >> -S replicaserver -v -u testuser"?
> >>
> >> Thanks for helping
> >> Marcus
> >
> > I did a reconstruct on the replica, which runs through the 12 GB
> > mailbox of the user within a second (too fast?).
> >
> > /bin/su - cyrus -c "/usr/lib/cyrus/bin/reconstruct -r user.testuser"
> >
> > A following sync ended up with the same error:
> >
> > /bin/su - cyrus -c "/usr/lib/cyrus/bin/sync_client -S replicaserver -v
> > -u testuser
> >
> > Any ideas?
> 
> No, but this seems weird. Was this user ever renamed?
> 
> user.testuser -> user.testuser.Archives.Gesch

No, sorry, forgot to hide the real user name ;)

Ciao!


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


Re: Replication problem do_folders(): failed to rename

2015-12-14 Thread Michael Menge via Info-cyrus

Hi,


Quoting Patrick Boutilier via Info-cyrus :


On 12/14/2015 06:25 AM, Marcus Schopen via Info-cyrus wrote:

Am Freitag, den 11.12.2015, 19:10 +0100 schrieb Marcus Schopen via
Info-cyrus:

Hi,

I have a problem with a single mailbox. The user's Outlook crashed and
since then the sync_client is running wild on this user account and
produces high load on the master. I stopped sync_client on master side
for the moment.

When I try to sync the user by hand

/bin/su - cyrus -c "/usr/lib/cyrus/bin/sync_client -S replicaserver -v
-u testuser

I do get the following error.

Dec 11 17:54:48 master cyrus/sync_client[22727]: RENAME received NO
response: Operation is not supported on mailbox
Dec 11 17:54:48 master cyrus/sync_client[22727]: do_folders(): failed to
rename: user.elsa-secgen -> user.testuser.Archives.Gesch
2014-15.25-Jahr-Feier
Dec 11 17:54:48 master cyrus/sync_client[22727]: IOERROR: The remote
Server(s) denied the operation
Dec 11 17:54:48 master cyrus/sync_client[22727]: Error in
do_user(testuser): bailing out!

Comparing master and slave on filesystem I do see the subfolder
"25-Jahr-Feier" in "user.testuser.Archives.Gesch 2014-15.",
but only on the master side, not on the slave side. And why does sync_client want
to rename and where does it get this order from?

I can login into the users' mailbox on master side and new message are
shown in the INBOX.

How can I fix it?

Should I try a "reconstruct -r user.testuser" on master and slave or
just on slave? (do I have to shutdown cyrus for a reconstruct -r on a
user box?)

Or can I delete the complete mailbox on slave side start an "sync_client
-S replicaserver -v -u testuser"?

Thanks for helping
Marcus


I did a reconstruct on the replica, which runs through the 12 GB
mailbox of the user within a second (too fast?).

/bin/su - cyrus -c "/usr/lib/cyrus/bin/reconstruct -r user.testuser"

A following sync ended up with the same error:

/bin/su - cyrus -c "/usr/lib/cyrus/bin/sync_client -S replicaserver -v
-u testuser

Any ideas?


No, but this seems weird. Was this user ever renamed?

user.elsa-secgen -> user.testuser.Archives.Gesch



Cyrus uses the folder "Unique ID" for synchronisation. If this "Unique 
ID" is NOT unique,
it will confuse synchronisation.
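(A minimal sketch of checking that, assuming mbexamine lives in the usual
tool directory -- run it on both master and replica and compare:)

  su cyrus -c "/usr/lib/cyrus/bin/mbexamine user.testuser" | grep -i uniqueid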





Ciao
Marcus



Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus






M.Menge   Tel.: (49) 7071/29-70316
Universität Tübingen   Fax.: (49) 7071/29-5912
Zentrum für Datenverarbeitung  mail:  
michael.me...@zdv.uni-tuebingen.de

Wächterstraße 76
72074 Tübingen


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

Re: Replication problem do_folders(): failed to rename

2015-12-14 Thread Marcus Schopen via Info-cyrus
Hi,

Am Montag, den 14.12.2015, 12:53 +0100 schrieb Michael Menge via
Info-cyrus:
> Hi,
> 
> 
> Quoting Patrick Boutilier via Info-cyrus :
> 
> > On 12/14/2015 06:25 AM, Marcus Schopen via Info-cyrus wrote:
> >> Am Freitag, den 11.12.2015, 19:10 +0100 schrieb Marcus Schopen via
> >> Info-cyrus:
> >>> Hi,
> >>>
> >>> I have a problem with a single mailbox. The user's Outlook crashed and
> >>> since then the sync_client is running wild on this user account and
> >>> produces high load on the master. I stopped sync_client on master side
> >>> for the moment.
> >>>
> >>> When I try to sync the user by hand
> >>>
> >>> /bin/su - cyrus -c "/usr/lib/cyrus/bin/sync_client -S replicaserver -v
> >>> -u testuser
> >>>
> >>> I do get the following error.
> >>>
> >>> Dec 11 17:54:48 master cyrus/sync_client[22727]: RENAME received NO
> >>> response: Operation is not supported on mailbox
> >>> Dec 11 17:54:48 master cyrus/sync_client[22727]: do_folders(): failed to
> >>> rename: user.elsa-secgen -> user.testuser.Archives.Gesch
> >>> 2014-15.25-Jahr-Feier
> >>> Dec 11 17:54:48 master cyrus/sync_client[22727]: IOERROR: The remote
> >>> Server(s) denied the operation
> >>> Dec 11 17:54:48 master cyrus/sync_client[22727]: Error in
> >>> do_user(testuser): bailing out!
> >>>
> >>> Comparing master and slave on filesystem I do see the subfolder
> >>> "25-Jahr-Feier" in "user.testuser.Archives.Gesch 2014-15.",
> >>> but only on the master side, not on the slave side. And why does sync_client want
> >>> to rename and where does it get this order from?
> >>>
> >>> I can login into the users' mailbox on master side and new message are
> >>> shown in the INBOX.
> >>>
> >>> How can I fix it?
> >>>
> >>> Should I try a "reconstruct -r user.testuser" on master and slave or
> >>> just on slave? (do I have to shutdown cyrus for a reconstruct -r on a
> >>> user box?)
> >>>
> >>> Or can I delete the complete mailbox on slave side start an "sync_client
> >>> -S replicaserver -v -u testuser"?
> >>>
> >>> Thanks for helping
> >>> Marcus
> >>
> >> I did a reconstruct on the replica, which runs through the 12 GB
> >> mailbox of the user within a second (too fast?).
> >>
> >> /bin/su - cyrus -c "/usr/lib/cyrus/bin/reconstruct -r user.testuser"
> >>
> >> A following sync ended up with the same error:
> >>
> >> /bin/su - cyrus -c "/usr/lib/cyrus/bin/sync_client -S replicaserver -v
> >> -u testuser
> >>
> >> Any ideas?
> >
> > No, but this seems weird. Was this user ever renamed?
> >
> > user.testuser -> user.testuser.Archives.Gesch
> >
> 
> Cyrus uses the folder "Unique ID" for synchronisation. If this "Unique 
> ID" is NOT unique,
> it will confuse synchronisation.


I connected to the mailbox with an IMAP client and removed all folders
on the master on which sync_client was bailing out. After that,
sync_client ran through without problems. Strange ...

Ciao!





Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus


  1   2   3   4   5   6   7   >