Re: Huge performance problems after updating from 2.4 to 2.5.8

2016-07-15 Thread Andre Felipe Machado via Info-cyrus
Hello,
Maybe, after such an upgrade, the squatter metadata indexes were lost and you
should run an incremental squatter on your mailboxes again, even before the
scheduled run in the EVENTS section of /etc/cyrus.conf.
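
For example, a minimal sketch (the paths are assumptions: Debian installs
squatter under /usr/lib/cyrus/bin, FreeBSD ports under /usr/local/cyrus/bin):

# incrementally (re)index mailboxes as the cyrus user; -i only adds
# messages missing from the existing squat indexes
sudo -u cyrus /usr/lib/cyrus/bin/squatter -i -v
# or limit it to one user's hierarchy:
# sudo -u cyrus /usr/lib/cyrus/bin/squatter -i -r -v user.someuser
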
Regards.
Andre Felipe



Hynek Schlawack via Info-cyrus  wrote ..
> Hello,
>
> we’ve updated one of our Cyrus IMAP backends from 2.4 to 2.5.8 on FreeBSD 10.3
> with ZFS and now we have an operational emergency.
>
> Cyrus IMAPd starts fine and keeps working for about 5 to 20 minutes (rather
> sluggishly, though).  At some point the server load starts growing and
> eventually explodes, until we have to restart the IMAP daemons, which gives
> us another 5 to 20 minutes.
>
> It doesn’t really matter if we run `reconstruct` in the background or not.
>
>
> # Observations:
>
> 1. While healthy, the imapd daemons' states are mostly `select` or `RUN`.
> Once things get critical, they are all mostly in `zfs` (but do occasionally
> switch).
> 2. Customers report that their mail clients are downloading all e-mails.
> That's obviously extra bad given we seem to be running into some kind of I/O
> problem.  Running `truss` on busy imapd processes seems to confirm that.
> 3. Once hell breaks loose, I/O collapses even on other file systems/hard disks.
> 4. `top` mentions processes in `lock` state – sometimes even more than 200.
> That's nothing we see on our other backends.
> 5. There seems to be a correlation between processes hanging in `zfs` state and
> `truss` showing them accessing mailboxes.db.  Don't know if it's related, but
> soon after the upgrade, mailboxes.db broke and we had to reconstruct it (see
> the sketch below).
>
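> For reference, a minimal sketch of rebuilding mailboxes.db with the stock
> dump/undump tools (paths assume a FreeBSD ports install; run as the cyrus
> user):
>
> # dump the mailbox list to a flat file
> /usr/local/cyrus/bin/ctl_mboxlist -d > /var/tmp/mboxlist.txt
> # move the broken mailboxes.db aside, then rebuild it from the dump
> /usr/local/cyrus/bin/ctl_mboxlist -u < /var/tmp/mboxlist.txt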
>
> # Additional key data:
>
> - 25,000 accounts
> - 4.5 TB data
> - 64 GB RAM, no apparent swapping
> - 16 cores CPU
> - nginx in front of it.
>
> ## zpool iostat 5
>
>               capacity     operations    bandwidth
> pool        alloc   free   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> tank        4.52T   697G    144  2.03K  1.87M  84.2M
> tank        4.52T   697G     84    730  2.13M  3.94M
> tank        4.52T   697G    106    904  2.78M  4.52M
> tank        4.52T   697G    115    917  3.07M  5.11M
> tank        4.52T   697G    101   1016  4.04M  5.06M
> tank        4.52T   697G    124  1.03K  3.27M  6.59M
>
> Which doesn’t look special.
>
> The data used to be on HDDs and worked fine with an SSD ZIL.  After the
> upgrade and the ensuing problems we tried a Hail Mary by replacing the HDDs
> with SSDs, to no avail (we migrated a ZFS snapshot for that).
>
> So we do *not* believe it's really a traditional I/O bottleneck, since it only
> started *after* the upgrade to 2.5 and did not go away after adding SSDs.  The
> change notes led us to believe that there shouldn't be any I/O storm due to
> mailbox conversions, but is that true in every case?  How could we
> double-check?  Observation #2 above leads us to believe that there are in
> fact some metadata problems.  We're reconstructing in the background, but
> that's going to take days, which is sadly time we don't really have.
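>
> One way to double-check the conversion theory, as a sketch: 2.5's reconstruct
> can upgrade index versions explicitly (the -V flag is documented in its man
> page), so forcing the upgrade for a single account up front and watching the
> I/O it generates would show whether on-access index conversions are the storm.
> Mailbox naming depends on your hierarchy separator:
>
> # run as the cyrus user; upgrades one account's cyrus.index files in place
> reconstruct -V max -r user.someuser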
>
> ## procstat -w 1 of an active imapd
>
>   PID  PPID  PGID   SID  TSID THR LOGIN WCHAN     EMUL          COMM
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor - FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor - FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor - FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor *vm objec FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor - FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor zfs   FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor - FreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor selectFreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor selectFreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor selectFreeBSD ELF64 imapd
> 45016 43150 43150 43150 0   1 toor select   

Re: [cyrus 3.0] 20 delayed mailbox deleted limit?

2016-06-10 Thread Andre Felipe Machado via Info-cyrus
Bron Gondwana via Info-cyrus  wrote ..
> On Fri, Jun 10, 2016, at 09:41, Jason L Tibbitts III wrote:
> > > "BG" == Bron Gondwana  writes:
> > 
> > BG> Just to be really clear what this is.  It's per mailbox name - if
> > BG> you create and delete the SAME mailbox more than 20 times, it only
> > BG> keeps the most recent 20 of that mailbox.
> > 
> > Hmm.  That's much less problematic, but it still allows someone to force
> > something to be deleted if they really want it to be deleted.  That's
> > not really an issue for me because my users wouldn't figure it out, but
> > I can imagine that someone using delayed expiry to easily implement some
> > sort of legal requirement might be unhappy.  But that's somewhat of a
> > stretch.
> 
> Yep.
> 
> Anyway, magic numbers are bad, so I will make this configurable.  It's easy to
> do, and if people with different systems need it changed, then that's fine.
> 
> With uniqueid based storage it will all be nicer anyway :)
> 
> Bron.
> 
> 
> -- 
>   Bron Gondwana
>   br...@fastmail.fm
> 

Cheers, Bron.
Configurable on imapd.conf would be great.
But I guess it is still not enough to protect against the DoS / space waste
you cited.
Your idea of two quotas, with a means to also control an individual account's
total quota, is better suited for these tasks.
Are there better ideas?
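
For reference, a hypothetical imapd.conf sketch of what such knobs might look
like (delete_mode is a real option; the two names below are invented purely
for illustration, since no final option names existed at the time of writing):

delete_mode: delayed
# hypothetical knob: deleted copies retained per mailbox name
deleted_mailbox_limit: 100
# hypothetical knob: separate quota (in KiB) that also covers deleted data
deleted_storage_quota: 10485760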
Regards.
Andre Felipe



Re: [cyrus 3.0] 20 delayed mailbox deleted limit?

2016-06-09 Thread Andre Felipe Machado via Info-cyrus
Andrew Morgan <mor...@orst.edu> wrote ..
> On Thu, 9 Jun 2016, Andre Felipe Machado via Info-cyrus wrote:
> 
> > Bron Gondwana via Info-cyrus <info-cyrus@lists.andrew.cmu.edu> wrote ..
> >> On Thu, Jun 9, 2016, at 03:02, Andre Felipe Machado via Info-cyrus wrote:
> >>> Hello,
> >>> In the future release notes I read
> >>> "Under delete_mode: delayed, only the 20 most recently deleted mailboxes
> >>> are kept for any given name."
> >>> https://cyrusimap.org/imap/release-notes/3.0/x/3.0.0-beta2.html
> >>> Is there any configuration parameter to increase this limit?
> >>> Why is this limit needed?
> >>
> >> denial of service / space wastage protection.  There's no config option
> >> available right now.  I could be convinced to change it.
> >>
> >> How would you suggest we protect against exploiting delayed delete to fill 
> >> the
> >> server without going over quota?  Maybe a new quota field for "total 
> >> mailbox
> usage
> >> including deleted stuff" that can be set to a high enough value that no 
> >> reasonable
> >> user will ever hit it?
> >>
> >> Bron.
> >>
> >> --
> >>   Bron Gondwana
> >>   br...@fastmail.fm
> >> 
> >
> > Hello, Bron
> > I understand the problem.
> > But in a corporate scenario, it is a rare event, because of jobs at stake,
> > tracked user accounts, antispam measures, etc.
> > It is more likely a "rogue" client, or a bug/misconfiguration on a
> > smartphone, causing such problems.
> > We stay with the official Debian repository versions as long as we can,
> > receiving security patches.
> > So, maintaining an unofficial patch will be a big problem.
> > Sysadmin-configurable parameters are a more elegant solution.
> > Having configurations under sysadmin control will keep Cyrus flexible for
> > use in different usage scenarios.
> > For the DoS / space waste problems, the two quota limit configurations are
> > more suitable than counting folder quantity.
> > What if each folder contains 1 TB of deleted messages?
> > Maybe a reasonable default (10 times the user quota?) for those not wanting
> > to configure it is a good idea.
> > Even better to also have a way to control individual accounts' total
> > quotas, for those corporate accounts like "sa...@foo.bar" that receive
> > lots of legitimate emails and have to delete them after processing.
> > We have Zabbix monitoring space on our cyrus backends, and need unlimited
> > or configurable delayed expunge limits for recovering messages and folders
> > for years in a corporate scenario.
> > Thanks.
> > Andre Felipe
> 
> Remember, this is a limit on the number of deleted *mailboxes* kept, not 
> messages.
> 
> Bron, this could impact Pine/Alpine users that frequently postpone 
> messages.  Pine creates a folder named "postponed-msgs" to store drafts. 
> The folder is created when a draft is saved and deleted when all drafts 
> have been deleted/sent.
> 
> Here is my personal deleted folders list, right now:
> 
> DELETED.user.morgan.postponed-msgs.5755CF0C 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.5755F446 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.5755F486 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.5755F4D1 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.5755F4E4 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.5755F50E 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.5755F65F 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.5755F844 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.5756ECFC 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.5756F602 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.575706F8 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.57585C5D 0 p2 morgan lrswipkxtecda
> DELETED.user.morgan.postponed-msgs.57587FE1 0 p2 morgan lrswipkxtecda
> 
> We are removing deleted mailboxes after 7 days:
> 
> delprune  cmd="/usr/local/cyrus/bin/cyr_expire -E 1 -X 7 -D 7" at=0100
> 
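> (In that delprune line, -E 1 prunes delivery-duplicate database entries older
> than one day, -X 7 removes expunged messages after 7 days, and -D 7 removes
> delayed-deleted mailboxes after 7 days.)
>
> A quick sketch for listing the deleted copies currently kept (the path
> assumes a default install; DELETED is the stock deletedprefix):
>
> /usr/local/cyrus/bin/ctl_mboxlist -d | grep '^DELETED'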
> 
> I don't know if other IMAP clients have similar quirky behavior, but I
> could see myself running into this limit.  However, I certainly don't care
> about recovering my old postponed-msgs mailboxes.

Re: [cyrus 3.0] 20 delayed mailbox deleted limit?

2016-06-09 Thread Andre Felipe Machado via Info-cyrus
Bron Gondwana via Info-cyrus <info-cyrus@lists.andrew.cmu.edu> wrote ..
> On Thu, Jun 9, 2016, at 03:02, Andre Felipe Machado via Info-cyrus wrote:
> > Hello,
> > In the future release notes I read
> > "Under delete_mode: delayed, only the 20 most recently deleted mailboxes
> > are kept for any given name."
> > https://cyrusimap.org/imap/release-notes/3.0/x/3.0.0-beta2.html
> > Is there any configuration parameter to increase this limit?
> > Why is this limit needed?
> 
> denial of service / space wastage protection.  There's no config option
> available right now.  I could be convinced to change it.
> 
> How would you suggest we protect against exploiting delayed delete to fill the
> server without going over quota?  Maybe a new quota field for "total mailbox
> usage including deleted stuff" that can be set to a high enough value that no
> reasonable user will ever hit it?
> 
> Bron.
> 
> -- 
>   Bron Gondwana
>   br...@fastmail.fm
> 

Hello, Bron
I understand the problem.
But in a corporate scenario, it is a rare event, because of jobs at stake,
tracked user accounts, antispam measures, etc.
It is more likely a "rogue" client, or a bug/misconfiguration on a smartphone,
causing such problems.
We stay with the official Debian repository versions as long as we can,
receiving security patches.
So, maintaining an unofficial patch will be a big problem.
Sysadmin-configurable parameters are a more elegant solution.
Having configurations under sysadmin control will keep Cyrus flexible for use
in different usage scenarios.
For the DoS / space waste problems, the two quota limit configurations are
more suitable than counting folder quantity.
What if each folder contains 1 TB of deleted messages?
Maybe a reasonable default (10 times the user quota?) for those not wanting to
configure it is a good idea.
Even better to also have a way to control individual accounts' total quotas,
for those corporate accounts like "sa...@foo.bar" that receive lots of
legitimate emails and have to delete them after processing.
We have Zabbix monitoring space on our cyrus backends, and need unlimited or
configurable delayed expunge limits for recovering messages and folders for
years in a corporate scenario.
Thanks.
Andre Felipe


[cyrus 3.0] 20 delayed mailbox deleted limit?

2016-06-08 Thread Andre Felipe Machado via Info-cyrus
Hello,
In the future release notes I read
"Under delete_mode: delayed, only the 20 most recently deleted mailboxes are
kept for any given name."
https://cyrusimap.org/imap/release-notes/3.0/x/3.0.0-beta2.html
Is there any configuration parameter to increase this limit?
Why is this limit needed?
Regards.
Andre Felipe


[cyrus 3.0] mupdate on openio or not?

2016-06-07 Thread Andre Felipe Machado via Info-cyrus
Hello,
I was trying to understand the code of the future Cyrus IMAP 3.0 and realized
that the mupdate master will not use OpenIO.
Is that correct?
Also, the various databases will not be on OpenIO, only the messages
themselves.  Is that correct too?
Could performance and horizontal scalability improve by putting them on OpenIO?
Regards.
Andre Felipe


Re: Archiving

2016-05-10 Thread Andre Felipe Machado via Info-cyrus
Hello,

AFAIK, the one with potential to scale is https://www.enkive.org

Regards.

André Felipe

Paul Bronson via Info-cyrus  wrote ..
> Any good open source archiving software out there?

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
To Unsubscribe:
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus