Re: Disable reverse DNS lookups
On 12/03/2012 20:01, Bron Gondwana wrote:
> Yes, it looks like a great idea. The problem I had with the patch is that it only does imapd and pop3d, but there are other daemons that do reverse lookups. I think I'd like to factor out the reverse lookup into one centralised function.

This would be a wonderful solution. I liked very much how the cyrusdb functions are implemented to access skiplist and the other db backends; something like that for DNS, or networking in general, would be very nice.

Pascal

--
Pascal Gienger
Jabber/XMPP/Mail: pascal.gien...@uni-konstanz.de
University of Konstanz, IT Services Department (Rechenzentrum)
Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
G+: https://plus.google.com/114525323843315818983/

Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
Re: ZFS doing insane I/O reads
On 28/02/2012 07:13, Ram wrote:
> This is a 16GB RAM server running Linux CentOS 5.5 64 bit. There seems to be something definitely wrong, because all the memory on the machine is free. (I don't seem to have fsstat on my server; I will have to get it compiled.)

ZFS as FUSE? We have Solaris 10 on x86 (amd64), and we noticed that ZFS needs _RAM_, the more, the better. On Solaris, using mdb you can look at the memory consumption (in pages of physical memory):

bash-3.2# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp cpu.generic zfs sockfs ip hook neti sctp arp usba fcp fctl qlc lofs sata fcip random crypto logindmux ptm ufs mpt mpt_sas ]
> ::memstat
Page Summary                Pages        MB   %Tot
Kernel                    6052188     23641    36%
ZFS File Data             4607758     17999    27%
Anon                      2115097      8262    13%
Exec and libs                6915        27     0%
Page cache                  82665       322     0%
Free (cachelist)           433268      1692     3%
Free (freelist)           3477076     13582    21%
Total                    16774967     65527
Physical                 16327307     63778

As this is early in the morning, there are plenty of free pages in RAM (4 million), and the memory-mapped executables of Cyrus IMAPd and shared libraries only consume 6915 pages, 27 MB. 1779 connections at this moment. We had to go from 32 GB to 64 GB per node due to extreme lags in IMAP spool processing. And even with 64 GB, when there is memory pressure from the Kernel and Anon categories (mapped pages without an underlying file: classical malloc(), or mmap on /dev/zero after COW), there are light degradations in access times during high-volume hours.

Another idea we had was using a fast SSD as Level 2 ARC (L2ARC), named "cache" on the zpool command line; based on the LRU algorithm, the blocks containing the cyrus.* files should end up there. The problem lies in the fact that a pool with a local cache device and remote SAN (FibreChannel) storage cannot be imported automatically on another machine without replacing the faulty device. And for the price of an FC-enabled SSD you can buy MUCH RAM.
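For reference, attaching such an L2ARC device uses the "cache" vdev type on the zpool command line (pool and device names here are hypothetical):

```
# Attach a fast SSD as an L2ARC read cache to an existing pool "mail":
zpool add mail cache c1t2d0
```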
Does your CentOS system have some kind of tracing facility to look for the block numbers which are read constantly? On Solaris I use dtrace to look for that, and also for file-based I/O, to see WHICH files get read and written when there is starvation.
Missing folders after upgrade from 2.3.16 to 2.4.13
We had a strange issue, and it has to do with spaces in mailbox names. The mailboxes are on disk and they are encoded, but they don't get counted by ctl_mboxlist -d or listed by a LIST or LSUB. After copying the mailbox tree to another one, changing the spaces to _, and doing a reconstruct (so that it finds the new mailboxes), all is well. Doing the same without replacing the spaces with _, the effect is as before: reconstruct finds all new mailboxes, but even though they are in mailboxes.db they don't get listed. Is this a bug?
Re: Missing folders after upgrade from 2.3.16 to 2.4.13
On 30/01/2012 12:57, Pascal Gienger wrote:
> We had a strange issue, and it has to do with spaces in mailbox names. Mailboxes are on disk, they are encoded, but they don't get counted on ctl_mboxlist -d or doing a LIST or LSUB.

I found this on the web: http://blog.webworm.org/content/cyrus-2413-some-gotchas
Re: Missing folders after upgrade from 2.3.16 to 2.4.13
On 30/01/2012 12:57, Pascal Gienger wrote:
> We had a strange issue, and it has to do with spaces in mailbox names. Mailboxes are on disk, they are encoded, but they don't get counted on ctl_mboxlist -d or doing a LIST or LSUB.

Correction: ctl_mboxlist -d does show them, but neither LIST nor LSUB does. Example:

user..UNI.Seminare                            0 default lrswipkxtecda
user..UNI.Seminare 2011                       0 default lrswipkxtecda
user..UNI.Seminare 2011.DGS                   0 default lrswipkxtecda
user..UNI.Seminare 2011.Logik                 0 default lrswipkxtecda
user..UNI.Seminare 2011.Varieties of English  0 default lrswipkxtecda
user..UNI.Seminare.IslAOQ-ndisch              0 default lrswipkxtecda
user..UNI.Seminare.Phonetik II                0 default lrswipkxtecda
user..UNI.Seminare.Pragmatik I                0 default lrswipkxtecda
user..UNI.Seminare.SLI                        0 default lrswipkxtecda
user..UNI.Seminare.Semantik II                0 default lrswipkxtecda
user..UNI.Seminare.Soziolinguistik            0 default lrswipkxtecda
user..UNI.Seminare.Soziolinguistik.Projekt    0 default lrswipkxtecda

Even after a rebuild of mailboxes.db using ctl_mboxlist -u mydumpfile, LIST does not show them:

. OK [CAPABILITY IMAP4rev1 LITERAL+ ID ENABLE ACL RIGHTS=kxte QUOTA MAILBOX-REFERRALS NAMESPACE UIDPLUS NO_ATOMIC_RENAME UNSELECT CHILDREN MULTIAPPEND BINARY CATENATE CONDSTORE ESEARCH SORT SORT=MODSEQ SORT=DISPLAY THREAD=ORDEREDSUBJECT THREAD=REFERENCES ANNOTATEMORE LIST-EXTENDED WITHIN QRESYNC SCAN XLIST URLAUTH URLAUTH=BINARY X-NETSCAPE LOGINDISABLED COMPRESS=DEFLATE IDLE] User logged in SESSIONID=uni-konstanz.de-11040-1327925587-1
. list user. *
[...]
* LIST (\HasNoChildren) "." "user..UNI.Seminare"
* LIST (\HasChildren) "." "user..UNI.Seminare 2011"
* LIST (\HasNoChildren) "." "user..UNI.Seminare 2011.DGS"
* LIST (\HasNoChildren) "." "user..UNI.Seminare 2011.Logik"
* LIST (\HasNoChildren) "." "user..UNI.Seminare 2011.Varieties of English"
[...]

That's all Cyrus finds with the LIST command. The mailbox directories are correct on disk. The hierarchy Seminare.* is missing completely.
Pascal
Sorry, that bug is already known...[#3628]
Found it in Bugzilla [#3628]. Confirmed here: direct access with SELECT or GETACL works without a problem, but a LIST just omits these mailboxes.
Why does mbpath (2.4.13) only honor the meta path?
Perhaps it was forgotten, but the -m flag is not honored and mbpath always shows the meta path. The patch would be very easy, so I am asking whether I have missed something...

bash-3.2# diff -u mbpath.c.orig mbpath.c
--- mbpath.c.orig  Fri Dec 30 22:19:18 2011
+++ mbpath.c       Thu Jan 12 10:06:01 2012
@@ -95,6 +95,7 @@
     int opt;              /* getopt() returns an int */
     char *alt_config = NULL;
     char buf[MAX_MAILBOX_PATH+1];
+    char *path;

     if ((geteuid()) == 0 && (become_cyrus() != 0)) {
         fatal("must run as the Cyrus user", EC_USAGE);
@@ -137,7 +138,10 @@
             if (mbentry.mbtype & MBTYPE_REMOTE) {
                 printf("%s\n", mbentry.partition);
             } else {
-                char *path = mboxname_metapath(mbentry.partition, mbentry.name, 0, 0);
+                if (metadata)
+                    path = mboxname_metapath(mbentry.partition, mbentry.name, 0, 0);
+                else
+                    path = mboxname_datapath(mbentry.partition, mbentry.name, 0);
                 printf("%s\n", path);
             }
         } else {
Re: Questions before upgrading from 2.3 to 2.4
On 06/01/2012 17:50, Bron Gondwana wrote:
> On Fri, Jan 06, 2012 at 11:04:26AM +0100, Sebastian Hagedorn wrote:
> > > With meta-partition on a fast SSD device this even would not have occurred (500 GB SSD needed in our case)
> > We don't have any of those, unfortunately ...
> As nice as SSDs are, they wouldn't have helped - the IO hit is reading every single message file from the spools to rebuild cache and index files with accurate checksums. I'm not convinced it was the best idea (after the troubles people have had with it)... but the flip side would have been much more complex code paths for handling incomplete checksum data :(

It would have helped here, because our new storage, which will hopefully arrive at the end of the first quarter, will have SATA, SAS and SSD tiers with automatic and manual migration. Before the upgrade I would have migrated the whole mail spool to the SSD area - and yes, it will have enough space for it. If not, I would have moved it to SAS with a decent cache size. And I would not have had any downtime, because thanks to the storage virtualization the WWPNs would not have changed - it would have been transparent to my Solaris hosts.
Re: Questions before upgrading from 2.3 to 2.4
On 05/01/2012 17:00, Sebastian Hagedorn wrote:
> Hi, we're currently running a rather outdated configuration, with Cyrus 2.3.14 and RHEL 3 i386 on hardware that's almost 7 years old. I'm planning an upgrade to 2.4 with RHEL 5 x86_64, and I have a number of questions regarding new features that I played around with using 2.4.13. I've searched the archives and found some answers, but not all.

In principle it is simple, but I hit hard ground today. I made the mistake of miscalculating the index conversion: our storage was perfectly sized to handle the IOPS needed for normal cyrus operation, but not 8000 index files being rebuilt simultaneously...

Important: set lmtp_mail_timeout (when you use Postfix to deliver mail) to a low value, like 2 seconds, because cyrus lmtpd also triggers the index rebuild of an INBOX. That way people with little mail in their INBOX (whose index and cache are already converted) still get their mail immediately, while people with MANY mails, for whom the index rebuild takes time, get their mail later because of the early deferral (after lmtp_mail_timeout). Otherwise you'll have many blocked Postfix lmtp clients and no mail gets transferred at all. I tackled the situation like this; mail is being delivered normally now, with only 5-10 index rebuilds running at a time, which go fast and without noticeable delay. People with large mailboxes get their mail on the 2nd queue run of Postfix (don't set the delay too high!).

On our test system everything worked well, with many connections and load up to the limit, and no errors - but only with a few mailboxes, so index rebuilding was little work compared to the real case. It was my fault, my error. It should not have happened, but OK, no mail was lost.

And be familiar with expunge_mode:, delete_mode:, expunge_days: and their consequences for cyr_expire!

With the meta-partition on a fast SSD device this would not even have occurred (a 500 GB SSD would have been needed in our case).

Lucky eye: Cyrus 2.4.13 is somewhat faster with our SOGo system (I presume thanks to STATUSCACHE).
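The Postfix side of this advice can be sketched as a main.cf fragment (lmtp_mail_timeout is a real Postfix parameter; the 2-second value is the one suggested in this post, so tune it for your own site):

```
# main.cf -- defer quickly instead of letting lmtp(8) clients block
# while Cyrus rebuilds a large INBOX index on first delivery:
lmtp_mail_timeout = 2s
```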
Crying eye: some people suggest moving to Google or Microsoft because that would not cause delays at an upgrade like this...

Should you use SOGo or other webmailers, please set:

flushseenstate: 1
statuscache: 1

Put your cyrus data directory and your meta partition(s) on a fast LUN/volume, and/or have much RAM for your file system cache (ARC in my case).
Re: Questions before upgrading from 2.3 to 2.4
On 05/01/2012 21:06, Bron Gondwana wrote:
> On Thu, Jan 05, 2012 at 07:50:28PM +0100, Pascal Gienger wrote:
> > flushseenstate: 1
> Doesn't do anything any more, I'm pretty sure. In fact - just checked. It's still in the imapd.conf docs, but it doesn't do anything. I'm going to delete that now. Bron (seen state is now in the cyrus.index for the mailbox owner, and it gets flushed always for everyone else)

OK, thank you. At least 2.4.13 works with SOGo without the problem that seen messages just reappear as unseen; 2.3.x had this ugly behaviour in combination with SOGo. Nice work, this 2.4. Thank you very much!
Re: Cyrus imap server and filesystem type.
On 03/10/2011 23:09, Vincent Fox wrote:
> On 10/03/2011 12:58 PM, Josef Karliak wrote:
> > Hi there, what filesystem type do you use for Cyrus imapd? I use SLES11 x64 (or openSUSE 11.4) with ReiserFS 3.6, so far so good. But couldn't it be better? :)
> ZFS, which unfortunately is not much of an option for you Linux folks I think. ZFS works great with thousands of users, no worries about getting inodes or partitions right, and snapshots make keeping weeks of recovery points online in the pool trivial and cheap.

I second this. Roughly 51,000,000 files on one (mirrored) multipathed FibreChannel SAN volume with no performance bottlenecks. 64 GB RAM per node, approx. 40 GB ARC (ZFS cache). Solaris 10u9, kernel 147441-03, 64-bit x64.
Stability of idled
Just a little remark: we have 4000 connections in IDLE state(*) and idled works flawlessly. I had some headaches because some members of this list pointed out that imapd was not tested for thousands of connections. It just works. Cyrus IMAPD 2.3.19.

(*) due to a heavy migration to Thunderbird including Lightning (connecting to our SOGo servers) in many of the faculty and professors' secretariats.
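For context, the IDLE mechanism those 4000 connections use is a very simple exchange (per RFC 2177; the tag and the untagged update shown here are illustrative):

```
C: a001 IDLE
S: + idling
      ...time passes; the server pushes updates as they happen...
S: * 2 EXISTS
C: DONE
S: a001 OK IDLE terminated
```

Each idling client ties up one imapd process but generates essentially no work until something changes in the mailbox, which is why thousands of mostly-quiet connections are sustainable.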
Re: Stability of idled
On 15/08/2011 17:28, David R Bosso wrote:
> This is useful information. I was going to send out a query on idled experience, because I recall that message a while ago stating that it was not well tested. Are there others using it on a large scale?

A snapshot of our load:

1. memstat:

bash-3.2# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace cpu.generic uppc pcplusmp zfs ip hook neti sctp arp usba fcp fctl qlc lofs sata fcip md random crypto logindmux ptm ufs mpt mpt_sas ]
> ::memstat
Page Summary                Pages        MB   %Tot
Kernel                    5205685     20334    31%
ZFS File Data             6962138     27195    41%
Anon                      2238094      8742    13%
Exec and libs                7470        29     0%
Page cache                 154863       604     1%
Free (cachelist)          1583461      6185     9%
Free (freelist)            623257      2434     4%
Total                    16774968     65527
Physical                 16327309     63778

2. imapd CPU time:

cyrus  1122     1  0   Jun 09 ?       80:28 idled

Server uptime: 5:45pm up 89 day(s), 39 min(s), 1 user, load average: 1.46, 1.47, 1.43

NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
 2800 cyrus      13G   10G    16%   5:59:39 1.8%

At the moment it is the semester break; lectures will begin again in October, so the load is lower. And it is 5:46pm.
Re: Scaling of imap servers
On 16.06.11 15:26, Ramprasad A.P wrote:
> Using outsourced mail is not possible.

An Amazon cloud *IS* outsourced mail: Amazon has access to your virtual servers, and traffic passes without a VPN into their network.

> The biggest problem is harddisk space. Every user is looking for huge amounts of diskspace; even if I need 20GB per user I can't get so much disk space at affordable cost.

20K users with 20G each = 400 TB. With zfs compression, perhaps 300 TB. How much does it cost to use 300T of redundant, fault-tolerant storage in an Amazon cloud?

Google Mail does use the text patterns in your mail for advertising - that's the way you're paying them. And they don't restore your mailbox unless you have a business contract, which means costs for each mailbox. There's no such thing as a free lunch.
Re: Ever growing mailboxes and archiving
Vincent Fox vb...@ucdavis.edu wrote:
> On 4/20/2011 12:50 AM, Rudy Gevaert wrote:
> > Hi Vincent, how do you make the snapshot consistent (netbackup + vmware snapshot)? Do you stop cyrus?
> I'll have to ask our backup admin if you want technical specifics and guarantees, but my understanding was that the process included an atomic snapshot of the VM. Then the backup is made of that. We don't stop anything, it's done hot. As I said, we are not presently running Cyrus in a VM.

Just a little side remark: if my memory serves me right, you seem to be running Cyrus in a Solaris environment using the ZFS file system. Do you want to stop using it? Solaris in VMware never performed really well, and seamless migration from one vmware node to another is also not possible without proper vmware tools. My tests with Solaris in a VM environment (other than Sun xVM/Xen) were horrible - ZFS performance went down noticeably because of the classic "too many caches in between" problem. The Cyrus mail store is one of the remaining servers not being migrated to our VM infrastructure, precisely because of ZFS. We are upgrading hardware to have 64 GB of RAM on our Cyrus store, because the ARC cache is like plain gold for performance. I am left to have my own iron. :)

Backups are done via ZFS snapshots. All VM backup systems able to do live snapshots and backups rely on knowing the on-disk structure of the filesystem the virtual disk is formatted with; ZFS is a no-go in all these setups. Other systems rely on information about which blocks changed - together with thin provisioning this is useless with ZFS, because due to the ZIL's sequential writes and copy-on-write, thin provisioning will result in an "all blocks used" disk after doing X TB of writes on an X TB disk volume - even if the same file is overwritten again and again. And transferring our Cyrus store to something other than ZFS is just out of reach: we have 50,000,000 files on the store in 102,000 mailboxes.
No problem for ZFS, provided there is enough RAM for ARC (and/or a good SSD for L2ARC!). I would be very glad to hear other people's experiences with Cyrus in virtualized environments. Solaris works well with Sun xVM, sure - but since Oracle bought Sun, I no longer have any way to get up-to-date copies of it, so I deleted nearly all installations of it.
Re: Shared mailboxes doc
On 14.12.10 20:18, Adam Tauno Williams wrote:
> On Tue, 2010-12-14 at 14:14 -0500, Julien Vehent wrote:
> > Hi list, I was experimenting with shared mailboxes today. That's something new I've never used before. I wrote a wiki page about my setup on Debian squeeze (still on cyrus 2.2) and was wondering if that was the state of the art, or if there is any better way to do it. http://wiki.linuxwall.info/doku.php/en:ressources:dossiers:cyrus:shared_mailbox Also, I didn't find any documentation on the subject on the website. Did I miss something?
> Nope, that's about it. They are very simple, and simple to use [once you manage to get *users* to understand them - which is the hard part].

We have rolled out SOGo as web-based groupware with email, calendaring and contacts for 13,000 users, and it uses our Cyrus IMAP server structure as the backend for messages (using a session id obfuscator [1], as this was not implemented at the time of rollout, but it will be in version 1.3.5). SOGo offers a sharing option, and our users seem to understand IMAP access rights with this web front end, so we're happy. IMAP ACLs remain abstract until a decent user interface appears. Users _LOVE_ this feature, but nearly nobody knows about it because most IMAP clients cannot set ACLs.

[1] http://southbrain.com/south/2010/10/session-management-for-sogo.html
Re: Does anyone allow unlimited or extremely large quotas?
On 19 Nov 2010, at 17:48, Michel Sébastien wrote:
> Is mmap still efficient? Mapping a gigabyte file should cost a lot of I/O and a relatively long response time just to access the records of the most recent emails.

mmap does nothing besides mapping the file as virtual memory into your process. Read requests on memory addresses within the mmap range result in page-ins of pages of the mmap'd file - it behaves like swap space. If you write in this mmap'd virtual memory range, you'll trigger a pageout when the system's VM subsystem thinks it's time to write the page out (or when you unmap the file). This is a very efficient way to access a file. So mapping a gigabyte file does not need much I/O - and 10 read accesses result in a maximum of 10 pages read. The file (or the portion starting at the offset given to mmap()) must fit in the process's virtual address space, so mmap'd files can be much bigger on 64-bit platforms.

Pascal
Re: Does anyone allow unlimited or extremely large quotas?
On 16.11.10 19:08, Wesley Craig wrote:
> On 16 Nov 2010, at 10:32, Joseph Brennan wrote:
> > I wish we'd somehow financed a native Cyrus webmail interface, that is, not using IMAP but built into Cyrus. I don't think users know how good Cyrus is because they look at it through a weak intermediary.
> I don't think a Cyrus-specific web interface is the answer to that question. IMP performance is not great, but it's the http paradigm that slows it. Check out Roundcube; utilizing AJAX, it's way more responsive to the user.

We started using SOGo here. It just loads a reasonable number of mail items for the index view and continues loading more when you scroll down (or up). As for the FS, we still use Sun - aaah - Oracle ZFS. Mailboxes with 500,000 messages (postfix mailing list :-) ) are just as SELECTable as empty mailboxes - no difference in speed or access time when retrieving messages from them. The taste of Oracle gets more and more bitter compared to Sun, but this particular cookie (zfs) still tastes too good. SOGo is slower (as it follows a different paradigm than Horde/IMP or SquirrelMail), but our users seem not to have a problem with it.
Re: packages for solaris 10 x86
On 14.10.10 05:37, Frank Pittel wrote:
> On Sat, Oct 09, 2010 at 11:33:51AM -0700, Andrew Morgan wrote:
> > On Sat, 9 Oct 2010, Frank Pittel wrote:
> That helped, but then I ran into more trouble. I'm starting to wonder if it'll work when I do get it compiled! The errors I'm getting now are:

Comment out the perl targets in the Makefile. The Solaris perl was built with the Sun C compiler, so build and compile options from Sun (Oracle) C were used. You have two possibilities now: use a gcc-compiled perl (from Sun Freeware, or build it on your own) and put this perl's path at the front of your search path; it will then build the Cyrus perl modules in your own perl's module path. Or just comment out the perl targets in your Makefile to skip the perl module builds. You will lose cyradm, though.
Re: packages for solaris 10 x86
On 09.10.10 19:51, Frank Pittel wrote:
> krb5_free_principal      ../lib/libcyrus.a(auth_krb5.o)
> krb5_realm_compare       ../lib/libcyrus.a(auth_krb5.o)
> krb5_build_principal     ../lib/libcyrus.a(auth_krb5.o)
> krb5_get_default_realm   ../lib/libcyrus.a(auth_krb5.o)
> krb5_parse_name          ../lib/libcyrus.a(auth_krb5.o)
> krb5_init_context        ../lib/libcyrus.a(auth_krb5.o)
> krb5_free_context        ../lib/libcyrus.a(auth_krb5.o)
> krb5_unparse_name        ../lib/libcyrus.a(auth_krb5.o)

GSSAPI needs the Kerberos libs; you don't have them.

> Has anyone been able to get cyrus-imap to compile under solaris-10? If I sound frustrated it's because I am. :-(

Sure. For proper 64-bit code to be built I used:

export CC=gcc
export CXX=g++
export CFLAGS="-m64 -O3 -march=k8"
export CXXFLAGS="-m64 -O3 -march=k8"
export CPPFLAGS="-m64 -I/usr/local/test/64/include"
export LDFLAGS="-m64 -Wl,-64 -L/usr/local/test/64/lib -Wl,-R/usr/local/test/64/lib -L/usr/local/lib/amd64 -Wl,-R/usr/local/lib/amd64"

(We have Opteron servers here.) On my test system the configure call was:

./configure --prefix=/usr/local/test/64 --with-openssl --with-ldap=/usr/local/test/64 --with-sasl=/usr/local/test/64 --with-bdb=/usr/local/test/64 --enable-netscapehack --with-cyrus-prefix=/usr/local/test/64/cyrus --sysconfdir=/usr/local/test/64/etc/cyrus --enable-listext --with-snmp=no --disable-snmp --with-idle=idled --enable-idled --enable-replication --enable-gssapi=no

Libraries used by imapd (as an example):

bash-3.00# ldd /usr/local/test/64/cyrus/bin/imapd
        libsasl2.so.2 =>      /usr/local/test/64/lib/libsasl2.so.2
        libssl.so.0.9.8 =>    /usr/local/test/64/lib/libssl.so.0.9.8
        libcrypto.so.0.9.8 => /usr/local/test/64/lib/libcrypto.so.0.9.8
        libresolv.so.2 =>     /lib/64/libresolv.so.2
        libsocket.so.1 =>     /lib/64/libsocket.so.1
        libnsl.so.1 =>        /lib/64/libnsl.so.1
        libdb-4.5.so =>       /usr/local/test/64/lib/libdb-4.5.so
        libz.so.1 =>          /usr/lib/64/libz.so.1
        libmd.so.1 =>         /lib/64/libmd.so.1
        librt.so.1 =>         /lib/64/librt.so.1
        libc.so.1 =>          /lib/64/libc.so.1
        libdl.so.1 =>         /lib/64/libdl.so.1
        libmp.so.2 =>         /lib/64/libmp.so.2
        libscf.so.1 =>        /lib/64/libscf.so.1
        libpthread.so.1 =>    /lib/64/libpthread.so.1
        libgcc_s.so.1 =>      /usr/local/lib/amd64/libgcc_s.so.1
        libaio.so.1 =>        /lib/64/libaio.so.1
        libdoor.so.1 =>       /lib/64/libdoor.so.1
        libuutil.so.1 =>      /lib/64/libuutil.so.1
        libgen.so.1 =>        /lib/64/libgen.so.1
        libm.so.2 =>          /lib/64/libm.so.2

Due to some problems with Sun's OpenSSL package I used my own; SASL is also linked against it.
Re: high-availability Cyrus (i.e. glusterfs)?
On 28 Sept 2010, at 08:50, Tomasz Chmielewski wrote:
> Sep 28 01:10:10 omega cyrus/ctl_cyrusdb[21728]: DBERROR db4: Program version 4.2 doesn't match environment version

Are you sure that on each node the _SAME_ Cyrus version, linked to the _SAME_ bdb libs, is running? And - just a little side note - you can dump bdb in favor of skiplist... I bet you'll have far fewer problems in your cluster environment setup.

Pascal
Re: competition
On 20 Sept 2010, at 15:59, Marc Patermann wrote:
> Hi, where does Cyrus IMAPd stand today?

When I started to think about moving to an open source mail system (migrating away from Lotus Domino, btw.), there were Cyrus IMAPd, Courier and UW-IMAP, I think. Cyrus was the only full-flavored IMAP server with active development. We went down the 2.2 path, while 2.3 seemed too fresh. So there was development. On the other side there were still many people complaining about Cyrus being too complex and too unstable with all the BDB fiddling.

Then dovecot emerged and quickly evolved. I don't know why, but I am beginning to tire of this "dovecot is much better, you HAVE TO USE IT, why don't you migrate, ..." talk. For me, this is typical open source crowd behaviour: if one product seems to have better numbers, jump immediately onto the faster running train (which is merely believed to run faster), dump the reliable solution, and start doing advocacy in all forums, newsgroups and discussion groups to tell people that the new product is better than sliced bread. Today it is dovecot; tomorrow it can be [insert name of a superb new blinkenlights imap server project here].

This kind of advocacy wastes capacity and work time (because you have to explain to your boss why it is not a good idea to dump the existing installation only because another linux guy plotted out some coloured powerpoint slides stating that dovecot is the way to go), and it focusses on the wrong side of the story. Cyrus has been proven reliable for over ten years. The security record of Cyrus is quite good. It runs perfectly on our infrastructure. It is still actively developed. Until now it has not hit any performance barrier, and our staff is trained on it. Changing just because some numbers seem better, or because an (IMHO) arrogant founder offers 1000 euros "if you can hack my server" (which feels like kindergarten from my point of view), is not a viable option.
Imagine the cost of training staff on a new system. And even an easy migration path often turns out to incur downtime. Downtime - with nothing gained in return: no service will be better. It is far worse: as your staff isn't experienced with dovecot, you will likely make more errors administering it. For a new installation, that can be another story.

We are very satisfied with the performance and flexibility Cyrus IMAP gives us. There's no need to change, apart from keeping in step with the open source advocacy crowd.
Re: imapd dumping core due to SEGV
For Solaris SMF and Cyrus, please use in your manifest for Cyrus IMAP:

<property_group name='startd' type='framework'>
  <propval name='ignore_error' type='astring' value='core,signal'/>
</property_group>

The imap service will no longer be restarted when an imap process is killed. Only when master ends will startd believe Cyrus is down. Ditto for an imap process dumping core. Pascal Gavin Gray gavin.g...@ed.ac.uk wrote: Sorry for the delay getting back about this, I meant to let people know that the reason for this: "Also when this happens the cyrus master process kills all other active imapd processes and restarts, is there a reason for this? I've never heard of master doing that in response to ANY child behavior. Does master log anything?" was the way we had set up SMF on Solaris to control Cyrus IMAP. One needs to make sure SMF is set up to ignore core dumps and child processes signaling death, otherwise SMF will restart the entire service. On Mon, 12 Jul 2010 18:36:59 +0100, Wesley Craig w...@umich.edu wrote: On 05 Jul 2010, at 10:56, Gavin Gray wrote: Two of them have had imapd processes crash and leave core dumps in the past couple of days. Looking at the core dumps with dbx we see I'm not aware of bug fixes in those code paths. Given how little those two code paths have in common, I'd suspect memory corruption. Also when this happens the cyrus master process kills all other active imapd processes and restarts, is there a reason for this? I've never heard of master doing that in response to ANY child behavior. Does master log anything? :wes -- Gavin Gray Edinburgh University Information Services Rm 2013 JCMB Kings Buildings Edinburgh EH9 3JZ UK tel +44 (0)131 650 5987 email gavin.g...@ed.ac.uk -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. -- Sent from my Android phone with K-9 Mail.
Excuse the brevity.
Re: Reducing ZFS blocksize to improve Cyrus write performance ?
On 09.08.10 17:22, Eric Luyten wrote: Folks, a question for those of you running ZFS as the filesystem architecture for your Cyrus message store: did you consider, measure and/or carry out a change of the default 128 KB blocksize? If so, what value are you using? First: changes to the ZFS recordsize do not change the on-disk format of your zfs/zpool. They apply only to NEWLY created files or file parts/zfs records (!). Second: as said, on a ZFS volume the recordsize is NOT the block size. The record size is the size of a single ZFS record read at once. Due to the ZIL, changes to files get written nearly sequentially, so the recordsize is nearly irrelevant. A smaller record size is a good option if you notice an I/O bottleneck on your fiberchannel/iSCSI/SAS link. It won't bring you a performance gain in random I/O. There is a small exception: database systems that always write the same fixed blocksize. For MySQL some people advise 32k. The ZFS record size is not the same as the zfs block size of a zvol (zfs block volume). That's another story. But I assume you are not talking about a ZFS block volume iSCSI server with a non-zfs filesystem written on it. Just my $0.02, Pascal
Re: Reducing ZFS blocksize to improve Cyrus write performance ?
On 09.08.10 17:33, Pascal Gienger wrote: A smaller record size is a good option if you notice an I/O bottleneck on your fiberchannel/iSCSI/SAS link. It won't bring you a performance gain in random I/O. There is a small exception: database systems that always write the same fixed blocksize. For MySQL some people advise 32k. Just another note: for us, gzip compression was a performance plus, reducing I/O bandwidth much better than a smaller recordsize (gzip compression for the mailstore, NOT (!) for the meta partition containing the cyrus.* files!). Just for your info as a reference, we're running happily with this:

-bash-3.00$ zfs get all mail/imap
NAME       PROPERTY         VALUE                  SOURCE
mail/imap  type             filesystem             -
mail/imap  creation         Mon Aug 13 13:19 2007  -
mail/imap  used             1.58T                  -
mail/imap  available        4.96T                  -
mail/imap  referenced       1.51T                  -
mail/imap  compressratio    1.61x                  -
mail/imap  mounted          yes                    -
mail/imap  quota            none                   default
mail/imap  reservation      none                   default
mail/imap  recordsize       128K                   local
mail/imap  mountpoint       /mail/imap             default
mail/imap  sharenfs         off                    default
mail/imap  checksum         on                     default
mail/imap  compression      gzip                   local
mail/imap  atime            off                    local
mail/imap  devices          off                    local
mail/imap  exec             off                    local
mail/imap  setuid           off                    local
mail/imap  readonly         off                    default
mail/imap  zoned            off                    default
mail/imap  snapdir          hidden                 default
mail/imap  aclmode          groupmask              default
mail/imap  aclinherit       restricted             default
mail/imap  canmount         on                     default
mail/imap  shareiscsi       off                    default
mail/imap  xattr            on                     default
mail/imap  copies           1                      default
mail/imap  version          1                      -
mail/imap  utf8only         off                    -
mail/imap  normalization    none                   -
mail/imap  casesensitivity  sensitive              -
mail/imap  vscan            off                    default
mail/imap  nbmand           off                    default
mail/imap  sharesmb         off                    default
mail/imap  refquota         none                   default
mail/imap  refreservation   none                   default
mail/imap  primarycache     all                    default
mail/imap  secondarycache   all                    default
-bash-3.00$
Re: Reducing ZFS blocksize to improve Cyrus write performance ?
On 09.08.10 19:46, Vincent Fox wrote: * Turn off ZFS cache flushing: set zfs:zfs_nocacheflush = 1 For hardware (fiberchannel, iSCSI, SSA, ...) arrays with their own cache this is a must. * Increase DNLC (Directory Name Lookup Cache): set ncsize = 50 vmstat -s | grep 'total name lookups' 135562914356 total name lookups (cache hits 96%) :-) Unless the hit ratio is below 90%, increasing the DNLC is not very useful. Turn off atime, of course. Sure. Turn on LZJB compression for the metapartition but gzip for the mail data filesystem. Our compression ratio on the mail filesystem is showing 1.68x. Yes. GZIP for mail, LZJB for meta. Identical configuration here. Pascal
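The hit ratio Vincent quotes comes straight out of `vmstat -s`. A minimal sketch of pulling the two numbers out of that line (the regex and helper are my own illustration, not part of any Cyrus or Solaris tool; the sample line is the one from the post above):

```python
import re

def dnlc_stats(vmstat_line):
    """Parse the 'vmstat -s' DNLC summary line -> (total_lookups, hit_pct)."""
    m = re.search(r"(\d+) total name lookups \(cache hits (\d+)%\)", vmstat_line)
    if m is None:
        raise ValueError("no DNLC summary found")
    return int(m.group(1)), int(m.group(2))

# Sample line from the post above:
total, hits = dnlc_stats("135562914356 total name lookups (cache hits 96%)")
print(total, hits)  # 135562914356 96
```

Per the advice above, bumping ncsize is only worthwhile when the second number drops below roughly 90.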
Re: Anyone using SunFire X4140's (AMD) or X4170's (Intel Xeon) as Cyrus servers ?
On 15/03/10 17:08, Eric Luyten wrote: X4200 here (also AMD, 2x dualcore). Operating system? Solaris x86 or Linux? Pascal -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Any interest to implement RFC4978 (IMAP COMPRESS)?
RFC 4978 [1] defines an IMAP COMPRESS command to compress IMAP data communication. Is there any interest in implementing this extension in the cyrus imap server? For low-bandwidth connections this could be useful, but I don't know if that's a typical case nowadays. Together with the IMAP IDLE command it should be fine for mobile devices... [1] http://tools.ietf.org/html/rfc4978
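For reference, the only mechanism RFC 4978 defines is COMPRESS=DEFLATE, i.e. a raw deflate stream with no zlib header, which is what a negative window-bits value selects in zlib. A small illustrative sketch of that data path (this is not Cyrus code; the payload is an invented example):

```python
import zlib

def deflate(data: bytes) -> bytes:
    # Raw deflate, no zlib header/trailer: wbits = -15, as COMPRESS=DEFLATE uses
    co = zlib.compressobj(6, zlib.DEFLATED, -15)
    return co.compress(data) + co.flush()

def inflate(data: bytes) -> bytes:
    return zlib.decompressobj(-15).decompress(data)

# IMAP traffic is highly repetitive text, so it compresses well:
payload = b"* 1 FETCH (FLAGS (\\Seen))\r\n" * 100
wire = deflate(payload)
assert inflate(wire) == payload
print(len(payload), "->", len(wire))
```

The win on a low-bandwidth mobile link comes precisely from that repetitiveness of FETCH responses and headers.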
Re: Condstore Squat through IMAP
Nybbles2Byte wrote: Hello Info-cyrus, when you use cyradm you can do these: mboxcfg mailbox_name condstore true and mboxcfg mailbox_name squat true to set CONDSTORE and squat respectively on a mailbox. However, I am writing a configuration program of my own to work through a web interface, and I can't see how to do this through the normal IMAP command and response strings. For squat it is: . SETANNOTATION "user.<user>.mybox" "/vendor/cmu/cyrus-imapd/squat" ("value.shared" "true") Side note: the expiration time is set via: . SETANNOTATION "user.<user>.Spam" "/vendor/cmu/cyrus-imapd/expire" ("value.shared" "7") (for a 7-day expire timeout on folder Spam of user <user>). Correct me if I am wrong. As for condstore: I never used that, so I don't know the syntax. It should be pretty much the same, but I don't know the appropriate keyword after cyrus-imapd/. Pascal -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
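For a client-side program like the one asked about, Python's imaplib has no built-in ANNOTATEMORE support, but an extension command can be sent with `xatom()`. A hedged sketch: the helper, the server name, and the mailbox/value below are my own illustrative assumptions, and the simple quoting assumes none of the arguments need IMAP literal syntax:

```python
import imaplib

def setannotation_args(mailbox, entry, value):
    """Format the argument string for a SETANNOTATION command (ANNOTATEMORE draft syntax)."""
    # Assumes no embedded quotes; real code would need proper IMAP quoting/literals.
    return '"%s" "%s" ("value.shared" "%s")' % (mailbox, entry, value)

args = setannotation_args("user/pascal/Spam", "/vendor/cmu/cyrus-imapd/expire", "7")
print(args)

# Against a live server (placeholders, not executed here):
# conn = imaplib.IMAP4_SSL("imap.example.org")
# conn.login("cyrus", "secret")
# conn.xatom("SETANNOTATION", args)
```

Whether the mailbox is spelled `user.name.Spam` or `user/name/Spam` depends on the server's hierarchy separator setting.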
Re: ANNOTATEMORE = METADATA and rfc 5464
Bron Gondwana wrote: Does anybody out there use annotations much? Does anybody know any code that would be broken by changing the way annotations are done? We are using annotations to define expire times for spam folders and to define mailboxes to be indexed by squatter. Approx. 4 annotations are always set. -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Re: VMware for Cyrus?
John Madden wrote: FWIW, I won't run anything on hardware anymore unless I absolutely have to. To me, the benefits of running virtualized outweigh the pitfalls -- dealing with real OS installs on real hardware, dealing with multipathing and SAN (virtual disks are easy), etc. Our Cyrus runs on Solaris with proper ZFS storage. This kind of storage is fast and reliable and supports many nodes per directory without a problem. The file scan for backup is done in two hours for 50 million files (Tivoli Storage Manager backup). We just can't virtualize this, because in whatever solution the underlying block devices get virtualized again. The only solution we would have is to bind these storage devices (fc) exclusively to the virtualized guest systems. The problem remains: Solaris 10 is not well supported in VMware (no client tools, and without them access remains _SLOW_) nor in Xen/Sun xVM. In the latter, OpenSolaris (and Solaris 11) is the way to go (architecture i86xpv). Just to give a reason why it sometimes _IS_ necessary or better to have real iron. Pascal -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Re: No e-mail notification with sieve, Thunderbird and Cyrus-imap
Ludovic Gasc wrote: Hi everybody, we've been using Cyrus-imap for some time; it's a good tool for us. We have a strange behaviour (bug?) with sieve, Thunderbird and Cyrus-imap. I'd like to hear your opinions, because I'm not sure I understand the problem correctly. We use some sieve scripts to filter the e-mails into the sub-folders of INBOX. I never had this problem. Be sure to mark every subfolder you need with "Check for new messages" (right-click on the folder you want to be checked, then click on Properties). Thunderbird opens a new IMAP connection for each folder. For each folder marked with "Check for new messages" ("Auf neue Nachrichten überprüfen" in my case, I have a German-localized Thunderbird) it will issue an IDLE command (easily traceable). Pascal
Re: Migrating 32bit to 64bit Debian Lenny
Simon Matter wrote: I'm wondering how much of all this was really needed for the migration from 32bit to 64bit? Are the BerkeleyDB on-disk files different on 32/64bit? Yes, they are. It's not the OS that matters but the architecture of the libdb4.so file. It is still a good idea not to use Berkeley DB for really important data. Here at our university's cyrus we are using Berkeley for the duplicate delivery and the tls databases - both of them can easily be set to zero in case of problems, without deep impact on functionality (in case the delivery db crashes, users can get some mails delivered twice; in the latter case (tls db crash) a returning client has to re-initiate a TLS handshake including key exchange). Pascal Gienger -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Re: Logging region out of memory
Marc Patermann wrote: Hi, I have IMAPd 2.2.12 and BDB 4.2.52. When I got: Sep 2 11:28:39 rzhs199 local6:warn|warning lmtpunix[1171642]: DBERROR db4: Logging region out of memory; you may need to increase its size Increase the logging region size. I found DB_CONFIG in /mail/imap/: :/mail/imap # cat DB_CONFIG set_cachesize 0 8388608 8 set_lg_regionmax 524288 set_lg_bsize 2097152 and these files The file DB_CONFIG has to be in the db subdirectory (/mail/imap/db in your case). Be warned: some parameters in DB_CONFIG also change the on-disk format, so back up your db files first (after shutting down cyrus) and restart after your changes. Don't delete skipstamp in this db directory, as it is used by your skiplist databases. Just a personally biased hint: you should not use Berkeley DB for important data of your cyrus system. Berkeley has very fast random read performance, which is important (in our case) for the duplicate delivery database (now 1.3 GB in size). But even that should be feasible with skiplist. Pascal -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Re: squatter exits with fatal error: Virtual memory exhausted on huge mailbox
Sebastian Hagedorn wrote: Processing index character 101, 681642 total words, temp file size is 2107147 fatal error: Virtual memory exhausted The 4 GB limit of 32-bit binaries? How much RAM does squatter allocate before it dies? -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Re: squatter exits with fatal error: Virtual memory exhausted on huge mailbox
Sebastian Hagedorn wrote: fatal error: Virtual memory exhausted Of course it's possible that it then tried to allocate one huge chunk, but I can't see that. Are there better tools to monitor the memory allocation of a process? Swap file/partition full? Background: I think the message "Virtual memory exhausted" is coming from your operating system and not from the squatter process. Squatter would have said: switch (err) { case SQUAT_ERR_OUT_OF_MEMORY: fprintf(stderr, "SQUAT: Out of memory (%s)\n", s); break; So I think it is a virtual memory/swap problem in your OS. -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Re: tls_sessions.db will not created
Martin Schweizer wrote: Hello, I have the following system: FreeBSD acsvfbsd06.acutronic.ch 7.2-RELEASE FreeBSD 7.2-RELEASE #1: Thu Jun 11 16:16:57 CEST 2009 mar...@acsvfbsd06.acutronic.ch:/usr/obj/usr/src/sys/GENERIC amd64 and I freshly installed Cyrus IMAPD v2.3.14. In my /var/imap directory the file annotations.db is created automatically at each restart of Cyrus (if it's not there), but not tls_sessions.db. My compile options are: $ ./configure --sysconfdir=/usr/local/etc --with-cyrus-prefix=/usr/local/cyrus --with-cyrus-user=cyrus --with-cyrus-group=cyrus --with-tls-db=skiplist --with-sasl=/usr/local --with-bdb=db41 --with-com_err --with-openssl=/usr --with-perl=/usr/local/bin/perl5.8.9 --with-bdb-incdir=/usr/local/include/db41 --with-bdb-libdir=/usr/local/lib --with-snmp=no --prefix=/usr/local --mandir=/usr/local/man --infodir=/usr/local/info/ --build=amd64-portbld-freebsd7.2 a) Is SSL enabled? Did you try a connect via imaps or imap/starttls? b) What's in the log after this connect? c) Is the tls_sessions.db there after your tls connect? The --with-tls-db switch should just define the default database backend for that database. It can be overridden at any time via imapd.conf. Pascal -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Re: Security impact of lmtpd with pre-auth
Nikolaus Rath wrote: Hello, apparently (http://wiki.exim.org/CyrusImap) I need to let lmtpd accept connections from localhost as pre-authenticated to make cyrus and exim work nicely together. Can someone explain what this actually means security-wise? I.e. what could a malicious user on localhost do with a pre-authed connection? He can put/deliver mail into whatever mailbox. The other side: if you have a malicious unix user on your Cyrus box, you'll have a bunch of other problems, far beyond delivering mails to every mailbox... Delivering mails from localhost to localhost via lmtp with authentication has the problem that the sending side needs to know the credential. If the sending side knows that credential, a malicious user has access to it, because the sending side is on the same box, the same container, ... Just my $0.02, Pascal -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Re: Security impact of lmtpd with pre-auth
Nikolaus Rath wrote: But unless I have some exotic filtering and/or rate limiting configured, he can do exactly the same thing by connecting to localhost:smtp, or invoking sendmail directly, can't he? So why the additional protection for lmtp? Imagine a Cyrus box accepting only LMTP connections: no sendmail, no Postfix, no other SMTP MTA running on it. Then imagine a frontend smtp relay delivering directly via LMTP over TCP to your Cyrus box. You can then use lmtp auth to prevent other machines from delivering mails directly via lmtp. Pascal -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Re: DBERROR with Cyrus 2.2.12
Christophe Boyanique wrote: In fact I managed to solve the problem by removing the quota file in /var/lib/imap/quota/x/user/ and using quota -f on the mailboxes. So the quota file was corrupt. OK. Do you really use DB4 quota files? But I still get repeating error messages like this: Jul 7 14:58:51 mail7702 lmtpunix[8988]: DBERROR db4: 9 lockers Jul 7 15:04:24 mail7702 lmtpunix[9249]: DBERROR db4: 11 lockers Should I be worried by these messages? No. In fact they are not errors but informational messages, so in newer versions of cyrus imapd you will see: Jul 8 15:39:28 atlanta lmtpunix[29159]: [ID 366844 local6.info] DBMSG: 2279 lockers Jul 8 15:51:34 atlanta lmtpunix[29077]: [ID 366844 local6.info] DBMSG: 2249 lockers Jul 8 18:12:14 atlanta lmtpunix[4289]: [ID 366844 local6.info] DBMSG: 1583 lockers lmtpunix wants to open delivery.db, which normally is a DB4-backed database unless you have changed the default in imapd.conf. Some old DB4 versions had the problem of not releasing these locks, so there was an overflow after some time. If the number of lockers keeps growing without ever becoming smaller from time to time, you are running into this bug. -- Pascal Gienger University of Konstanz, IT Services Department (Rechenzentrum) Electronic Communications and Web Services Building V, Room V404, Phone +49 7531 88 5048, Fax +49 7531 88 3739
Re: Automatically moving marked mails?
Ian Eiloart wrote: I was speaking to a friend who provides Exchange servers for small businesses locally. He says that the most important thing is to have a really good (fast, available and accurate) disaster recovery procedure, because you need it a lot. Here in Germany we are under bigger pressure. Microsoft offers universities Exchange for free for the whole campus, hosted in Microsoft's cloud, so they want to offer a complete outsourcing. Sure, they don't have any procedure for getting all your data out of Exchange after this "for free" period, but they get very aggressive, writing directly to the board of directors of the university. And it is complete nonsense that an internet cut results in no mail connectivity between one office and the other (how dumb is that: to write to your room neighbour, you have to go via a remote Exchange cloud...). Things are getting hard. We believe in open standards; we want to have our mails and appointments in a system which can be changed at any time. We don't want a data dead end resulting in a complete dependency on one manufacturer. Zimbra is another showstopper here. Many want Zimbra because it is "soo cool" and blah blah blah. But with 14,000 accounts, our central LDAP infrastructure and the Solaris 10 servers with ZFS running Cyrus IMAP, there is no really good reason to migrate everything to Zimbra just to have CalDAV calendaring. Zimbra means endless redo logs, bad performance with many accounts, ... I don't like these all-in-one solutions, but the people here LIKE THEIR OUTLOOK! Everybody wants to use Outlook, and our students want Google, they like Google! Safe harbour for personal data? Not interesting to this youth, which even posts pictures of their drunken parties on facebook :-\
Re: unexpunge segfaults with -l on some mailboxes
Patrick Boutilier wrote: Plus you lose all the messages that are in delayed expunge state after running a reconstruct. :-( Just delete cyrus.expunge in the appropriate mailbox's meta directory before running reconstruct.
Re: Solaris10/ZFS
James M McNutt wrote: We are currently running Solaris 9 with VxVM/VxFS and looking to move to Solaris 10 with ZFS. I was looking for some feedback from those using ZFS. What type of system? 2x X4200 with 32 GB RAM each. What type of storage? 2x FiberChannel storage systems in two different locations, mirrored via zfs. How large? 2x 8 TB mirror. Compression? gzip. Replication? zfs mirror. Problems? At the beginning (2006) many; nowadays, X patches and kernels later, none left. Pascal
Re: Doing Cyrus *right* for a large installation
Andrew Morgan wrote: But then I started thinking about how I was going to back up all this new data... Our backup administrator isn't too excited about trying to back up 12TB of email data. Just a side note: for most backup systems it is not the size that matters much (because only a small portion of the data gets changed each day on a typical IMAP storage), it is the *NUMBER* of files, because - for an incremental backup - each file's stat data has to be read and compared. With ZFS (gzip compression activated), we get 31 million files done in 3-4 hours (example with Tivoli Storage Manager):

01/03/09 23:38:48 --- SCHEDULEREC STATUS BEGIN
01/03/09 23:38:48 Total number of objects inspected: 31,305,555
01/03/09 23:38:48 Total number of objects backed up: 25,347
01/03/09 23:38:48 Total number of objects updated: 0
01/03/09 23:38:48 Total number of objects rebound: 0
01/03/09 23:38:48 Total number of objects deleted: 0
01/03/09 23:38:48 Total number of objects expired: 308,991
01/03/09 23:38:48 Total number of objects failed: 0
01/03/09 23:38:48 Total number of bytes transferred: 2.56 GB
01/03/09 23:38:48 Data transfer time: 280.87 sec
01/03/09 23:38:48 Network data transfer rate: 9,594.01 KB/sec
01/03/09 23:38:48 Aggregate data transfer rate: 209.05 KB/sec
01/03/09 23:38:48 Objects compressed by: 15%
01/03/09 23:38:48 Elapsed processing time: 03:34:49
01/03/09 23:38:48 --- SCHEDULEREC STATUS END
01/03/09 23:38:48 --- SCHEDULEREC OBJECT END SA_SO_MAIL 01/03/09 20:00:00
01/03/09 23:38:48 Executing Operating System command or script: /mail/bin/pr_backupsnapshot_off

Just to give you a real-life number... Pascal
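The log's numbers are internally consistent, and the gap between the two rates shows exactly why the file count dominates. A quick back-of-the-envelope check using the figures from the log above (the arithmetic is mine; "KB" is taken as 1024 bytes):

```python
# Figures from the TSM log: 2.56 GB moved, elapsed time 03:34:49
transferred_kb = 2.56 * 1024 * 1024          # GB -> KB
elapsed_s = 3 * 3600 + 34 * 60 + 49          # 12889 seconds
aggregate_rate = transferred_kb / elapsed_s  # KB/sec over the whole run
print(round(aggregate_rate, 2))              # ~208, matching the reported 209.05 KB/sec
# The network rate (9,594 KB/sec) is ~46x higher because the actual transfer
# took only 280.87 seconds; the rest of the 3.5 hours went into scanning
# the 31 million files' stat data.
```

So roughly 98% of the backup window is metadata scanning, not data movement, which is the point being made about file count versus size.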
Re: choosing a file system
Henrique de Moraes Holschuh h...@debian.org wrote: Ext4 I never tried. Nor reiser3. I may have to; we will build a brand new Cyrus spool (small, just 5K users) next month, and the XFS unlink [lack of] performance worries me. Nobody likes deletes. Even databases used to mark deleted space only as deleted until a vacuum (Postgres) or another periodical maintenance command was run. Cyrus offers a similar construct named delayed expunge. Before we migrated our mail system to Solaris 10, it ran on Linux 2.4 with XFS on an FC SAN device. Deletes were extremely slow, so we had to delay the expunges until the weekend; even at night they were too slow and too I/O-congesting. On the other hand, XFS was the only Linux filesystem capable of handling the 5 million files (at that time; we're now at 33 million) we had in those days with acceptable performance. Ext3 was way too slow with directories with 1000 files (but many things have changed from kernel 2.4.x to today's kernels), and IBM jfs was not stable (it crashed during a high load test, which was an immediate k.o.). We were reluctant to use Reiser then, as it was too new in 2001.
Re: choosing a file system
LALOT Dominique dom.la...@gmail.com wrote: zfs (but we should switch to solaris or freebsd and throw away our costly SAN) Why that? SAN volumes run very well with Solaris 10 hosts (SPARC and x86). You have extended multipathing (symmetric and asymmetric) on board. Solaris accepts nearly all Q-Logic FC cards (in my experience). Pascal
Re: choosing a file system
Robert Banz r...@nofocus.org wrote: At my last job, we had explored a Dell/EMC SAN at one point. Those folks don't seem to understand the idea that Fibre Channel is a well-established standard -- they only expect you to connect their supported stack of hardware and software, otherwise they don't wanna talk. Regarding support as described by the support contract you are right - but I have had many EMC big-iron SAN devices running without a problem with Solaris 10. You have to adapt scsi_vhci.conf if you want symmetric multipathing, as Sun does not recognize many of the FC devices out there which can handle symmetric links. ZFS with SAN devices is perfectly OK. We have 33 million files on our (single!) ZFS mail pool, running gzip compression (Solaris 10 Patch 137137-09 resp. 137138-09). Our Tivoli Storage Manager (tsm) backup runs every night for approximately three hours. Within these 3 hours it scans all files. We do a zfs snapshot every day, and we keep 14 days of snapshots to restore mailboxes. We are not conservative enough to run scrub regularly; the last time I did was last week, without any error. A happy and successful 2009 to all of you! Pascal -- Pascal Gienger pas...@southbrain.com http://southbrain.com/
Re: IMAPS terminating abnormally
David Korpiewski [EMAIL PROTECTED] wrote: I have two cyrus machines running, and on both systems I'm getting a TLS error and then the error "in BUSY state: terminated abnormally". Which cyrus imapd version? Can you set the loglevel to debug in your syslog.conf?
Re: IMAPS terminating abnormally
David Korpiewski [EMAIL PROTECTED] wrote: Thank you for asking questions; I'm very interested in getting this problem solved ASAP. I have turned the debugging level up by editing my /etc/syslog.conf file and adding this line: local6.debug /var/log/mailaccess.log However, I don't see any additional debug information (as shown below). I originally had local6.* which should have gotten the debug information anyway. The version of the OS is OS X 10.5.5. The version of cyrus is (not sure if this is it, but): mail2:bin root# ./deliver 421-4.3.0 usage: deliver [-C alt_config ] [-m mailbox] [-a auth] [-r return_path] [-l] [-D] 421 4.3.0 v2.3.8-OS X Server 10.5: 9C31 OK, I have to pass. This is the Apple version of their mail server; they included many extensions to the original cyrus code. They added netinfo support in SASL2, and Rendezvous/Zeroconf in IMAP. The only thing I know is that SSL handling has been improved since version 2.3.8 (which is supposedly the version Apple used as its base). We are at 2.3.12, with 2.3.13 as release candidate. Did you open a service request with Apple about this issue? If it is OS X 10.5.5 Server you'll have support. If you can live without Rendezvous, you can compile a current release of Cyrus IMAP using Apple's SASL2 library, so you won't give up netinfo capabilities. You will lose Apple support, though. Cyrus IMAP 2.3.12 compiles fine under OS X 10.5 when the Apple SDK is installed (gcc et al.). Pascal
Re: Mapping a username to a Mailbox via LDAP?
Daniel Dewald [EMAIL PROTECTED] wrote: I want the user to log in with his AD credentials and still be routed to his correct mailbox. Is there a mapping feature in Cyrus for mailbox names I'm not aware of? It would be perfect if There are two SASL plugin types dealing with login and user names: 1. canonical: translates the given username to an internal username, which cyrus imapd uses as the mailbox name (with the prefix user.). 2. auxprop: takes the given username, retrieves the stored secret, and returns it to the SASL library. The auxprop plugin can also map the given username to another user name schema used in the authentication/secret database. It passes the given username unchanged to the imap daemon, and that then becomes the mailbox name. In our setup, users log in with their e-mail address, but the mailboxes have our internal uid as name. A canonical plugin does the translation. In your case, a canonical plugin should convert the username into the sid; cyrus imap will use that as the mailbox name. Pascal Gienger Universität Konstanz
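A toy sketch of the canonicalization step described above. Real canonical plugins are C code using the SASL plugin API and typically query LDAP/AD; here the directory lookup is mocked with a dict, and all names in it are invented examples:

```python
# Hypothetical login-name -> internal-uid directory, standing in for LDAP/AD.
DIRECTORY = {
    "pascal.gienger@uni-konstanz.de": "pgienger",  # invented example entry
}

def canonicalize(login: str) -> str:
    """Map the presented login to the internal username; unknown logins pass through."""
    return DIRECTORY.get(login.lower(), login)

def mailbox_for(login: str) -> str:
    # imapd derives the mailbox name from the canonical username with the user. prefix
    return "user." + canonicalize(login)

print(mailbox_for("Pascal.Gienger@uni-konstanz.de"))  # user.pgienger
```

The same shape applies to the AD case asked about: the lookup would return the sid instead of a uid, and imapd would never see the original login name.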
Re: Conversion Debian Cyrus 2.1 to 2.2, experiences
Paul van der Vlis [EMAIL PROTECTED] wrote: then convert the databases (on one line): find /var/lib/cyrus/ -name \*.db -print -exec /usr/bin/db4.2_upgrade {} \; db_upgrade: /var/lib/cyrus/mailboxes.db: unrecognized file type So mailboxes.db did not work, but the other databases did. Just a side note: I am pretty sure your mailboxes.db is a skiplist database which is AFAIK the default for mailboxes.db in Cyrus IMAP 2.1 and 2.2. No conversion is necessary. Do you have any database type declarations in your imapd.conf? Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Cyrus vs Dovecot
Mathieu Kretchner [EMAIL PROTECTED] wrote: Ian G Batten a écrit : We have mailboxes.db and the metapartitions on ZFS, along with the zone itself. The pool is drawn from space on four 1rpm SAS drives internal to the machine: To give a (hopefully) comparable data point: we also have our meta files and spool files on ZFS, with mirrored pools:

# zpool status
  pool: cyrus
 state: ONLINE
 scrub: resilver completed with 0 errors on Sun May 25 12:17:46 2008
config:
        NAME                                       STATE     READ WRITE CKSUM
        cyrus                                      ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600D0230006B66680C50AB4F92F61000d0  ONLINE       0     0     0
            c6t600D0230006C1C4C0C50BE4DFE511B00d0  ONLINE       0     0     0
errors: No known data errors

  pool: mail
 state: ONLINE
 scrub: resilver completed with 0 errors on Sun May 25 01:05:02 2008
config:
        NAME                                       STATE     READ WRITE CKSUM
        mail                                       ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600D0230006B66680C50AB0F36ADF100d0  ONLINE       0     0     0
            c6t600D0230006C1C4C0C50BE57396E9F00d0  ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600D0230006B66680C50AB5675F91300d0  ONLINE       0     0     0
            c6t600D0230006C1C4C0C50BE16FF1FE200d0  ONLINE       0     0     0
errors: No known data errors

cyrus is our log pool, mail our IMAP spool pool. I/O is mostly write:

# zpool iostat mail 2
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mail        2.08T  6.02T    226    163  1.36M  1.67M
mail        2.08T  6.02T    358     10  1.35M  94.4K
mail        2.08T  6.02T    234    599  1.08M  10.0M
mail        2.08T  6.02T     77      0   425K  3.98K
mail        2.08T  6.02T     85    306   484K  3.39M
mail        2.08T  6.02T     95      8   405K  75.6K
mail        2.08T  6.02T    107      6   798K  47.8K
mail        2.08T  6.02T     73    232   281K  2.30M
mail        2.08T  6.02T     77      2   304K  9.95K
mail        2.08T  6.02T     66    469   254K  5.84M
mail        2.08T  6.02T     83      4   409K  17.9K

As with Ian's setup, most read requests are serviced from the ARC. We have BOTH kinds of data (meta and spool) on this ZFS pool, but we defined an extra ZFS filesystem for the metadata to make distinct snapshots. cyrus.header remains on the IMAP spool partition. Raw disk I/O is different, as ZFS pulls up to recordsize from disk per request (128k by default).
Load is 0.47 at the moment, 1355 imapd processes, 10 lmtpd processes (limited by the delivering gateway), 34 pop3d processes. The machine is a two-processor Opteron (dual-core) machine, so 4 cores are available. It has 20 GB RAM, and the ZFS ARC uses:

# kstat zfs:0:arcstats:size
module: zfs                             instance: 0
name:   arcstats                        class:    misc
        size                            9308832256

That is a 9 GB ZFS file cache. Hope this helps you a little bit. Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Cyrus vs Dovecot
Mathieu Kretchner [EMAIL PROTECTED] wrote: I thank you for your participation, but not for the way you did it, because if you had read my second mail on this topic you wouldn't have had to ask your crystal ball what my configuration is!! I did miss your 2nd post, I am sorry. J'ai manqué votre 2ème message, prière de m'excuser. (I missed your 2nd message, please excuse me.) Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: [Dovecot] Cyrus vs Dovecot
Mathieu Kretchner [EMAIL PROTECTED] wrote: kbajwa a écrit : Cyrus = 0 Dovecot = 100 I guess you're right, but I can't post this answer to the Cyrus mailing list. I'm just trying to form my own opinion of IMAP servers and I already got sarcastic answers on the cyrus mailing list! Stop. What's this? a) Crossposting content to the Dovecot mailing list? b) Talking about sarcastic answers when users try to help you by saying that migrating from an old Cyrus release to a new one is easier than migrating to a new system? c) Many users here have described their running configurations to help you. d) Starting an advocacy war? What are you trying to do? Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Cyrus vs Dovecot
Mathieu Kretchner [EMAIL PROTECTED] wrote: Ok, thanks! A good point on the Cyrus side! What about the performance? That depends on the size of your scenario. Will my next configuration run correctly? Which hardware should I buy for this activity? That's like asking a crystal ball whether the woman you love is the right one to marry... How many users used your old server? How many incoming messages per second? How much storage was used? How many clients connected simultaneously to your old Cyrus server? At present we have a lot of I/O; we wonder if the latest version of cyrus is improved on this point? We stored (before using greylisting) 30 messages/sec via Cyrus on two mirrored SAN volumes. 1200 imapd processes are running at peak times. No murder. 8 TB storage; 2 TB used. 58,000 mailboxes; 12,000 users. 20 GB RAM, 14 GB ARC (ZFS cache). SAN via Sun MPxIO (scsi_vhci). You see, without knowing the size of your old Cyrus solution it is not possible to say whether it is enough or not. Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Cyrus vs Dovecot
Adam Tauno Williams [EMAIL PROTECTED] wrote: If you have a lot of users and a lot of mail you are going to have a lot of I/O. No way around that, regardless of the server. Cyrus indexes headers better now, so that might help. But you still need adequate throughput. I've found that /var/lib/imap (meta-data) needs fast I/O, but /var/spool/imap (message store) doesn't do too badly tossed onto a SAN or slower disks. That depends on how many messages per second you have to deliver. Normally you are right, the meta partition is heavy random I/O. With high mail receiving rates you need fast write storage for your IMAP spool as well. Filesystems like ZFS, which write their contents nearly sequentially, are ideal for this kind of work. And: use a 64-bit system and add plenty of RAM. Your OS will (hopefully) cache all the heavily-used metadata in RAM. We see 98% cache hits on our meta partition. Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Any command to get rapidly ALL annotations?
Is there a way to get all annotations with one IMAP or Cyrus command? We are using annotations here to be able to set an expiration time on spam mailboxes (messages older than x days are deleted automatically at night with cyr_expire). To get a small statistic we go through all mailboxes and use GETANNOTATION to retrieve possible annotations, which is a time-consuming process. GETANNOTATION does not accept wildcards the way LIST does. Berkeley DB db_dump is not a good idea either, because even with -p it reports database corruption in certain circumstances, and it won't work any more once we move to skiplist for the annotation database. Pascal -- [EMAIL PROTECTED] http://southbrain.com/ Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
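The brute-force loop described above can be sketched with Python's stdlib imaplib, which happens to ship a getannotation() method for the Cyrus ANNOTATEMORE extension. The host, credentials, and the exact LIST response format are assumptions (Cyrus hierarchy separators and quoting vary), so the mailbox-name parsing is kept in a pure helper:

```python
# Sketch: LIST all mailboxes, then issue GETANNOTATION per mailbox.
# Assumes a Cyrus server with the ANNOTATEMORE extension; the
# /vendor/cmu/cyrus-imapd/expire entry is the expiration annotation.
import imaplib
import re

def mailbox_names(list_lines):
    """Pull mailbox names out of raw LIST response lines
    (pure helper, so it can be exercised without a server)."""
    names = []
    for raw in list_lines:
        line = raw.decode() if isinstance(raw, bytes) else raw
        m = re.match(r'\([^)]*\) "." "?([^"]+)"?$', line)
        if m:
            names.append(m.group(1))
    return names

def dump_annotations(host, user, password,
                     entry="/vendor/cmu/cyrus-imapd/expire"):
    imap = imaplib.IMAP4(host)
    imap.login(user, password)
    typ, data = imap.list()
    for mbox in mailbox_names(data):
        # one round trip per mailbox -- this is the slow part
        typ, ann = imap.getannotation(mbox, entry, '"value.shared"')
        print(mbox, ann)
    imap.logout()
```

One round trip per mailbox is exactly why this is slow with tens of thousands of mailboxes; a server-side wildcard would collapse it to one command.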
Re: Cyrus infrastructure performance less than expected
Eric Déchaux [EMAIL PROTECTED] wrote: I would have loved to put Solaris, zones and Messaging Server here, but it was not a possibility. The customer's choice was VMware + Linux + Cyrus. Just as a sidenote: as closed source is not an option here, we use Cyrus IMAP 2.3.12 on Solaris and not Messaging Server. 10-12 GB of RAM is used as ZFS ARC cache, and things are going fast enough now. Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Cyrus infrastructure performance less than expected
Eric Déchaux [EMAIL PROTECTED] wrote: The older infrastructure can stand the 42,000 concurrent sessions, the new one can't: I was expecting each frontend to be able to handle 5,500 concurrent sessions, but they do not. Around 3,000-3,500 concurrent sessions the frontends begin to swap and are no longer able to keep up with the load. Did I understand correctly: you are virtualizing each component, and your frontends begin to swap in their virtual environment? Is there any reason why you don't assign more RAM to them? Do your frontend virtual machines run a 64-bit kernel? We abandoned Linux for our Cyrus servers and switched to Solaris 10 with zones and ZFS. We have fewer concurrent users than you, but more storage (10 TB at the moment). Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Miserable performance of cyrus-imapd 2.3.9 -- seems to be locking issues
Jeff Fookson [EMAIL PROTECTED] wrote: Databases are all skiplist. As a rule of thumb, do not use skiplist for the duplicate delivery suppression database (deliver.db). Even if everybody hates it, use BerkeleyDB, version 4.4.52 or higher. Give it a fair amount of shared memory. And run cyr_expire often to prune that database, so that no entry is older than - say - 3 days. We have approx. 10-15 messages/sec incoming on one node. Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
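The periodic pruning mentioned above is usually wired into the EVENTS section of cyrus.conf; a minimal sketch (the event name and the 04:00 start time are arbitrary examples):

```
EVENTS {
  # prune duplicate-delivery entries older than 3 days, nightly at 04:00
  delprune   cmd="cyr_expire -E 3" at=0400
}
```

The master process then runs cyr_expire on that schedule, keeping deliver.db from growing without bound.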
Re: Endgame: Cyrus big install at UC Davis
Vincent Fox [EMAIL PROTECTED] wrote: Several people have asked for the IDR number from Sun that gave us the performance optimizations we needed. Management says the agreement for the IDR we got prohibits this. I just opened a case with our Sun representative, so we hope to get this ZFS patch as well. filebench's varmail workload leaves us with 1.5 seconds (!) for an fsync(), which is really a big performance hit. semi off-topic As a side note: I had some fun with dtrace visualizing read and write access to our zpool mirror (ZFS mirror pool) on our mail server, consisting of two FibreChannel RAID devices. You can see that writes are nearly sequential (due to ZFS) and reads are random I/O with some hot spots. Of course, the effect of memory caching (ARC) is not included. This zpool is used for IMAP storage, queue spool and log files. All filesystems are compressed. You may find the animated GIF (be warned, it is 5 MB in size) in my little blog: http://southbrain.com/south/ /semi off-topic Pascal Gienger -- Rechenzentrum Universität Konstanz Computing Center University of Constance Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Endgame: Cyrus big install at UC Davis
Pascal Gienger [EMAIL PROTECTED] wrote: I just opened a case with our Sun representative, so we hope to get this ZFS patch as well. filebench's varmail workload leaves us with 1.5 seconds (!) for an fsync(), which is really a big performance hit. FYI: I just got a reply from Sun Support in Germany stating that the patch for bug 6535160 (the bug Vincent Fox submitted) will go public on SunSolve tomorrow. For x86 the patch will have id 127729-07. For SPARC it will have the major number 127728. So hopefully there is no need for special customer service patches for this issue. Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: fosdem
Rudy Gevaert [EMAIL PROTECTED] wrote: Hi, is anybody who is using Cyrus heading to FOSDEM (that's in Brussels, Belgium)? http://www.fosdem.org I would very much like to come, but I did not find a registration link on the webpage. And I love Brussels *g* Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Solaris ZFS Cyrus : BugID 6535160
Vincent Fox [EMAIL PROTECTED] wrote: With ZFS there are a number of hits with fdsync bug ids. This connects back to, I believe, Pascal Gienger's thread, which I think resulted in him turning off the ZIL to bump up performance. This is speculation at this point; anyhow, here's the bug id information for those who are interested: I did not turn off the ZIL - I did not want to stick my neck out... Turning off the ZIL is not a very good idea. You see, every file system can be made fast by turning off everything that guarantees its consistency. Nevertheless, I did not want a fast mail storage which is completely unreliable ;-) The ZIL lock problem appeared once, but disappeared completely like a miracle. I have reasonable figures now on our servers after reconfiguring the storage side; it turned out to be a random I/O problem on the storage, and Cyrus is _HEAVY_ random I/O. Not huge data rates, but huge seek rates. You can give your storage system some hints: writes in ZFS appear as sequential ones. Blocks are written in a sequential way (due to COW), but are read in a random way (for sure). Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Cyrus on Solaris at universities?
Vincent Fox [EMAIL PROTECTED] wrote: Just wondering what other universities are runing Cyrus on Solaris? We know of: CMU UCSB University of Konstanz, Germany Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: LARGE single-system Cyrus installs?
Vincent Fox [EMAIL PROTECTED] wrote: This thought has occurred to me: ZFS prefers reads over writes in its scheduling. I think you can see where I'm going with this. My WAG is something related to Pascal's, namely latency. What if my write requests to mailboxes.db or deliver.db start getting stacked up, due to the favoritism shown to reads? I got substantial benefits from setting compression=on and recordsize=32K on the filesystem where deliver.db resides. After talking with our SAN staff it turned out that the storage was our problem - it has problems with concurrent write and read calls; the system won't answer read requests if the write channel is full. I don't know whether it is a firmware issue or a limitation of the storage system. Lowering ZFS' recordsize and activating compression on that partition cut down the I/O rate, and things are back to normal here. Thanks to all who helped! Pascal PS: The mirror resilvering problem was a misconfiguration of a Brocade switch... Sometimes you can't see the forest for the trees (German proverb: "Man sieht den Wald vor lauter Bäumen nicht")... Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
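For reference, the two property changes described above are one-liners; a sketch, assuming the deliver.db filesystem is a dedicated dataset (the pool/filesystem name "mail/config" is an invented example):

```
# settings as described above; dataset name is an example
zfs set recordsize=32K mail/config
zfs set compression=on mail/config
```

Note that recordsize only affects blocks written after the change, so an existing deliver.db picks it up as it is rewritten.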
Re: LARGE single-system Cyrus installs?
Rob Banz [EMAIL PROTECTED] wrote: We went through a similar discussion last year in OpenAFS land, and came to the same conclusion -- basically, if your filesystem is reasonably reliable (such as ZFS is), and you can trust your underlying storage not to lose transactions that are in-cache during a 'bad event', the added benefit of fsync() may be less than its performance cost. Wouldn't it be nice to have a configuration option to completely turn off fsync() in Cyrus? If you want, with a BIG WARNING in the docs stating NOT TO USE IT unless you know what you are doing. :) Pascal (in the middle of reconfiguring our SAN to run more Cyrus checks) PS: Putting deliver.db on tmpfs seems to be a nice idea, but in the current Cyrus code you cannot give individual paths to single Cyrus databases. Our current deliver.db on one machine is about 600 MB, so it would be no problem to store it entirely on tmpfs. Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: LARGE single-system Cyrus installs?
Rob Mueller [EMAIL PROTECTED] wrote: About 30% of all I/O is to mailboxes.db, most of which is read. I haven't personally deployed a split-meta configuration, but I understand the meta files are similarly heavy I/O concentrators. That sounds odd. Given the size and hotness of mailboxes.db, and in most cases the size of mailboxes.db compared to the memory your machine has, the OS should basically end up caching the entire thing in memory. Solaris 10 does this in my case. Via dtrace you'll see that open() on mailboxes.db and the read calls do not exceed microsecond ranges. mailboxes.db is not the problem here. It is entirely cached and rarely written (only on creating, deleting or moving a mailbox). Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: LARGE single-system Cyrus installs?
Michael Bacon [EMAIL PROTECTED] wrote: I have heard tell of funny behavior from ZFS if you've got battery-backed write caches on your arrays. /etc/system: set zfs:zfs_nocacheflush=1 is your friend. Without that, ZFS' performance on hardware arrays with large RAM caches is abysmal. Some arrays can be configured to ignore these flush requests themselves, yet still honor them when the internal battery is faulted or recharging (together with switching to write-through mode). Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Just in case it is of general interest: ZFS mirroring was the culprit in our case
Our latency problems went away like a miracle when we detached one half of the mirror (so it is no longer a mirror). Read rates doubled (not per device - the total read rate!), and latency is cut down. No more latency problems. When attaching the volume again, resilvering brings the system to a halt - reads and writes block for seconds (!). We will follow up directly with Sun to solve the problem. Their lowest I/O priority for resilvering disks does not seem to be effective. It really blocks the kernel, and you end up with thousands of locks in zfs_zget. We have two SAN volumes in different buildings which are NOT the bottleneck - tests show it. Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: LARGE single-system Cyrus installs?
Vincent Fox [EMAIL PROTECTED] wrote: Our working hypothesis is that CYRUS is what is choking up at a certain activity level due to bottlenecks with simultaneous access to some shared resource for each instance. Did you do a lockstat -Pk sleep 30 (with -x destructive when it complains about the system being unresponsive)? We had this result, among others:

Adaptive mutex block: 2339 events in 30.052 seconds (78 events/sec)

Count indv cuml rcnt       nsec Lock       Caller
-------------------------------------------------------------
  778  79%  79% 0.00  456354473 0xa4867730 zfs_zget
   61   6%  85% 0.00  466021696 0xa4867130 zfs_zget
    8   1%  87% 0.00  748812180 0xa4867780 zfs_zget
   26   1%  88% 0.00  200187703 0x9cf97598 dmu_object_alloc
    2   1%  89% 0.00 1453472066 0xa4867de0 zfs_zget
   12   1%  89% 0.00  204437906 0xa4863ad8 dmu_object_alloc
    4   1%  90% 0.00  575866919 0xa4867838 zfs_zinactive
    5   1%  90% 0.00  458982547 0xa48677b8 zfs_zget
    4   1%  91% 0.00  563367350 0xa4867868 zfs_zinactive
    3   0%  91% 0.00  629688255 0xa48677b0 zfs_zinactive

Nearly all locks are caused by ZFS. The SAN disk system is NOT the bottleneck, though, having average service times of 5-8 ms and no wait queue. 456354473 ns is 0.456 s - that is *LONG*. What's also interesting is tracing open() calls via dtrace. Just use this:

#!/usr/sbin/dtrace -s
#pragma D option destructive
#pragma D option quiet

syscall::open:entry
{
        self->ts = timestamp;
        self->filename = arg0;
}

syscall::open:return
/self->ts > 0/
{
        zeit = timestamp - self->ts;
        printf("%10d %s\n", zeit, copyinstr(self->filename));
        @["open duration"] = quantize(zeit);
        self->ts = 0;
}

It will show you all files opened and the time needed (in nanoseconds) to accomplish that.
After hitting CTRL-C, it will summarize:

  open duration
           value  ------------- Distribution ------------- count
            1024 |                                         0
            2048 |@                                        80
            4096 |@@@@@@@@@@@@@@@@@@@@@                    1837
            8192 |@@@@@@                                   521
           16384 |@@@@@@@                                  602
           32768 |@@@                                      229
           65536 |@                                        92
          131072 |                                         2
          262144 |                                         0
          524288 |                                         1
         1048576 |                                         1
         2097152 |                                         1
         4194304 |                                         3
         8388608 |                                         12
        16777216 |@                                        51
        33554432 |                                         38
        67108864 |                                         25
       134217728 |                                         9
       268435456 |                                         2
       536870912 |                                         3
      1073741824 |                                         0

You can see the ARC memory activity at 4-65 microseconds and disk activity at 8-33 ms. And you can see some big hits from 0.13 to 0.5 s (!). This is far too much, and I have not figured out why it is happening. As more users connect, these really long open() calls become more and more frequent. We have a Postfix spool running on the same machine, and we got some relief by deactivating its directory hashing scheme. ZFS seems to be very unhappy with deep directory structures. But still, these long opens occur. Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: CyrusIMAP for AMD64 Opteron ??
BipinDas [EMAIL PROTECTED] wrote: Dear List, is there any specific source of Cyrus IMAP for the AMD64 Opteron series? I am getting a strange error while compiling Cyrus IMAP 2.3.1 on the above-mentioned server. The error is as follows:

LD_RUN_PATH=/usr/lib64:/lib64 gcc -shared -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic IMAP.o -o blib/arch/auto/Cyrus/IMAP/IMAP.so ../../lib/libcyrus.a ../../lib/libcyrus_min.a \
        -ldb -lsasl2 -lssl -lcrypto
/usr/bin/ld: cannot find -lssl
collect2: ld devolvió el estado de salida 1
make[2]: *** [blib/arch/auto/Cyrus/IMAP/IMAP.so] Error 1
make[2]: se sale del directorio `/opt/src/cyrus-imapd-2.3.1/perl/imap'

It can't find your 64-bit SSL libraries to build the Perl modules. Is your Perl up to date? Is the compiler used to build Perl the same as the one used to compile Cyrus IMAP? If you can't find the solution, just skip the perl target in the Makefile; the Perl modules are not needed to run Cyrus IMAP. Which operating system are you using? Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: UC Davis Cyrus Incident September 2007
Vincent Fox [EMAIL PROTECTED] wrote: The root problem seems to be an interaction between Solaris' concept of global memory consistency and the fact that Cyrus spawns many processes that all memory map (mmap) the same file. Whenever any process updates any part of a memory mapped file, Solaris freezes all of the processes that have that file mmaped, updates their memory tables, and then re-schedules the processes to run. Now we are getting closer - I experience the same behaviour when load goes up. Which file is mmap'd by _all_ Cyrus processes? I understand that the local index files in every mailbox are mmap'd after a customer logs in to an imap process, or when a delivery via lmtpd is made to that mailbox, but which global file is mmap'd by all processes? mailboxes.db using skiplist? Does the problem also arise with 2000 processes using the same db4 library to access the same Berkeley database? Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: LIST is slow for 35K mailboxes
Ian G Batten [EMAIL PROTECTED] wrote: On 09 Oct 07, at 15:22, Blake Hudson wrote: Could database type differences (or contention) be an issue here? What database format are each of you using? Yes. With skiplist (it took me several stabs to get the conversion to work) it takes 0.19s, versus ~250s with BDB. A slight difference. Thanks for pointing me at the idea. Yes, this is the expected result. Skiplist is good at enumeration, as there is always a lane pointing from one mailbox to the next one without skipping. Berkeley DB is very fast at doing _real_ random accesses and at inserting new or changed values, but it does a lousy job of enumerating entries alphabetically or numerically. So Berkeley is perfect for the TLS session database, for annotations, for duplicate delivery suppression and - as Bron Gondwana does - for caching status data. Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
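The ordered-vs-unordered contrast above can be shown with a toy: an ordered structure answers a prefix enumeration (what LIST does over mailboxes.db) by seeking once and walking forward, while an unordered hash-style store has to scan every key. Here, `bisect` on a sorted list stands in for the skiplist's bottom lane; the mailbox names are invented:

```python
# Toy illustration: prefix enumeration over an ordered key space,
# the access pattern that makes skiplist fast for LIST.
import bisect

mailboxes = sorted([
    "user.anna", "user.anna.Spam", "user.bob",
    "user.bob.Drafts", "user.carol",
])

def enumerate_prefix(sorted_keys, prefix):
    """Seek to the first key >= prefix, then walk while it matches --
    O(log n) seek plus O(matches) walk, independent of total size."""
    i = bisect.bisect_left(sorted_keys, prefix)
    out = []
    while i < len(sorted_keys) and sorted_keys[i].startswith(prefix):
        out.append(sorted_keys[i])
        i += 1
    return out

print(enumerate_prefix(mailboxes, "user.bob"))
# -> ['user.bob', 'user.bob.Drafts']
```

The hash-backed equivalent is a full scan of all n keys per LIST, which is where the 0.19s-vs-250s gap comes from.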
Re: Cyrus IMAP 2.3.9 on Solaris 10 with ZFS and SAN
You wrote: By the way I tried this on a fully patched Solaris 10u3 system and get this notice during boot: sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module We have Solaris 10 08/07. Also fully patched. Kernel is SunOS 5.10 Generic_120012-14 i86pc This is 10u4 for x86. The SPARC version U4 has the same ZFS vars. They changed some zfs variables and are doing the same again in their Nevada release. As I am on vacation at the moment I don't have the u3 info here. I'll return Sep 29th. Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
RE: Cyrus IMAP 2.3.9 on Solaris 10 with ZFS and SAN
Vincent Fox [EMAIL PROTECTED] wrote: Pascal, how many accounts did you have per mail store? We have 14k(*) users and a failover mail store. In our SAN we have 3.5 TB of storage reserved, which can be expanded as needed. We use 2 SAN storage systems, in separate locations, so if one location catches fire, the system still works. The mirroring between them is done via ZFS; the redundancy between the two FibreChannel links to each storage system is handled via scsi_vhci. Pascal (*) 100k mailboxes, and a webmail application which often LOGINs and SELECTs, multiple times per webpage reload. Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Cyrus IMAP 2.3.9 on Solaris 10 with ZFS and SAN
Wesley Craig [EMAIL PROTECTED] wrote: I might suggest running a connection caching daemon; up-imapproxy springs to mind. LOGIN + SELECT is not exactly lightweight. Actually, login is lightweight and was never a source of performance problems. As the webmail applications talk to our Cyrus servers over a private subnet, there is no need for highly secure algorithms, and CRAM-MD5 is fast. For the SELECTs, I will try Bron's patch to install a status database. It seems a smart idea; it already runs on my test system here. A proxy would mean another possible point of failure. How stable is that proxy daemon? Another two boxes for redundancy? In our installation it is not possible to log on as cyrus or other admin users from outside; I patched the imapd to check the caller's IP address - and the firewall does the rest, so that no forged packet (with an inside sender IP coming from outside...) will trigger it. Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
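To see why CRAM-MD5 is cheap: the whole client side of the mechanism is a single HMAC-MD5 over the server challenge (RFC 2195). A sketch with the Python stdlib; the username, password, and challenge below are made-up examples:

```python
# CRAM-MD5 client response per RFC 2195: the response line is the
# username, a space, and hex(HMAC-MD5(password, challenge)).
# Names and secrets here are invented examples.
import hashlib
import hmac

def cram_md5_response(username, password, challenge):
    digest = hmac.new(password.encode(), challenge.encode(),
                      hashlib.md5).hexdigest()
    return "%s %s" % (username, digest)

resp = cram_md5_response("webmail", "secret",
                         "<1896.697170952@postoffice.example.net>")
print(resp)  # "webmail " followed by 32 hex digits
```

One HMAC computation per login is negligible next to the cost of the SELECT that follows it, which matches the observation above.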
Re: Cyrus IMAP 2.3.9 on Solaris 10 with ZFS and SAN
Bron Gondwana [EMAIL PROTECTED] wrote: But yeah, connection caching is nice. Even just the fork overhead on the backend servers is something we can do without if it's avoidable. My own Perl IMAP mod_perl routines use this: http://search.cpan.org/~mws/ResourcePool-1.0104/lib/ResourcePool.pod Works flawlessly - but you still have the problem that you have X connections per user for X httpd processes running. I/We wrote a web frontend for those who don't want to use webmail (the majority use their own MUA) but still want to control their anti-spam settings, quotas, automatic vacation responses, and automatic expiration of messages in user-selectable folders (via annotations.db). We also use annotations heavily to prevent spam folders from being indexed by squatter. Pascal Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Cyrus IMAP 2.3.9 on Solaris 10 with ZFS and SAN
Just a little note, for those who perhaps have the same problem. We saw performance problems after we switched from a Linux installation to a Solaris 10 cluster connected to our SAN (using scsi_vhci and 2 QLogic controllers). Problems arose when real load hit the machine, despite having tested it with some load simulation scenarios... Looking at the attached chart[0], you will notice up to 10 seconds to select a mailbox - far too long. The problem was ZFS - ZFS is perfect for rubbish, cheap disks with ugly firmware. So if your storage is too good, it makes things go bad. First: the file prefetch algorithm does not seem to be very good for 20 million mail files and 300,000 Cyrus meta files... ;-) So the first step (at 11am in the chart) was to disable this prefetching routine. [1] This cut 4 seconds off requests, but 10-second timeouts were still seen. The next step was to disable the ZFS cache flush. As said, ZFS is made for rubbish disks, so every 5 seconds it instructs the SATA or SCSI drives to flush their internal RAM to disk. This is good for cheap disks, but a no-go if you have a SAN RAID storage system with 2 gigabytes or more of battery-backed RAM cache. In fact, our storage system really did flush its complete RAM cache every 5 seconds; you could even see it watching the blinkenlights. Plus, every fsync() call did the same... :( There is a trick: you can disable this in ZFS [2]. You see the result at 4pm (16:00 European notation) in the chart. CHILDREN, DO NOT TRY THIS AT HOME. Only do this if you don't have any real physical disk storage attached to your system with ZFS pools on it - otherwise you will lose data on power outages, as the RAM cache of your hard disk is not backed by a battery. Now the machine runs and handles all mail without noticeable delays.
[0] http://priscilla.rz.uni-konstanz.de/mailserver/
[1] in /etc/system: set zfs:zfs_array_rd_sz=0
    on a live system using mdb -kw: zfs_array_rd_sz/Z0x0
[2] in /etc/system: set zfs:zfs_nocacheflush=1
    on a live system using mdb -kw: zfs_nocacheflush/W0t1
attachment: mailselect.png Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Penalty timeout bug in pop3d
Pascal Gienger [EMAIL PROTECTED] wrote: Hi, was there a special reason to forget the sleep(3) penalty timeout after an invalid-user auth NAK message? I did this because we had a POP3 client running wild while trying out unknown SASL users... Imapd version is 2.3.9, diff -c is as follows:

# diff -c pop3d.c.orig pop3d.c
*** pop3d.c.orig        Mon Sep 17 13:57:19 2007
--- pop3d.c     Mon Sep 17 13:58:01 2007
***************
*** 1217,1226 ****
           !(config_virtdomains &&  /* allow '.' in dom.ain */
             (domain = strchr(userbuf, '@')) && (dot > domain))) ||
          strlen(userbuf) + 6 > MAX_MAILBOX_NAME) {
-         prot_printf(popd_out, "-ERR [AUTH] Invalid user\r\n");
          syslog(LOG_NOTICE, "badlogin: %s plaintext %s invalid user",
                 popd_clienthost, beautify_string(user));
      }
      else {
          popd_userid = xstrdup(userbuf);
--- 1217,1227 ----
           !(config_virtdomains &&  /* allow '.' in dom.ain */
             (domain = strchr(userbuf, '@')) && (dot > domain))) ||
          strlen(userbuf) + 6 > MAX_MAILBOX_NAME) {
          syslog(LOG_NOTICE, "badlogin: %s plaintext %s invalid user",
                 popd_clienthost, beautify_string(user));
+         sleep(3);
+         prot_printf(popd_out, "-ERR [AUTH] Invalid user\r\n");
      }
      else {
          popd_userid = xstrdup(userbuf);

-- Pascal Gienger Rechenzentrum Univ. Konstanz Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Weird cyrus user
Bob Marcan [EMAIL PROTECTED] wrote:

May 17 10:41:06 populus2 lmtpunix[15310]: DBERROR: error fetching user.avucajnk: cyrusdb error

Does a reconstruct -f -r user.avucajnk help? If so, the cyrus.* database files were corrupted in the user's mail directory.
Re: Mailbox is locked by POP server
former03 | Baltasar Cevc [EMAIL PROTECTED] wrote:

Hi Martin, personally I do not use POP3 any more - however, I remember that there was a limitation of 1 connection per mailbox in other servers.

Yes. The POP3 RFC states that operations such as DELEting a message only become final after issuing a QUIT command. If the connection breaks, no change is made to the mailbox. To allow multiple concurrent POP3 sessions on one mailbox, a complete transaction-based model including rollbacks would be needed to implement proper POP3. I don't think anybody wants to improve pop3d to accomplish that ;-)
Re: Does the quota include deleted but not yet expunged mails in v2.3 with delayed expunge?
David Carter [EMAIL PROTECTED] wrote:

On Thu, 9 Nov 2006, Farzad FARID wrote: I'm running Cyrus Imapd 2.3.7 with the delayed expunge mode. Do the messages deleted by the user, but not yet expunged by the system, count in the user's quota? I'd say yes but I'd like a confirmation.

Yes. \Deleted is just another flag on messages.

He is not talking about deleted but not yet expunged mails; he is talking about the delayed expunge mode. That means the user expunges the mailbox, so all messages flagged \Deleted are expunged and removed from the user's quota - but the messages still remain physically in the file system until they are really removed by the expire process. This is helpful if you want to be able to restore accidentally deleted files very fast, or if your file system has very bad unlink performance, like xfs.

So, Mr Farid: messages deleted and expunged by the user do not count in the user's quota, even with the delayed expunge mode turned on. But keep in mind that the delayed messages still gobble up space in your filesystem.

Pascal Gienger
Replication woes with a specific mailbox...
We had a strange exception here: a user's inbox could be replicated without problems, but doing it a second time fails. sync_client was invoked like this: sync_client -v -u X

Jul 28 10:52:17 priscilla sync_client[21326]: RENAME received NO response: Rename failed user.X - user.X.Uni: Operation is not supported on mailbox
Jul 28 10:52:17 priscilla sync_client[21326]: do_folders(): failed to rename: user.X - user.X.Uni
Jul 28 10:52:17 priscilla sync_client[21326]: Error in do_user(X): bailing out!

sync_client seems to try to RENAME the inbox to the subfolder Uni, which is complete nonsense. It is the only mailbox where this appears. Replicating in mailbox mode does work, however. I did a mailboxes.db dump (ctl_mboxlist -d) and could not find any inconsistencies in that user's mailboxes, including the one named Uni. Has anyone had the same problem? :) It is difficult to submit a bug report because I can't tell the exact condition under which this appears. The username was replaced by X for privacy reasons.

Cyrus Home Page: http://asg.web.cmu.edu/cyrus Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
Re: Replication woes with a specific mailbox...
David Carter [EMAIL PROTECTED] wrote:

Do the mailboxes have the same UniqueID (see the cyrus.header files)? The replication engine expects UniqueID to be unique. Cyrus makes a bit of a hash of renaming user inboxes (user.XXX - user.XXX.Uni). Removing the cyrus.header file and running reconstruct should fix the problem.

That fixed the problem. Thank you! I wonder why these IDs were not unique...

Pascal
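For the archives, the fix David describes can be sketched like this (a sketch only: the spool path and mailbox name are examples for a setup like ours, and you should block access to the mailbox while doing it):

```shell
# Remove the stale header file so reconstruct generates a fresh UniqueID
rm /var/spool/imap/user/X/Uni/cyrus.header

# Rebuild the mailbox metadata, running as the cyrus user
su cyrus -c "/usr/cyrus/bin/reconstruct user.X.Uni"
```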
RE: High availability email server...
David S. Madole [EMAIL PROTECTED] wrote:

That's just not true as a general statement. SAN is a broad term that applies to much more than just farming out block devices. Some of the more sophisticated SANs are filesystem-based, not block-based. This allows them to implement more advanced functionality like cross-platform sharing of volumes, simultaneous mounts of volumes from different hosts, backups (and single-file restores) performed by the SAN system, pooling of free space, transparent migration to offline storage, etc., etc., etc.

In my classical view, a SAN is a network used for storage applications to give a view of shareable block devices. There are applications giving access to the same filesystem in a shareable manner (such as GFS or OCFS), but this is software logic at the filesystem and firmware level, not in the classical SAN components like JBOD arrays, RAID controllers and FC or IP switches.

In the Apple case we need to distinguish the Apple Xsan hard disk chassis from the Xsan software. The Xsan software seems to give you a special filesystem for SAN use (at least I read this on their web page). So if Apple says that it is not well suited for many small files, I would not use it for that.

Another instance of a SAN filesystem that I do happen to be familiar with is IBM's: http://www-03.ibm.com/servers/storage/software/virtualization/sfs/index.html

This filesystem also lives above the FCP (Fibre Channel) protocol, forming a filesystem including multipathing elements and concurrent-access strategies. Still, you have to distinguish the block-level access to SAN devices from the filesystems built above them. It is true that SAN is marketing speech for all kinds of things.

Pascal
Re: missing plain authentication?
Ross Boylan [EMAIL PROTECTED] wrote:

No; that was a transcription error. Sorry about that. So the original file has allowplaintext: yes

This is the traditional IMAP plaintext login without SASL. IMAP4 has plaintext authentication as a builtin. The syntax is

    A001 LOGIN username password

You may replace the tag A001 with a dot, but this will only work with Cyrus, not with many other IMAP servers out there.

Pascal
Re: lmtp over tcp sockets, access denied and lmtp error: Message contains invalid header
Rudy Gevaert [EMAIL PROTECTED] wrote:

Now postfix complains:
Jul 21 15:25:42 oliebol postfix/qmgr[7484]: 2A9BA7458: from=[EMAIL PROTECTED], size=348, nrcpt=1 (queue active)
Jul 21 15:25:42 oliebol postfix/lmtp[7490]: 2A9BA7458: to=[EMAIL PROTECTED], relay=none, delay=4236, status=deferred (connect to mail2.ugent.be[157.193.71.18]: Connection refused)

Postfix seems to connect to the wrong port number. What's in your master.cf and main.cf regarding that lmtp transport?

I want to test my lmtp setup, but this fails too:

No, it did not :)

    oliebol:/etc/postfix# telnet mail2.ugent.be 2003
    Trying 157.193.71.18...
    Connected to mail2.ugent.be.
    Escape character is '^]'.
    220 mail2.ugent.be LMTP Cyrus v2.3.7 ready
    LHLO foo.edu
    [...]
    mail from:[EMAIL PROTECTED]
    250 2.1.0 ok
    rcpt to:[EMAIL PROTECTED]
    250 2.1.5 ok
    DATA
    354 go ahead
    daf
    [...]
    .
    554 5.6.0 Message contains invalid header

That is normal: "daf" is not an allowed mail header. You seem to have configured 2003 as your lmtpd port in /etc/services on your Cyrus host. Does the Postfix lmtp client use the same port?

Pascal
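For the record: with lmtpd listening on TCP port 2003 as in the session above, the matching Postfix configuration would look something like this (a sketch only; whether you set mailbox_transport, fallback_transport or a transport map entry depends on your setup):

```
# main.cf fragment -- host and port taken from the telnet test above
mailbox_transport = lmtp:mail2.ugent.be:2003
```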
Re: help on failed upgrade
[EMAIL PROTECTED] wrote:

Hi there! Last week we upgraded our server from cyrus 2.1.15 (SuSE 9.0) to 2.2.21 (SuSE 10.1). We had some /var/lib/imap problems that prevented us from even starting cyrus. We managed to boot it up after deleting some databases in db/ and backup/, following some thread in a forum. The result: all stored email messages lost their state (read, unread, ...)

The default for the seen DB was changed from flat to skiplist, as my memory recalls, in that SuSE package. You should see database errors in your syslog coming from lmtpd and imapd. You need to convert all your seen databases to skiplist if you want to use the default configuration. New states will not be added because the seen_db engine cannot even read your old flat files.

Workaround (ONLY if you see such errors in your log stating that seen_db is not readable): put

    seenstate_db: flat

in your imapd.conf and do an rcimap restart, and things should just continue as they did.

Pascal
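Alternatively, instead of switching back to flat, you can convert the seen databases with cvt_cyrusdb, which ships with Cyrus. A sketch, assuming the usual SuSE paths and a hypothetical user "bob" (repeat for every seen file, with cyrus stopped, as the cyrus user):

```shell
# Convert the flat seen file to skiplist, then move it into place
cvt_cyrusdb /var/lib/imap/user/b/bob.seen flat \
            /var/lib/imap/user/b/bob.seen.skiplist skiplist
mv /var/lib/imap/user/b/bob.seen.skiplist /var/lib/imap/user/b/bob.seen
```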
Re: Marking Messages going to Cyrus as read procmail.
Hexren [EMAIL PROTECTED] wrote:

Now what I think I need is the following: based on a certain header I want to deliver the mail and have cyrus mark it as read immediately.

Why don't you consider using sieve for that task?

Pascal
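For example, a sieve script along these lines could do it (a sketch only: the header name is made up, and setflag assumes a Cyrus build with the imapflags sieve extension enabled):

```sieve
require ["imapflags"];

# Mark matching mail as read on delivery; "X-Mark-Read" is a hypothetical header
if header :contains "X-Mark-Read" "yes" {
    setflag "\\Seen";
}
```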
Client authentication via client certificate on ssl/tls
Hi, has anybody on the list already had the idea of using information from a client certificate for authentication in imapd? There could be 3 solutions:

1. The TLS layer passes information from the presented client certificate to imapd, so a normal anonymous login would be sufficient - the imapd process would use an attribute of the client certificate as the user id.
2. Using an external X.509 SASL mechanism - but this requires special software on the client side, and you would present your client certificate twice: first in the SSL handshake and second via AUTHENTICATE.
3. You could use Kerberos 5 and a special signon program to get your ticket, and use GSSAPI as the SASL mechanism.

1. would have to be done in the imapd and pop3d code. 2. would have to be done via an X.509 auxprop plugin or an external X.509 authenticator. 3. means building a Kerberos 5 infrastructure around failover KDCs. This may work well for Windows boxes, but what about other operating systems?

It would be nice to use client-cert SSL on the whole campus to log in to services.

Pascal
Re: reading quotas
Colin Bruce [EMAIL PROTECTED] wrote:

Dear All, this may be a stupid question, but is there any reason why someone should not be able to obtain their own e-mail quota? If I do something like

    telnet imaphost 143
    . login ccx004 password
    . getquota user.ccx004

I get NO Permission denied

GETQUOTA is reserved for users with admin privileges (admins: in imapd.conf). Use GETQUOTAROOT instead to get your quota usage:

    telnet server 143
    . login myusername mypassword
    . getquotaroot INBOX
    * QUOTAROOT INBOX INBOX
    * QUOTA INBOX (STORAGE 519605 3145728)
    . OK Completed

Pascal
Re: complett relocation from old to new server and cyrus version
Hans M. Schleidt [EMAIL PROTECTED] wrote:

Hi. I had to set up a fully new server with a fresh cyrus. Now I must bring the old messages to the new cyrus; simply copying went wrong. What are the steps to do that? Which files (mailboxes.db, cyrus.cache, cyrus.header, and so on) should I bring to the new system?

1. Dump your mailboxes database on the old system:

       ctl_mboxlist -d > mailboxes.dump

   Copy the resulting mailboxes.dump to the new server.

2. Copy the imap mail store 1:1 to the new server. Example (here, on both machines, the store is in /var/spool/imap; the target directory must exist and be writable by the cyrus user):

       cd /var/spool/imap
       tar cvpf - . | ssh [EMAIL PROTECTED] "cd /var/spool/imap; tar xpf -"

3. Copy the following directories to the new system (do it like you did with your mail store):
   - Sieve (e.g. /var/imap/sieve)
   Databases:
   - Quotas (e.g. /var/imap/quota)
   - Seen/Subscribed (e.g. /var/imap/user)

   Warning! This will only work if your new cyrus system uses the same database backends as the old one! Losing the databases results in the following:
   - Quotas: the user will not have any quota restriction anymore.
   - Seen/Sub: the user will not see which messages are marked read, nor will he know which mailboxes he was subscribed to.

4. Reconstruct a new mailboxes database on the new system:

       ctl_mboxlist -u < mailboxes.dump

5. Run reconstruct on the new system:

       reconstruct -f

6. If you used quotas on the old system, run on the new one:

       quota -f

7. Tricky part: the SASL user database. If sasldb was used: if your new SASL lib uses the same database backend as the old one, you may simply copy your old sasldb to the new server when these conditions are met:
   - Your realm is the same as on the old server.
   - The database backend is the same.

   You may still set the same realm/imap server name as the old one in your imapd.conf. If your IMAP realm is your hostname, or your SASL database backend is a different one, you must use a program to dump the contents of the old sasldb.
As passwords are stored in cleartext, it is not very difficult to accomplish that.

Pascal
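The migration steps above (1-6) can be sketched as one script; the hostname and paths are assumptions, and everything Cyrus-side should run as the cyrus user with the server stopped:

```shell
NEWHOST=newserver.example.org   # hypothetical target host
SPOOL=/var/spool/imap           # same layout assumed on both machines

# 1. Dump the mailboxes database and ship it over
su cyrus -c "ctl_mboxlist -d" > mailboxes.dump
scp mailboxes.dump $NEWHOST:/tmp/

# 2. Copy the mail spool 1:1, preserving permissions
(cd $SPOOL && tar cpf - .) | ssh $NEWHOST "cd $SPOOL && tar xpf -"

# 3. Copy sieve scripts and the quota/seen/subscription databases
(cd /var/imap && tar cpf - sieve quota user) | \
    ssh $NEWHOST "cd /var/imap && tar xpf -"

# 4.-6. Then, on the new system:
#   su cyrus -c "ctl_mboxlist -u < /tmp/mailboxes.dump"
#   su cyrus -c "reconstruct -f"
#   su cyrus -c "quota -f"
```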
Re: Mailbox list in txt file
Quoting Bartosz Jozwiak [EMAIL PROTECTED]:

This is what I get:
    bartek:/# su - cyrus
    bartek:/# /usr/cyrus/bin/ctl_mboxlist -d
    fatal error: must run as the Cyrus user

Your cyrus user account does not have a valid shell or home directory - the su - cyrus apparently dropped you straight back to a root shell, so ctl_mboxlist still ran as root. Check this.

Pascal Gienger
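Two quick things to check (a sketch; paths as in the transcript above):

```shell
# 1. Does the cyrus account have a usable shell and home directory?
getent passwd cyrus

# 2. If its shell is /bin/false, force one for this command only:
su cyrus -s /bin/sh -c "/usr/cyrus/bin/ctl_mboxlist -d"
```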
Re: Is Reiserfs better than ext3
mb [EMAIL PROTECTED] wrote:

PS: I know some people say XFS, JFS etc. are brilliant, but my local experience is that although they may be faster on Linux 2.4, they are more prone to corruption... (haven't tried either in the last 6 months tho')

As for XFS, I have never had any problems with it (version 1.2.0). We are running a 600 GB mail spool with it. I would call it reliable.

Pascal
Re: Changelog, LDAP features
Thomas Luzat [EMAIL PROTECTED] wrote:

a) Fetching Sieve scripts from LDAP (would guess not)
b) Fetching quota settings from LDAP (same)
It's probably best to write some LDAP-Cyrus gateway for that, right?

For the University of Constance I wrote a little daemon program which synchronizes OpenLDAP with Cyrus databases (and mailboxes) - because they did not want to compile the postfix and cyrus things themselves (lack of support). It uses the rather simple OpenLDAP replication mechanism to accomplish this. For the staff there, the postfix/cyrus server is completely in the LDAP tree, including passwords, quotas, forwards and autoreplies (via a special autoreply program, also written by myself, because most of those lying around send out too many autoreplies (to lists, error messages, ...) and could not take the autoreply message via LDAP; here we use the standard autoreply LDAP attributes). Mailboxes get created automatically when an LDAP entry comes in, and a mailbox gets deactivated when its entry is removed. So the user support personnel can just create an LDAP entry to make a valid postfix alias and cyrus mailbox available immediately. It works like a charm, but it is not very elegant (I must admit). I did not find any other solution, so I wrote my own. I packaged the whole system into a package named priscilla - Pascal
Re: Changelog, LDAP features
Simon Matter [EMAIL PROTECTED] wrote:

The University of Athens is doing some cool work here: http://email.uoa.gr/projects/cyrusmaster/

I'll take a look at it!

What is the license of your package, can it be downloaded somewhere?

Yes, it will be next week. I have had enough time to test it; it has worked in production for approx. 15000 users for 6 months now - the daemon logs its activity via syslog. It is not a "click here to install" package, though; you will have to read readme files and edit some configurations with a text editor... See it as glue between cyrus and OpenLDAP.

Pascal
Re: Mailbox
Norman Zhang [EMAIL PROTECTED] wrote:

Hi, I created a mailbox nzhang by
1. cyradm localhost
2. cm user.nzhang
3. quit
4. saslpasswd nzhang

Err, it should be saslpasswd2 - otherwise you are using a SASLv1-linked imap server, e.g. Cyrus 1.6.x...

But I don't see nzhang created under /var/spool/mail. I did set mail_spool_directory = /var/spool/mail in /etc/postfix/main.cf. Is there something trivial that I'm missing? Do I need smtpd.conf (I can't find this file)? Won't postfix take care of mail transfer? mail_transport = lmtp:$myhostname

No... Use

    fallback_transport = lmtp:unix:/path/to/your/lmtp/socket/lmtp

if you want mail delivered into your cyrus imap spool while local unix users still have their /var/spool/mail files. The Cyrus mail system does NOT use normal mail files; it has its own spool.

Pascal
Re: Hight Aviability and Cyrus
Earl R Shannon [EMAIL PROTECTED] wrote:

Bonjour. While there have been a couple of mentions that high availability is being considered by CMU, it has not been implemented natively in the IMAP server. In other words, the IMAP server does not do high availability. While it does have a cluster implementation (the murder), this provides scalability, not high availability.

The only thing which can be set up in a timely, doable manner is a failsafe cluster configuration. Use e.g. a little SAN network with dual FC switches so that both computers forming the IMAP cluster see the same partitions, and use some kind of cluster software on top. Writing an agent for Veritas Cluster is not difficult, nor is it for the Linux-HA project. Doing this for parallel clusters is quite a lot of work, because you would have to keep much data in sync.

Pascal
Re: Postfix can't connect to lmtp
[EMAIL PROTECTED] wrote:

(connect to /var/imap/socket/lmtp[/var/imap/socket/lmtp]: Permission denied)

The file system permissions of your socket file /var/imap/socket/lmtp (*) are set in a way such that the postfix process (running as user postfix) simply cannot write to it.

Pascal

(*) The name and location of the socket are defined in your cyrus.conf, lmtpd(8) via the unix socket transport.
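A quick way to verify and fix this (a sketch; the group name "mail" is an assumption, check who actually owns the socket on your system, and note that the directory /var/imap/socket must also be searchable by postfix):

```shell
# Inspect the socket's owner, group and mode
ls -l /var/imap/socket/lmtp

# e.g. give the postfix user access via the group owning the socket
usermod -a -G mail postfix
```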