Re: [Dovecot] Disconnect users for a distinct period of time?
Quoting Michael Grimm trash...@odo.in-berlin.de:

> Hi -- Today, I've learned how to disable incoming mail delivery to dovecot
> (accepted by postfix) temporarily. Now I would like to know if there is a
> way to make a "doveadm kick -A" a permanent disconnect until one allows
> reconnects again. This should be done *without* shutting down dovecot or
> *extensive* re-configuration. A tool/functionality along the lines of
> "doveadm block -u joe -m 'sorry: running maintenance, please come back
> later'" would be ideal.

What I've been doing so far: I configure dovecot 2.1 with a deny-hosts file
(in auth-deny.conf.ext), and then I can append a name to that file to block
a user, and remove it to allow them back in. Since this file is a simple
text file with one username/address per line, it is very easy to manage.
No idea if that would work for you, since your usage is for a different
reason than mine.

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
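For anyone wanting to copy the scheme: dovecot's deny passdb just checks
usernames against a flat file, so "blocking" and "unblocking" is plain text
editing. A small sketch (the file path and the auth-deny.conf.ext wiring
shown in the comments are examples, not the poster's exact config):

```shell
#!/bin/sh
# Sketch of managing a one-username-per-line deny file.
# In auth-deny.conf.ext such a file is typically wired up roughly as:
#   passdb {
#     driver = passwd-file
#     deny = yes
#     args = /etc/dovecot/deny-users
#   }
DENY=$(mktemp)   # stands in for /etc/dovecot/deny-users

echo "joe" >> "$DENY"            # block joe (takes effect on next login)
grep -qx "joe" "$DENY" && echo "joe is blocked"

sed -i '/^joe$/d' "$DENY"        # let joe back in
grep -qx "joe" "$DENY" || echo "joe is unblocked"
```

Note this only stops new logins; existing sessions would still need a
`doveadm kick` as discussed above.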
Re: [Dovecot] Performance-Tuning
Quoting Peer Heinlein p.heinl...@heinlein-support.de:

> It would be MUCH easier if Dovecot could read maildir: or mdbox: from LDAP
> attributes. In this case the whole migration process could be split up
> into groups. Unfortunately we have shared folders and I don't know a way
> to read the *remote* mailbox format from LDAP... So having users with
> maildir and mdbox mixed up will break their shared folders...

May not work for you, but... The way I did this when I migrated was to run
two dovecot instances, and have the Perdition software on a front-end (it
could be on the same machine instead of a front-end; I just happen to have
a front-end machine to do it). Perdition will query LDAP for the info per
user/connection, and send the connection to the correct dovecot instance
based on the LDAP lookup. Worked for me; your mileage may vary...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Performance-Tuning
Quoting Peer Heinlein p.heinl...@heinlein-support.de:

> The problem is: you're running into problems with shared folders. You
> can't read your neighbor's storage engine from LDAP.

Yes, but I didn't have any shared folders, so it worked. Your mileage may
vary, as I said... :)

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] GFS (Was: dovecot Digest, Vol 93, Issue 41)
Quoting Alan Brown a...@mssl.ucl.ac.uk:

> I can't speak for OCFS2, but after several years' experience with the
> filesystem I strongly recommend NOT using GFS/GFS2.

GFS2 works for me. GFS was a bit slow, but GFS2 meets my (perhaps low)
needs. Of course, my use case is different (e.g., I don't use it with
quotas; in fact I've stayed away from it anywhere I need quotas).

> Its locking model is incredibly slow (500 locks/second on a filesystem
> mounted with quotas enabled and noatime) and results in dire performance -
> plus

Did you raise plock_rate_limit from its default? The default will indeed
suck.

> there's a known crash vulnerability if files are repeatedly renamed in
> large directories (this bites us regularly...)

Never had that happen... But then, my applications don't repeatedly rename
files in large directories (e.g., I don't use maildir) and I don't use
quotas on GFS.

> GFS2 isn't an enterprise filesystem by any stretch of the imagination,
> despite what a number of enthusiastic salespeople might try to convince
> you of. We're lucky to keep the GFS servers up for more than a week at a
> time.

GFS isn't for all applications. I've used it for 2 different applications
for which it has proven well suited. Every discussion I've seen about GFS
has always said "don't use it with large directories of small files", so if
that is your use case, then you must be ignoring common wisdom... Given the
right use case (including dovecot with mbox and dovecot indexes) it seems
to work fine... I've used it for another project also without problems
(it has been running for years now in both cases).

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
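For reference, the plock rate limit mentioned above is a cluster.conf
setting, not a mount option. A sketch for a RHEL5-era cman/gfs_controld
stack (attribute names should be verified against your cluster version;
the values here are illustrative):

```shell
# Fragment of /etc/cluster/cluster.conf (illustrative):
#
#   <gfs_controld plock_rate_limit="0" plock_ownership="1"/>
#
# plock_rate_limit="0" removes the POSIX-lock throttle (the default limit
# is low, which is what makes locking look "incredibly slow");
# plock_ownership="1" lets a node cache plocks it uses repeatedly.
```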
Re: [Dovecot] Best Cluster Storage
Quoting Patrick Westenberg p...@wk-serv.de:

> Just to get it right: DRBD for shared storage replication is OK?

Yes, but only if done correctly. ;) There is some concern on Stan's part
(and mine) that you might do it wrong (e.g., in a VM guest rather than at
the VM host, etc.).

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Best Cluster Storage
Quoting Patrick Westenberg p...@wk-serv.de:

> Eric Rostetter wrote:
>> Quoting Patrick Westenberg p...@wk-serv.de:
>>> Just to get it right: DRBD for shared storage replication is OK?
>> Yes, but only if done correctly. ;) There is some concern on Stan's part
>> (and mine) that you might do it wrong (e.g., in a VM guest rather than
>> at the VM host, etc.).
> My storage _hosts_ will be dedicated systems of course :)

No problem then... I run dovecot off drbd+gfs2 now without problems (no
virtual machines involved though, just physical machines).

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Best Cluster Storage
Quoting Stan Hoeppner s...@hardwarefreak.com:

> DRBD is for mirroring physical devices over a network. You might be able
> to do DRBD inside a VM guest, but to what end? What sense does it make to
> do so?

It doesn't really make sense, and it can cause problems... What problems
depends on your VM implementation (Xen, KVM, VMware, VirtualBox, etc.).
Some examples: mount table propagation between hosts/guests causing
problems unmounting drbd partitions or even shutting down the VM; running
lots of drbd instances instead of only one (as Stan mentioned), which can
mean more processes and more (buffer) memory being used than is needed,
plus more configuration files; performance issues of all kinds; and so on.

Think about it: you are increasing the number of processes in the VM
guests, increasing the amount of memory used in the VM guests, increasing
traffic to the virtual switch/bridge, potentially increasing the complexity
of your configuration, possibly taking a big performance hit (depending on
VM type and config), limiting your flexibility, increasing the difficulty
of debugging and performance tuning, and so on. Is it worth it?

Plus, if you run DRBD in the VM, then the VM must run DRBD. If you run DRBD
in the physical host, you can then export it to any VM, even a VM that
doesn't support DRBD. Things like this can impact VM flexibility
(migrations, OS support, backups, etc.).

Can you do it? Yes. Can you get away with it? Probably. Should you do it?
No. Would I do it? Never...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Best Cluster Storage
Quoting Jonathan Tripathy jon...@abpni.co.uk:

> Generally, I would give an LVM LV to each of my Xen guests, which
> according to the DRBD site, is OK:
> http://www.drbd.org/users-guide/s-lvm-lv-as-drbd-backing-dev.html
> I do not use img files with loopback devices. Is this a bit better now?

There are implications of whether you do drbd+lvm or lvm+drbd when it comes
to things like LVM snapshots, growing/shrinking LVM volumes, etc. Some
thought may be needed to make sure you configure it in such a way as to
meet your needs...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
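As an illustration of the LV-as-backing-device layout from that link, a
minimal DRBD resource definition might look like this (resource name,
device paths, hostnames, and addresses are all made up for the example):

```shell
# /etc/drbd.d/mail.res -- sketch only; names and IPs are placeholders.
#
#   resource mail {
#     device    /dev/drbd0;
#     disk      /dev/vg0/mail;   # the LVM LV used as the DRBD backing device
#     meta-disk internal;
#     on node-a { address 10.0.0.1:7789; }
#     on node-b { address 10.0.0.2:7789; }
#   }
#
# drbd+lvm (DRBD on top of an LV, as here) keeps snapshots/resizes local to
# each node; lvm+drbd (LVM on top of /dev/drbd0) replicates the whole VG.
```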
Re: [Dovecot] SSD drives are really fast running Dovecot
Quoting David Jonas djo...@vitalwerks.com:

> I've been considering getting a pair of SSDs in raid1 for just the dovecot
> indexes.

While raid-1 is better than the raid-0 of the previous poster, do you
really want to slow down your fast SSDs with software raid-1 on top of
them?

> The hope would be to minimize the impact of pop3 users hammering the
> server. Proposed design is something like 2 drives (ssd or platter) for OS
> and logs,

The OS is mostly cached, so any old drive should work. Logs are lots of
writes, so a dedicated drive might be nice (no raid). Be sure to tune the
OS for the logs...

> 2 ssds for indexes (soft raid1),

I'd probably just go with one drive, maybe have a spare as a cold- or
hot-spare in case of failure.

> 12 sata or sas drives in RAID5 or 6 (hw raid, probably 3ware) for
> maildirs.

I'd say either raid-6 or raid-10 for this, depending on budget and size
needs.

> The indexes and mailboxes would be mirrored with drbd. Seems like the best
> of both worlds -- fast and lots of storage.

drbd to where, at what level? There was some other discussion about this
which basically said "Don't use drbd to mirror between VM guests", which I
agree with. If you want to do this, use DRBD between VM servers (physical
hosts) and not between VM guests (virtual hosts). I do use a VM cluster
with DRBD between the physical hosts, but not for mail services, and it
works fine. Doing DRBD inside the virtual hosts though would not be good...

> Does anyone run a configuration like this? How does it work for you?

No. I do 2 nodes, with DRBD between them, using GFS on them for both the
mbox files and indexes... No virtualization at all... No SSD drives at
all...

> Anyone have any improvements on the design? Suggestions?

Only my advice about where to drbd if you are virtualizing, and what raid
levels to use... But these are just my opinions and your mileage may
vary...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Best Cluster Storage
Quoting Jonathan Tripathy jon...@abpni.co.uk:

> I'm hearing different things on whether dovecot works well or not with
> GFS2.

Dovecot works fine with GFS2. The question is performance of Dovecot on
GFS2. I do dovecot on GFS2 (with mbox instead of maildir) and it works fine
for my user load... Your user load may vary, and using maildir may make
your results different than mine.

> Of course, I could simply replace the iSCSI LUN above with an NFS server
> running on each DRBD node, if you feel NFS would work better than GFS2.

Either should work. I'd use GFS2 myself, unless you have some compelling
reason not to...

> Either way, I would probably use a crossover cable for the DRBD cluster.

I use 2 1Gb links bonded together, over crossover cables...

> Could maybe even bond 2 cables together if I'm feeling adventurous!

Yes, recommended. That is what I do on all my clusters.

> The way I see it, there are 2 issues to deal with: 1) which shared disk
> technology is best (GFS2 over a LUN, or a simple NFS server) and 2) what
> is the best method of HA for the storage system. Any advice is
> appreciated.

"Best" is relative to workload, budget, expectations, environment, etc. And
sometimes it is just a religious thing. So I don't think you will get much
of a consensus as to which is "best", since it really depends...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Best Cluster Storage
Quoting Jonathan Tripathy jon...@abpni.co.uk:

>>> Either way, I would probably use a crossover cable for the DRBD cluster.
>> I use 2 1Gb links bonded together, over crossover cables...
>>> Could maybe even bond 2 cables together if I'm feeling adventurous!
>> Yes, recommended. That is what I do on all my clusters.
> How do you bond the connections? Do you just use Linux kernel bonding? Or
> some driver level stuff?

Linux kernel bonding, mode=4 (IEEE 802.3ad Dynamic link aggregation).

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Best Cluster Storage
Quoting Henrique Fernandes sf.ri...@gmail.com:

> For drbd you only need a heartbeat, I guess.

Fencing is not needed for drbd, though recommended.

> But to use gfs2 you need a fence device; ocfs2 does not require one, since
> the ocfs2 driver takes care of it: it reboots if it thinks it is
> desynchronized.
>
> []'s
> f.rique

gfs2 technically requires fencing, since it technically requires a cluster,
and Red Hat clustering requires fencing. Some people get around this by
using manual fencing, though this is not recommended for production as it
could result in a machine staying down until manual intervention, which
usually conflicts with the uptime desire for a cluster... But that is up to
the implementor to decide on...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
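For completeness, this is roughly what the manual-fencing escape hatch
looks like in a RHEL5-era cluster.conf (a sketch only; fence_manual is
exactly the "not recommended for production" option discussed above, and
the device name is made up):

```shell
# Fragment of /etc/cluster/cluster.conf (illustrative):
#
#   <fencedevices>
#     <fencedevice agent="fence_manual" name="human"/>
#   </fencedevices>
#
# A failed node then stays fenced until an admin acknowledges it with
# fence_ack_manual, which is why real deployments use a power/IPMI fence
# agent instead of a human.
```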
Re: [Dovecot] Best Cluster Storage
Quoting Jonathan Tripathy jon...@abpni.co.uk:

>> Linux kernel bonding, mode=4 (IEEE 802.3ad Dynamic link aggregation).
> I'm guessing that since you're using a crossover cable, by just setting up
> the bond0 interfaces as usual (as per this article:
> http://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-interfaces-nic-into-single-interface.html),
> you didn't need to do anything else, since there is no switch?

I use it via crossover for my DRBD replication; I also use it to a switch
for my public interfaces. For the crossover, just configure it on each
Linux node. For the public interface to the switch, configure it the same
way on each Linux box, then also configure the switch for bonding. Not
rocket science either way on the Linux end. Depending on your switch, it
_might_ seem like rocket science on the switch end, if using a switch. ;)

So, in answer to your question: no, I don't need to do anything else with
the crossover cable implementation.

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
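On a RHEL/CentOS 5 box of the kind discussed in this thread, that host-side
configuration is just a bonding module option plus ifcfg files (device
names and addresses below are examples, not anyone's actual config):

```shell
# /etc/modprobe.conf (CentOS 5 era; illustrative):
#   alias bond0 bonding
#   options bond0 mode=4 miimon=100
#
# /etc/sysconfig/network-scripts/ifcfg-bond0:
#   DEVICE=bond0
#   IPADDR=10.0.0.1
#   NETMASK=255.255.255.0
#   ONBOOT=yes
#
# /etc/sysconfig/network-scripts/ifcfg-eth2 (and likewise for eth3):
#   DEVICE=eth2
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes
```

Note that mode=4 (802.3ad) negotiates LACP with the far end; back-to-back
over crossover cables this works because the peer Linux node speaks LACP
too, while a switch port needs LACP configured explicitly.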
Re: [Dovecot] Maildir feature I'd like to see - SSD for newer messages
Quoting Marc Perkel m...@perkel.com:

> Some new SSDs use SATA 3 (6 Gb/sec) with 355 MB/sec read speeds and 215
> MB/sec write. Put these in raid 0 and it screams! Can you imagine how fast
> that would be?

I'd never raid-0 anything important...

> What would be nice is if new email were on faster drives, with old email
> being migrated to larger, cheaper mechanical storage. Perhaps messages
> over a month old? From dovecot's perspective it would sort of all look the
> same, but maybe once a week a script would run migrating older messages to
> slower media.

I used to do something similar, in that the user's inbox was on fast disk,
and all their other folders (assumes IMAP for the most part) were on slower
disks. A cronjob would run once a month that locked the inbox, selected any
mail older than 6 months, and moved it to a folder called "old-mail" --
thus migrating any 6+ month old mail from fast to slow storage... I suppose
you could adapt that, maybe with a shorter time period (1 month would seem
okay; not sure about anything shorter).

I know this isn't exactly what you want or asked for, but it is an idea
based on a past implementation which worked well.

> I'm not sure what it would take to make dovecot seamlessly access email
> from two different locations, or if this is practical. Just wanted to
> throw the idea out there to see if something sticks.

Well, different folders make it a snap... If you don't want to re-folder,
then it may not be so easy (I'll let someone else answer that).

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
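The cron job described above boils down to moving messages past an age
cutoff from fast to slow storage. A toy sketch of the idea using a
maildir-style one-file-per-message layout (the real job locked an mbox
inbox instead, and all paths here are stand-ins):

```shell
#!/bin/sh
# Toy demo: migrate messages older than ~180 days from fast to slow storage.
FAST=$(mktemp -d)   # stands in for the fast-disk mail directory
SLOW=$(mktemp -d)   # stands in for the slow "old-mail" storage

touch -d '200 days ago' "$FAST/old-msg"   # an old message (GNU touch)
touch "$FAST/new-msg"                     # a fresh message

# Move anything not modified within the last 180 days:
find "$FAST" -type f -mtime +180 -exec mv {} "$SLOW/" \;

ls "$SLOW"    # only old-msg has been migrated
```

With mbox, the equivalent step needs the folder locked while messages are
copied out and expunged, which is why the original ran as a monthly batch.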
Re: [Dovecot] Question about slow storage but fast cpus, plenty of ram and dovecot
Quoting Stan Hoeppner s...@hardwarefreak.com:

> Also, due to the potential size of the index files (mine alone are 276 MB
> on an 877 MB mbox), you'll need to do some additional research to see if
> this is a possibility for you.

That's rather high based on my users... My largest user has 110M of
indexes. The next highest users are 54M, 52M, 43M, 38M, 32M, 30M, 27M, 23M,
22M, and then tons of users in the teens... So your situation doesn't seem
to be the norm... I guess it depends on your site (users, quotas, number of
folders per user, etc.).

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Question about slow storage but fast cpus, plenty of ram and dovecot
Quoting Javier de Miguel Rodríguez javierdemig...@us.es:

> - I understand that the indexes should go to the fastest storage I own.
> Somebody talked about storing them in a ramdisk and then backing them up
> to disk on shutdown. I have several questions about that:
>
> - In my setup I have 25,000+ users and almost 7,000,000 messages in my
> maildirs. How much memory would I need in a ramdisk to hold that?

Are you using dovecot with them now? If so, then you can figure out how
much they are currently using. Otherwise, well, who knows? It will depend
on the clients used (for dovecot.index.cache), as well as how often they
are accessed (for the transaction log), and so on. Maybe Timo or someone
more into the inner workings can give more details.

> - What happens if something fails? I think that if I lose the indexes
> (e.g., in a kernel crash) the next time I boot the system the ramdisk will
> be empty, so the indexes will be recreated. Am I right?

Yes. If the ramdisk is full, it will switch to INDEX=MEMORY automatically
for all new sessions until space frees up. And if you crash without saving
the indexes, it will rebuild them when the users reconnect.

> - If I buy an SSD system and export that little and fast storage via
> iSCSI, does zlib compression apply to the indexes?

I don't think so, but maybe Timo can say for sure.

> - Any additional filesystem info? I am using ext3 on RHEL 5.5; in RHEL 5.6
> ext4 will be supported. Any performance hint/tuning (I already use
> noatime, 4k blocksize)?

Only the ext3 commit interval (raising it might lower your I/O on the SAN)
I mentioned earlier... But of course, it raises your chances of losing data
in a crash (i.e., you could lose more data, since it flushes less often).
But it is a good tradeoff sometimes (I always raise it on my laptops in
order to cut down on battery usage).

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
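Concretely, the commit-interval tweak is just an ext3 mount option (the
device, mount point, and value of 30 below are examples; ext3's default
commit interval is 5 seconds):

```shell
# /etc/fstab -- flush the ext3 journal every 30s instead of the default 5s
# (illustrative device and mount point):
#
#   /dev/sdb1  /var/dovecot/indexes  ext3  noatime,commit=30  0 2
#
# Or on a live system, without a remount from scratch:
#
#   mount -o remount,commit=30 /var/dovecot/indexes
```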
Re: [Dovecot] Question about slow storage but fast cpus, plenty of ram and dovecot
Quoting Stan Hoeppner s...@hardwarefreak.com:

[snipped good bare metal recommendations]

> Eric, you missed up above that he's running Dovecot on an ESX cluster, so
> SSDs or any hardware dedicated to Dovecot isn't possible for the OP.

Well, it is true I know nothing about vmware/ESX. I know in my virtual
machine setups, I _can_ give the virtual instances access to devices which
are not used by other virtual instances. This is what I would do. Yes, it
is still virtualized, but it is dedicated, and should still perform pretty
well -- faster than shared storage, and in the case of SSD faster than
normal disk or iSCSI.

> Javier, email is an I/O intensive application, whether an MTA spool, an
> IMAP server, or a POP server. The more concurrent users you have, the
> greater the file I/O. Thus, the only way to decrease packets across your
> iSCSI SAN is to increase memory so more disk blocks are cached.

He was already asking about throwing memory at the problem, and I think he
implied he had a lot of memory. As such, the caching is there already. Your
statement is true, but it is also a zero-config option if he really does
have lots of memory in the machine.

> But keep in mind, at one point or another, everything has to be written to
> disk, or deleted from disk. So, while you can decrease disk *reads* by
> adding memory to the VM, you will never be able to decrease writes; you
> can only delay them with things like write cache, or in the case of XFS,
> the delaylog mount option. These comments refer to mail file I/O.

And in ext3, the flush rate. Good point, that I forgot about. It is set to
a very small value by default (5 seconds), and can be increased without too
much danger (to say 10-30 seconds).

> IMAP is a very file I/O intensive application. As Eric mentioned, you
> could put your user *index* files in a RAM disk or make them memory
> resident via a Dovecot directive. This would definitely decrease disk
> reads and writes quite a bit.
> Also as Eric mentioned, if you reboot you lose the indexes, and along with
> them Dovecot's key performance enabler. User response times will be poor
> until the indexes get rebuilt.

Assuming normal downtime stats, this would still be a huge win. Since the
machine rarely goes down, it would rarely need to rebuild indexes, and
hence would only run poorly a very small percentage of the time. Of course,
it could run _very_ poorly right after a reboot for a while, but then will
be back to normal soon enough.

One way to help mitigate this if using a RAM disk is to have your shutdown
script flush the RAM disk to physical disk (after stopping dovecot) and
then reload it to the RAM disk at startup (before starting dovecot). This
isn't possible if you use the dovecot index memory settings, though.

> If this is a POP server, then you really have no way around the disk I/O
> issue.

I agree. POP is very inefficient...

> So, either: 1. Increase memory, and/or 2. Move indexes to memory. #1 will
> be less effective at decreasing I/O. #2 will be very effective, but at the
> cost of lost indexes upon reboot or crash. Still some room for filesystem
> tuning, of course, but the above two options are of course the ones that
> will make the largest performance improvement IMHO.
>
> -- Stan

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
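The shutdown/startup trick above can be sketched in a few lines of shell.
The temp directories here stand in for a tmpfs mount and its on-disk
backup; in a real init script you would stop dovecot before the flush and
start it after the reload:

```shell
#!/bin/sh
# Toy demo of persisting a RAM-disk index directory across a reboot.
RAMDISK=$(mktemp -d)  # stands in for a tmpfs mount, e.g. /var/ramdisk/indexes
BACKUP=$(mktemp -d)   # stands in for on-disk backup storage

echo "index data" > "$RAMDISK/dovecot.index"

# Shutdown path: flush the RAM disk to disk (after stopping dovecot).
cp -a "$RAMDISK/." "$BACKUP/"

# Simulate the reboot wiping the RAM disk:
rm -f "$RAMDISK"/*

# Startup path: reload the indexes into the RAM disk (before starting dovecot).
cp -a "$BACKUP/." "$RAMDISK/"

cat "$RAMDISK/dovecot.index"   # the index content survived
```

A crash, of course, skips the shutdown path entirely, which is why the
indexes must always be treated as rebuildable.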
Re: [Dovecot] Question about slow storage but fast cpus, plenty of ram and dovecot
Quoting a...@test123.ru:

> Guys. Who is interested in obvious reasoning?

The same people who are interested in vague questions?

> Let me remind you of the original concrete question.

I am also interested.

> We can exchange CPU/RAM to minimize disk I/O. Should we change to dovecot
> 2.0? Maybe mdbox can help us? Maybe ext4 instead of ext3?

Uhm, well, again, it depends on your needs. POP3? IMAP? Both? Number of
accounts? Can't really help without more details. Maybe I can't help with
more details either, but that is a risk you take on a mailing list.

> 1. Is migration to dovecot 2.0 a good idea if I want to decrease I/O?

Depends on what version you run now, really. But I would recommend it
anyway just on principle.

> 2. Can mdbox help decrease I/O? 3. What is better for mdbox or maildir -
> ext3 or ext4?

Don't know. But you can certainly tune the FS in either case (atime/dtime,
flush rate, external journal, etc.). Some will say XFS is better, etc.
Besides, you can hardly decide the best FS until you know the mailbox
format (mbox, maildir, mdbox, etc.). If you want concrete answers, you need
concrete questions...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Question about slow storage but fast cpus, plenty of ram and dovecot
Quoting javierdemig...@us.es:

> In our vmware ESX cluster we want to minimize disk I/O; what config
> options should we use? We can exchange CPU/RAM to minimize disk I/O.

Depends on what you are doing -- POP3, IMAP, both, deliver or some other
LDA? Do you care if the indexes are lost on reboot or not? You might try
putting the indexes in memory (either via dovecot settings or a RAM disk)
or on SSD. You could also try using ext3/4 with an external journal on an
SSD. The SSD would preferably be an enterprise SSD, but it could be a
lesser SSD, or even a USB memory stick (replaced periodically). Which is
right depends on your needs and budget. Failing that, you should probably
put the indexes on the fastest disks possible (might be local, might be
iSCSI; you'd have to benchmark).

> Should we change to dovecot 2.0?

For any new system, I'd start with the most recent dovecot 2.x version. How
easy that is if you are upgrading depends on what version you run now.

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
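For reference, separating the index location from the mail store is a
one-line mail_location change in dovecot (the paths below are examples;
INDEX=MEMORY is the in-memory variant discussed in this thread, with
indexes rebuilt as needed rather than persisted):

```shell
# dovecot.conf / conf.d/10-mail.conf fragment (illustrative paths):
#
#   # maildir on slow storage, indexes on a fast device or tmpfs:
#   mail_location = maildir:~/Maildir:INDEX=/fastdisk/indexes/%u
#
#   # or keep the indexes purely in memory:
#   mail_location = maildir:~/Maildir:INDEX=MEMORY
```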
Re: [Dovecot] need to block user by IP address (tried denyhosts, xinetd, iptables etc)
Quoting David Ford da...@blue-labs.org:

> I'm not a proponent of fail2ban as I think going straight to the horse's
> mouth is wiser (keep it all in iptables in the first place).

I'm not a fan of fail2ban (tail/grep a log file, really?) but there are
other options which do this kind of thing better and still allow
iptables/routing to handle the issue.

> I agree with Stan that your VPS provider is on the wal-mart list. If no
> other solution avails, code up a quick little ditty that does the actual
> socket listen. If the incoming IP matches an allow list, hand it off to
> dovecot as an exec(); if not, deal with it as you see fit - normally,
> dropping the packet on the floor.
>
> -david

That is a fine solution, if it meets their package requirements. If not,
then something like pam_shield or a similar package may do. But even then,
those types of packages may not meet the site's packaging requirements. I
can't believe a company with a packaging requirement runs Fedora, though.
That seems incongruous to me... Seems like they only have half a clue...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
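Since the original question was blocking one user's IP, the plain-iptables
version of "going straight to the horse's mouth" is a single rule (the
address is a documentation-range placeholder; run as root, and adjust the
port list to the services you actually expose):

```shell
# Block one abusive IP from POP/IMAP at the firewall (illustrative):
#
#   iptables -I INPUT -s 192.0.2.13 -p tcp \
#       -m multiport --dports 110,143,993,995 -j DROP
#
# And remove the rule again later:
#
#   iptables -D INPUT -s 192.0.2.13 -p tcp \
#       -m multiport --dports 110,143,993,995 -j DROP
```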
Re: [Dovecot] dovecot as exchange server?
Quoting Jacek Osiecki jos...@hybrid.pl:

> No, the only thing I need is to simulate an Exchange server, so the iPhone
> would be able to retrieve all emails instead of the last 200 from INBOX :)

Basically he is complaining that when you select "other" as the e-mail
server type, it only allows you BY DEFAULT to see the 50/100/150/200 most
recent messages in a folder. He noticed that when you select "Exchange"
instead of "other" this limit isn't there (it is then actually controlled
by Exchange, and not the iPhone). This is simply an iPhone feature...
Surely he can try the other options (gmail, exchange, etc.) and see if he
can get any to work with his server, but that is not something I am
interested in.

Now, as a possible workaround, if the user doesn't have too much mail:
scroll to the bottom of the 200 messages, and you will see at the end it
says "Load More Messages... N messages total, M unread" where N and M are
your message counts. Assuming those numbers are small, you can just keep
activating that until you have them all. This would be fine if you have
hundreds of messages, or maybe a thousand or so. You wouldn't really want
to do this if you have several thousand, though.

If that doesn't meet your needs, contact Apple, as this is purely an
Apple/iPhone UI issue, and not a mail server issue...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Problems with masteruser
Quoting Неворотин Вадим nevoro...@gmail.com:

> It looks like a big bug. As I understand it, there shouldn't be any
> difference between logging in with a masteruser and a normal login. But on
> my system I can't use masteruser at all due to IMAP errors.

It works for me, with two exceptions:

1) The ACL issue I mentioned.

2) It doesn't work right in my webmail for anything but the e-mail part,
since the webmail retains the user as "master*real" instead of just "real".
So it does log me in and show me the mail, but everything else
(preferences, filters, address book, etc.) doesn't work right. The webmail
has hooks which should allow me to fix this, but I've not had time to
figure that out yet.

So basically, it works for me, with just two little annoyances (one is
dovecot specific, the other is actually my webmail and not dovecot).

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Problems with masteruser
Quoting Неворотин Вадим nevoro...@gmail.com:

> I have a very strange problem with masteruser. See the two logs below:

I can't help, but I can add my observations... Using dovecot 1.2.11 and
master users, I noticed that if I log in to a user (real-user) using the
master user (master-user), then the mailbox listing shows all non-ACL
mailboxes fine, but for ACL-controlled mailboxes it shows those for which
master-user has access, not those for which real-user has access.

This really freaked me out the first time I logged in and a shared folder
showed up when it shouldn't have. I thought I had shared it with everyone!
But I was able to verify that a real login to real-user doesn't see the
shared folder, while a master login to real-user does see it. So it is the
master user login that is messing up the ACL checks.

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] body search very slow since upgrade from 1.0.15 to 1.2.10
Quoting Stan Hoeppner s...@hardwarefreak.com:

> Are you using any FTS plugins? Squat?

Nope, not as far as I know. dovecot -n lists the following plugins:

  mail_plugins(default): zlib acl imap_acl
  mail_plugins(imap): zlib acl imap_acl
  mail_plugins(pop3): zlib
  mail_plugin_dir(default): /usr/lib64/dovecot/imap
  mail_plugin_dir(imap): /usr/lib64/dovecot/imap
  mail_plugin_dir(pop3): /usr/lib64/dovecot/pop3
  plugin: acl: vfile:/var/dovecot/acls
  acl_shared_dict: file:/var/dovecot/indexes/shared_mailboxes

> And are you sure you're doing full body searches, not just headers only?

Yes. Header searches are much faster. :)

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] **OFF LIST** Re: body search very slow since upgrade from 1.0.15 to 1.2.10
Quoting Stan Hoeppner s...@hardwarefreak.com:

> I guess you didn't have enough (any?) people testing mbox body search
> before the v1.2 release. Is everyone but me using maildir? Makes me wish I
> had an extra box so I could do dovecot devel version testing against mbox.
>
> -- Stan

I'm running mbox with 1.2 and not seeing any problems... But that may be
because I threw a lot of hardware at it?

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] **OFF LIST** Re: body search very slow since upgrade from 1.0.15 to 1.2.10
Quoting Stan Hoeppner s...@hardwarefreak.com:

>> I'm running mbox with 1.2 and not seeing any problems... But that may be
>> because I threw a lot of hardware at it?
> Hi Eric. Not sure if even fast hardware searching 11,000+ message count
> mbox'en without an FTS plugin would give speedy results, given Timo's
> discovery of earlier today.

Can't say, as I don't personally have an 11K+ message mbox handy that I can
run the test on. I just don't keep that much mail around of my own... But
it works okay on my 4K to 5K message mbox files, which are the largest I
have... It usually takes about 1 second per 1K messages, so about 4 seconds
for the 4K mbox, 5 seconds for the 5K mbox, etc. Of course, it is a bit
slower when the server is overly busy... I've had it take 10 or 12 seconds
before...

I do have users with large mbox files (12K+) who have never complained, but
that doesn't mean much... I don't know which ones do body searches, or how
often they do them, etc. And not all users complain even when there is
slowness or an outright problem... So, no scientific data... But no
complaints here... Then again, I only have a small number of users with
mboxes that large... We discourage that kind of thing here...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
Go Longhorns!
Re: [Dovecot] Highly Performance and Availability
Quoting Stan Hoeppner s...@hardwarefreak.com: - Add redundancy to the storage using DRDB (I believe a successful strategy with Dovecot is pairs of servers, replicated to each other - run each at 50% capacity and if one dies the other picks up the slack) DRDB is alright for a couple of replicated hosts with moderate volume. Not sure how you define moderate load... Seems like in a 2 node cluster it does a nice job for fairly high load, as long as it is setup correctly. Kind of like what you say about the SAN though, the faster the DRBD interconnect, the better it can handle the load (100Mb, 1Gb, 10Gb, other methods, etc). If you run two load balanced hot hosts with DRDB, and your load increases to the point you need more capacity, a 3rd hot host, expanding with DRDB gets a bit messy. Very much so... I'm running GFS on them, and if I need to add more hosts I'll probably do it via GNBD instead of adding more DRBD connections... Growing by adding more DRBD doesn't seem desirable in most cases, but growing by sharing the existing 2 DRBD machines out (NFS, GNBD, Samba, iSCSI, etc) seems easy, and if the additional machines don't need to raw disk speed it should work fine. If the new machines need the same raw disk speed, well, then you either are going to have to do a complex DRBD setup, or go with a more proper SAN setup. With an iSCSI or FC SAN you merely plug in a 3rd host, install and configure the cluster FS software, expose the shared LUN to the host, and basically you're up and running in little time. Not much different in effort/complexity than my solution of using GFS+GNDB to grow it... But surely better in terms of disk performance to the newly added machine... RedHat claims GNBD scales well, but I've not yet been able to prove that. All 3 hosts share the exact same data on disk, so you have no replication issues If you have no replication issues, you have a single point of failure... Which is why most SAN's support replication of some sort... 
no matter how many systems you stick into the cluster. The only limitation is the throughput of your SAN array. Or licensing costs in some cases... Eric Rostetter is already using GFS2 over DRBD with two hot nodes. IIRC he didn't elaborate a lot on the performance or his hardware config. He seemed to think the performance was more than satisfactory. I've posted the hardware config to the list many times in the past... The performance is very good, but due to price restrictions it is not great. That is because the cost of building it with 15K SAS drives was 3x the cost of using SATA drives, so I'm stuck with SATA drives... And the cost of faster CPUs would have pushed it over budget also... The SATA drives are okay, but will never give the performance of the SAS drives, and hence my cluster is not what I would call very fast. But it is fast enough for our use, which is all that matters. If we need to in the future, we can swap the SATA out for SAS, but that probably won't happen unless the price of SAS comes way down, and/or capacity goes way up... Eric, can you tell us more about your setup, in detail? I promise I'll sit quiet and just listen. Everyone else may appreciate your information. I have two clusters... One is a SAN, the other is a mail cluster. I'll describe the mail cluster here, not the SAN. They are the exact same hardware except for the (number, size, configuration) of disks... I get educational pricing, so your costs may vary, but for us this fit the budget and a proper SAN didn't. 2 Dell PE 2900, dual quad-core E5410 Xeons at 2.33 GHz (8 cores), 8GB RAM, Perc 6/i RAID controller, 8 SATA disks (2 RAID-1, 4 RAID-10, 1 JBOD, and 1 global hot spare), 6 1Gb NICs (we use NIC bonding, so the mail connections use one bond pair, and the DRBD traffic uses another bond pair... the other two are for clustering and admin use). Machines mirror shared GFS2 storage with DRBD. Local storage is ext3. OS is CentOS 5.x.
Email software is sendmail+procmail+spamassassin+clamav, mailman, and of course dovecot. Please don't flame me for using sendmail instead of your favorite MTA... The hardware specs are such that we intend to use this for about 10 years... In case you think that is funny, I'm still running Dell PE 2300 machines in production here that we bought in 1999/2000... We get a lot of years from our machines here... We have a third machine in the cluster acting as a webmail server (apache, Horde software). It doesn't share any storage though, but it is part of the cluster (helps with split-brain, etc). It is a Dell PE 2650 with dual 3.2 GHz Xeons, 3GB RAM, SCSI with software RAID, also running CentOS 5. Both of the above machines mount home directories off the NAS/SAN I mentioned. So the webmail only has the OS and stuff local, the mail cluster has all the inboxes and queues local (but not other folders), and the NAS/SAN has all the home directories (which includes mail folders other than the inbox).
Re: [Dovecot] GlusterFs - Any new progress reports?
Quoting Steve stev...@gmx.net: My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use GFS+DRBD should fit the bill... You need several NICs and cables, but they are dirt cheap... Just 2 machines with the same disk setup, and a handful of NICs and cables, and you are off and running... Can you easily scale that GFS2+DRBD to have more than just 2 nodes? Not really, no. You can have those two nodes distribute it out via GNBD though... Red Hat claims it scales well, but I've not yet tested it... Can all the nodes be active at the same time, or is one node always the master and the other a hot spare that kicks in when the master is down? The free version of DRBD only supports a max of 2 nodes. They can be active-active or active-passive. The non-free version is supposed to support 3 nodes, but I've heard conflicting reports on what the 3rd node can do... You'd have to investigate that yourself... I'm not interested in it, since I don't want to pay for it... (Though I am willing to donate to the project) My proposed solution to the more-than-two-nodes is GNBD... If that doesn't meet your needs, then DRBD probably isn't the proper choice. You didn't mention anything about number of nodes in your original post, IIRC. -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] GlusterFs - Any new progress reports?
Quoting Steve stev...@gmx.net: I have already installed GFS on a cluster in the past, but never on DRBD. Me too (I did it on a real physical SAN before). Hmm... when I started with GlusterFS I thought that using more than two nodes is something that I will never need. GlusterFS is really designed to allow such things... So is GFS. But these are filesystems... DRBD isn't really designed to scale this way. A SAN or NAS is. But now that I have GlusterFS up and running and I am using more than two nodes I really see a benefit in being able to use more than two nodes. For me this is a big advantage of GlusterFS compared to DRBD. You are comparing filesystems to storage/mirroring systems. Not a valid comparison... My proposed solution to the more-than-two-nodes is GNBD... Never heard of it before. Don't like the fact that I need to patch the kernel in order to get it working. GNBD is a standard part of GFS. No more patching than GFS or DRBD in any case... Red Hat and clones all come with support for GFS and GNBD built in. DRBD is another issue... GNBD should be known to anyone using GFS, since it is part of the standard reading (manual, etc) for GFS. If that doesn't meet your needs, then DRBD probably isn't the proper choice. You didn't mention anything about number of nodes in your original post, IIRC. I did not post the original post. I just responded to the original post saying that GlusterFS works for me. I didn't mean to single you out in my reply... Assume the you is a generic you, not specifically aimed at any one individual... Sorry if I misattributed anything to you... Very busy, and trying to reply to these emails as fast as I can when I get a minute or two of time, so I may make some mistakes as to who said what... I'm not trying to convert or convince anyone... I'm just replying and expressing my experiences and thoughts... If GlusterFS works for you, then great. If not, there are alternatives... I happen to champion some, others champion others...
Personally, I like SAN storage, but the price has always kept me from using it (except once, when I was setting it up on someone else's SAN). -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] auth processes
Quoting Stan Hoeppner s...@hardwarefreak.com: Timo Sirainen put forth on 2/18/2010 12:54 PM: Hmm. You could try setting auth_worker_max_request_count=1 to see if that gets rid of the processes after they've handled the request. Restarting IMAP/POP3 mail server: dovecot... Error: Error in configuration file /etc/dovecot/dovecot.conf line 1: Unknown setting: worker_max_request_count Fatal: Invalid configuration in /etc/dovecot/dovecot.conf FYI I'm running 1.2.10 -- Stan Could be a typo, could be your problem, but: auth_worker_max_request_count != worker_max_request_count (i.e., did you forget the auth_ at the start?) -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
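[Editor's note: for reference, the setting Timo suggests goes in dovecot.conf with the auth_ prefix intact. A minimal fragment (the value 1 is the debugging value Timo proposed, not a recommended production setting):]

```
# dovecot.conf (v1.x) -- note the auth_ prefix;
# "worker_max_request_count" alone is not a recognized setting
auth_worker_max_request_count = 1
```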
Re: [Dovecot] GlusterFs - Any new progress reports?
Quoting Ed W li...@wildgooses.com: Anyone had success using some other clustered/HA filestore with dovecot who can share their experience? (OCFS/GFS over DRBD, etc?) GFS2 over DRBD in an active-active setup works fine IMHO. Not perfect, but it was cheap and works well... Lets me reboot machines with no downtime, which was one of my main goals when implementing it... My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use GFS+DRBD should fit the bill... You need several NICs and cables, but they are dirt cheap... Just 2 machines with the same disk setup, and a handful of NICs and cables, and you are off and running... Thanks Ed W -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] So, what about clustering and load balancing?
On 14.2.2010, at 3.31, Stan Hoeppner wrote: I can't see how the metadata sharing of say GFS2 is going to create any serious performance impact on a cluster of dovecot servers using GFS2 and a shared SAN array especially if using maildir. While I use GFS shared on a cluster with dovecot for mbox and think the performance rocks, this really depends on the setup (mbox may be worse than maildir, how much hardware you have for it, how you load balance, etc) and what you consider good/bad performance (what is fast to me might be slow to you) as well as of course scale (might work for 2K users, but what about for 200K or 1.5M). If the load balancing is implemented correctly and a given user is only hitting one dovecot server at any one point in time, there should be few, if any, shared file locks. Thus, no negative impact due to shared locking. This ignores the delivery of mail to the user (again, not so bad for maildir but a killer for mbox). If the delivery is on a separate box from dovecot you can have lock contention... Also there may be other things that cause lock contention, like backups, admin cron jobs to check things, etc. Anyway, I run dovecot in a cluster without any issue, but that is because of my client base and performance expectations (and some real nice hardware). -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] So, what about clustering and load balancing?
Quoting Stan Hoeppner s...@hardwarefreak.com: Eric Rostetter put forth on 2/13/2010 8:39 PM: This ignores the delivery of mail to the user (again, not so bad for maildir but a killer for mbox). If the delivery is on a separate box from dovecot you can have lock contention... You attach the inbound MTA to the FC switch, export the LUN with the GFS2 filesystem and drop new mail to the appropriate folder(s). Which is the same folder/file the imap reads. So you have the MTA delivering to the folder/file, and the dovecot server accessing the same, and hence you have lock contention. The dovecot cluster machines pick it up just as if it were on a local filesystem. This can be done very easily with mbox or maildir and there's no more potential for lock contention than the imap files. If the MTA is delivering to an mbox file, and dovecot is reading/writing to the same mbox file, and the MTA and dovecot are on different machines, then you DO have lock contention. For maildir, less so (to the point it is negligible and not worth talking about really). Also there may be other things to cause lock contention like backups, admin cron jobs to check things, etc. You have all these things with a non clustered filesystem and have to deal with them there anyway, so there's really no difference is there? No, not really... Anyway, I run dovecot in a cluster without any issue, but that is because of my client base and performance expectations (and some real nice hardware). Tell us more about your hardware setup if you don't mind. See the archives for details... Basically a 3 node cluster doing GFS over DRBD on 2 nodes in an active-active setup... Those two nodes do MTA and dovecot (with DRBD+GFS), the 3rd does webmail only (without DRBD/GFS). They hold only the INBOX mbox files, not the other folders which are in the user's home directory. Home directories are stored on a separate 2-node cluster running DRBD in an active-passive setup using ext3.
The mail servers are front-ended by a 2-node active-passive cluster (shared nothing) which directs all MTA/dovecot/httpd/etc traffic. I use perdition to do dovecot routing via LDAP, the MTA can also do routing via LDAP, and I use pound to do httpd routing. Right now it is just a few nodes, but it can grow if needed to really any number of nodes (using GNBD to scale the GFS where needed, and the MTA/Dovecot/httpd routing already in place). Right now, it rocks, and the only thing we plan to scale out is the webmail part (which is easy as it doesn't involve any drbd/gfs/mta/dovecot, just httpd and sql). Also, when do you plan to move to GFS2? Sorry, it is GFS2... Has been since I set it up... In fact, we delayed the project until GFS2 was available (I think it was like 6 months or something). Thanks. -- Stan -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] scalability
Quoting Jonathan Tripathy jon...@abpni.co.uk: How scalable is dovecot? Very, but it will also vary by how you implement it (mbox or maildir, local or remote disk, etc). Do you think a machine with a dual core 2.8GHz processor and 2GB of RAM would do for a business with 600 users? Yes, unless something else is slowing it down. This machine will also be running Postfix. If it is only running Postfix, it will probably be okay most of the time, but if Postfix gets slammed it may slow down dovecot (actually the machine) for a bit... But if in addition to postfix it is doing (local) anti-virus and anti-spam and such, it may not be sufficient, depending on setup. In my experience, dovecot takes almost no resources, Postfix/sendmail only take excessive resources occasionally when a spammer/hacker is hitting the machine hard, and anti-spam/anti-virus programs can bring any system to its knees... Your help is appreciated Too few details to give any definitive answers... Cheers Jonny -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Dovecot SSL issues
Quoting Tom Hendrikx t...@whyscream.net: * Trying protocol imap/ssl, Port 993: If you use imap/ssl/novalidate-cert it will ignore the mismatch. From what I understand, it doesn't like the certificate. The certificate is fine, just a hostname mismatch as Tom Hendrikx said. This error is harmless, but you could set up dovecot to listen for both ssl and non-ssl connections, and set up your webmail to use the non-ssl connection: ssl over localhost is probably a waste of cpu cycles. True. Or add /novalidate-cert, which would remove the error, but still consume the cycles. The novalidate-cert would also ignore any self-signed certificate warnings... -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
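[Editor's note: the same skip-validation behaviour can be reproduced outside the webmail client for testing. This curl sketch is an assumption for illustration only -- the hostname and credentials are placeholders, and /novalidate-cert itself is the c-client/PHP connect-string syntax, not a curl flag:]

```
# -k / --insecure tells curl to accept a certificate whose hostname
# doesn't match (or is self-signed) -- the CLI analogue of the
# /novalidate-cert connect-string option
curl -k --user joe:secret "imaps://mail.example.com/INBOX?ALL"
```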
Re: [Dovecot] dovecot user
Quoting David Halik dha...@jla.rutgers.edu: Well, I don't know how you feel about it, but you could always go with something similar to what courier does and call it doveauth while keeping the real dovecot user for the rest of the processes. +1 -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Upgrade path questions
Quoting Thomas M Goerger t...@umn.edu: We are currently running Dovecot v1.1.6 on our servers, and are contemplating an upgrade to 1.2, or 2.0 soon. We are wondering how many organizations are still running a 1.1 version of Dovecot, and if anyone has any thoughts on this transition. Probably a lot running 1.1 still. I only upgraded late last year because I needed additional functionality (see below), otherwise I'd still be on 1.1 now. Have you upgraded from 1.1 to 1.2? What are your experiences with this? Yes, and it was painless. All the info you need is on the dovecot wiki. Have you upgraded from 1.1 to 2.0 directly? No. What are your experiences this way? We are also running an environment with both mbox and maildir formats. How many of you are running similarly, or are running solely maildir or mbox? We ran solely mbox with 1.1. The reason I upgraded to 1.2 was to be able to run primarily mbox, but also run a few maildir accounts for shared folders with per-user flags with easy time-based expiration (retention policy). So we just have a couple maildir accounts used exclusively as shared-folder accounts (with per-user flags and auto-expiration of messages after a set retention period) and the rest are still mbox (with no retention/expiration, no shared folders, etc). If it wasn't for the desire to do the shared folders with expiration, we'd have stayed with 1.1 still. We're just looking to gather information going forward, and anything you might be able to contribute would be very helpful. It was a painless process for me. No problems. We upgraded on CentOS 5.x using RPM packages from atrpms.net, going from dovecot-1.1.18-1_95.el5.x86_64.rpm to dovecot-1.2.4-0_99.el5.x86_64.rpm if that's any help. :) Thanks! 
* Tom Goerger - Email/Unix System Administrator * University of Minnesota - Email: t...@umn.edu * Operations, Infrastructure and Architecture - Phone: 4-5804 * Internet Services - Office: 626J WBOB -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] First time Dovecot user, really impressed so far. What is best IMAP enabled webmail package to go with Dovecot?
Quoting Stan Hoeppner s...@hardwarefreak.com: So, what's the best FOSS IMAP enabled web mail front end with a modern look/feel? I'd like to run it on lighttpd, which I'm already using, not apache. www.horde.org would work (there are Debian ports). As to which is best, depends on who you listen to. :) -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] First time Dovecot user, really impressed so far. What is best IMAP enabled webmail package to go with Dovecot?
Quoting Stan Hoeppner s...@hardwarefreak.com: I've looked a little at both now and am still reading. One thing I don't like is that I'm seeing requirements for a SQL server. That adds unnecessary complexity to the system and I'd rather avoid it if possible. IIRC, one of the You can/could run Horde/IMP without a SQL DB. Some Horde modules may not work without it, but the basic e-mail functionality can be used without any SQL DB. Some features may also be slower without it, but that doesn't mean it won't work or be useful without it. -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Multiple postfix + single dovecot - optimal configuration
Quoting Tomek ti...@o2.pl: For now I don't know how to glue these servers: - if I should install dovecot on both postfix machines and use the third server as an NFS share? This will work, but doesn't scale well and performance problems may be hard to pinpoint. You have to be very careful when using NFS like this, but it is possible and fairly easy conceptually. - or maybe it is possible to communicate with dovecot on the third server via LMTP? No idea... You could surely configure the two postfix machines to deliver to the dovecot server via LMTP. Very similar to your next version below if used that way. - or should I run a third postfix daemon (without antispam etc - simple configuration) on the same machine as dovecot only for delivering local mails? You could do that. Scales better IMHO than option one... I ran this way for years (but don't anymore). You could also cluster the machines and use a cluster filesystem, but that may be outside your comfort zone. I will be grateful for any advice, howto or good practices example. Depends on a lot of factors, so in the end you will have to decide... Good luck with it. -- Regards, Thomas. -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
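[Editor's note: Dovecot gained an LMTP listener in v2.0, so the LMTP option is workable there. A hedged sketch of what the two Postfix relays might carry in main.cf -- the hostname is a placeholder, and port 24 is just a common choice for LMTP-over-inet:]

```
# main.cf on each Postfix box: hand accepted mail to the Dovecot
# server's LMTP listener instead of delivering locally
mailbox_transport = lmtp:inet:dovecot.example.com:24
```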
Re: [Dovecot] HA Dovecot Config?
Quoting Hugo Monteiro hugo.monte...@fct.unl.pt: Anyone used FileReplicationPro? I'm more interested in low bandwidth, 'cheaper', replication. Rick If data consistency isn't a must, you can always perform timed rsyncs. Yes, but if there is a lot of data, FileReplicationPro is more efficient than an rsync. Of course, rsync is cheaper... ;) R's, Hugo Monteiro. -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] HA Dovecot Config?
Quoting Rick Romero r...@havokmon.com: They claim to be real-time, and with kernel hooks into reads/writes, it seems promising to run on top of the OS.. Yes, but that doesn't mean it will work in the cluster environment... It also says things like: There are different types of file locks. In addition to these differences, each OS has its own set of rules regarding file locks as well. FRP's replication of locked files will vary in relation to the way the OS treats these types. However, one rule is consistent throughout all operating systems in that exclusive file locks will prevent FRP from replicating data. See that last line there??? Want to bet something in the mail setup (MTA, dovecot, system jobs like backup, etc) put an exclusive lock on files from time to time? Again, a fine backup system, and depending on your needs it might be okay for a failover setup, but not for an active-active setup. For that you need a lock manager, which they promise in the future but don't deliver yet... Rick -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
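[Editor's note: the "exclusive file locks" caveat quoted above is easy to demonstrate on any Linux box with util-linux's flock. A runnable sketch -- the lock file is a temp-file stand-in for a real mbox:]

```shell
lockfile=$(mktemp)

# take an exclusive (LOCK_EX) lock on fd 9 -- the kind of lock an
# MTA or dovecot takes while writing an mbox
exec 9>"$lockfile"
flock -x 9

# a second, non-blocking attempt -- the position a replicator like
# FRP would be in -- is refused while the lock is held
if flock -n "$lockfile" -c true; then
  status="replicable"
else
  status="lock busy"
fi

exec 9>&-   # release the lock
echo "$status"
```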
Re: [Dovecot] HA Dovecot Config?
Quoting Rick Romero r...@havokmon.com: See that last line there??? Want to bet something in the mail setup (MTA, dovecot, system jobs like backup, etc) put an exclusive lock on files from time to time? I thought the entire reason (ok not the _entire_ reason but..) for Maildir was to avoid locks and allow easy concurrent access to a mailbox through many different applications. Okay, so if you are real careful (use maildir instead of some other format like mbox, don't use it for your dovecot index/control files, don't do native backups but use a snapshot, don't involve any other programs which might introduce locks, etc.), then you can probably get away with it. But I still wouldn't bet on it. So while I do agree that file locking is a possible problem in an active/active setup with FRP, I think it's possible - as long as the admin is aware of the risks. And the ways to avoid locking... And assuming locking is the only issue (other issues may arise depending on implementation). Yes, there are definitely things to be aware of. I'm not sure if this is the place to hash it out, but I think that while it's not a cheap solution it may fit MY environment - and possibly others who use Maildir. OP didn't specify many details, so assumptions were made... On my system, I share mbox files and dovecot indexes, and they both have locking... I guess everyone is as cheap as I am and hasn't set it up yet :) Uhm, DRBD is free and this suggested alternative isn't AFAIK, so I don't think it is being cheap... I'm thinking it is more about reputation and install bases... Rick -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] HA Dovecot Config?
Quoting dove...@corwyn.net: As I look at upgrading the mail server, I'd like to change to a higher availability configuration (where the server can fail and I don't have to reconfig my imap users). Sounds like a great plan! For the SMTP that's easy, because I can use multiple MX records, and I can redirect a port forward from one server to another. But IMAP doesn't have the same functionality, because it's the storage that matters. True for SMTP as long as you handle the SSL certificate issue, or don't use encryption. For IMAP to be truly HA, you will need shared storage of some sort. What's the best way to do that? Clustered servers using a SAN? NAS? some sort of appliance in front? Suggestions? There is no single best way since it will depend on your budget, skills, etc. Certainly a SAN is one way to go, and allows for possibly active-active setups (depending on file system) and great flexibility. You could also try that with NAS if you are careful enough. SAN is more complex to set up than a NAS, but NAS is harder to set up correctly for dovecot than a SAN would be, so flip a coin there. You can emulate a SAN with something like DRBD if budget doesn't allow a real SAN (that is what I do). Or you could do multi-attached active-passive disk systems (external disk tray is physically connected to 2 machines in active-passive setup). Which to use depends on knowledge/skill, costs/budget, type of cluster/failover needed, vendor support if that matters to you, etc. I setup mine as a pair of redundant front-end firewalls (linux heartbeat) which connect to a trio of Red Hat Cluster Suite machines using DRBD+GFS (two nodes do DRBD+GFS and handle SMTP+POP3+IMAP4, while the third node does _NOT_ do DRBD+GFS, and simply does the webmail interface). Thanks! Rick -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
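[Editor's note: for anyone pricing the DRBD route, the "emulated SAN" amounts to one resource stanza per replicated device. A minimal drbd.conf sketch (DRBD 8.x style) in which the hostnames, devices, and addresses are all placeholder assumptions, not Eric's real config:]

```
# protocol C is fully synchronous replication, which is what you
# want under a cluster filesystem like GFS2
resource mailstore {
  protocol C;
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Running it active-active (both nodes primary, as described above) additionally requires allow-two-primaries in the resource's net section.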
Re: [Dovecot] Scalability plans: Abstract out filesystem and make it someone else's problem
On Aug 12, 2009, at 2:21 AM, Steffen Kaiser skdove...@smail.inf.fh-brs.de wrote: On Tue, 11 Aug 2009, Eric Jon Rostetter wrote: For a massively scaled system, there may be sufficient performance to put the queues elsewhere. Which also allows that the queue can easily have multiple machines pushing and popping items. Pushing is easy. Popping can be more problematic, depending on various factors. But on a small system, with 90% of the mail being spam/virus/malware, performance will usually dictate local/memory file systems for such queues... Well, this discussion reads a bit like local filesystems are prone to lose data on crash. Journaling filesystems, RAID1 / 5 / 10, SANs do their job. The issue I brought up is OS caching and is not dependent on the backend really. Only real solution is redundant storage AND disabling OS caching, which is not cheap and won't be the best performance. Always a tradeoff. However, I guess that Seth and Timo look at the thing from a different point of view, Timo seems to focus on one queue - multiple accessors, whereas Seth focuses on a temporary working directory. Well, Timo looks at it from dovecot's point of view. I look at it from a mail server's point of view (MTA also, etc). Bye, -- Steffen Kaiser
Re: [Dovecot] v1.1.10 released
Quoting Timo Sirainen t...@iki.fi: I've already written some unit tests in src/tests/. I don't really care if you continue them the way I started or use some other toolset. And unless someone else is also willing to actually write the tests, I don't think you should care all that much about their arguing. What branch(es) should I write them for (1.0, 1.1, and/or 1.2)? If multiple branches, which is most important? How should I submit them (mercurial access, patches to you or the list, or some other way)? I'll check out the mercurial repos and see what is there, and see what I can do... -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] dovecot panic after upgrade 1.0.15 to 1.1.2
Quoting Timo Sirainen [EMAIL PROTECTED]: I saw that, but the line "just comment out that line" is kind of confusing as to which line (or lines) I should comment out. Is it just the assert (and leave the return), or is it the whole conditional around the assert, or something else? I meant the assert itself (or change it to an i_error("recent flag bug")). Although few clients really use recent flags, so it probably doesn't matter much what you do. :) I changed the i_assert to an i_error (so I can track when it happens, and who it happens to, still). Anyway I'll try to look into why it's crashing. But it doesn't look like I can reproduce this with my imaptest tool, since it's been running for 2.5 hours now without this crash. Let me know if there is anything I can do to help. You could also see if this patch happens to help (probably not): http://hg.dovecot.org/dovecot-1.1/rev/7f5cc9e805ec I applied it, but it doesn't appear to help judging by the number of i_error log entries from my patch... Hopefully this will clear things up for the users (so they don't notice any problems). I'll let you know if not. :) -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Experience moving mailboxes from Dovecot 0.99.14 to Dovecot 1.07 = Improvement possible
Quoting Charles Marcus [EMAIL PROTECTED]: rpms for centos available on atrpms.net Sadly not for CentOS 3.x, only for CentOS 4/5... :( Anyone know about Dovecot 1.1.x rpms for CentOS/RHEL 3.x? -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Experience moving mailboxes from Dovecot 0.99.14 to Dovecot 1.07 = Improvement possible
Quoting Charles Marcus [EMAIL PROTECTED]: On 8/6/2008, Eric Rostetter ([EMAIL PROTECTED]) wrote: Anyone know about Dovecot 1.1.x rpms for Centos/RHEL 3.x? I'd be more interested in upgrading the server to a reasonably recent version of the distro... Unfortunately, it isn't a redundant setup, so an upgrade is downtime. I've thought about doing an on-line (e.g., yum) upgrade from 3 to 4, but I'm not sure 4 would qualify as reasonably recent and it would still require a reboot, but this is an option and would get me the new dovecot rpms at least... Since there is no good way to do an on-line upgrade from CentOS/RHEL 3 to CentOS/RHEL 5, that isn't really an option at this time (too much downtime). I've also had machines that were hardware frozen at older OS versions... Though that is not the case in this instance (it was for my print server I had to recently deal with). This is one huge reason why I like gentoo so much. It has nothing to do with gentoo, IMHO. As long as I update it regularly, I never have to worry about a massive update that breaks everything. Same can be said for most distros, but I can't afford the downtime of the constant upgrades which mean constant reboots... That is why people pick an enterprise solution like RHEL/CentOS, so they can have better uptime (with support) than non-enterprise systems... I regularly have machines with 2 or 3 years of uptime before I need to reboot them for an upgrade (they are behind firewalls, in case you wonder how I get along on such old kernels). Obviously, RHEL/CentOS 3.x will reach end of life, and I'll need to upgrade eventually because of that, but the more I can put it off, the better... But sometimes you just need to bite the bullet, and that day may be close at hand for this server... Or, I can just roll my own RHEL/CentOS 3 rpm package also... :) Which is less work than an OS upgrade at least... Best regards, Charles -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] backup strategy
Quoting Sean Kamath [EMAIL PROTECTED]: So... If you want tape backups, buy ArcServer and Netware because it doesn't work on Linux? This is straying off the intended topic... Whilst the whole rsync or other on-line solution is fine for those who want that, my solution was a little more direct: use dump That's what I've always used too. (ufsdump on Solaris). Don't use dump on Linux, it won't work (so says Linus: http://dump.sourceforge.net/isdumpdeprecated.html, except the links don't work. Sigh. I can't find the direct link.). At least not on live filesystems. That's because he later retracted that statement. Dump is as reliable on Linux as anywhere else. Of course, you have to think about your backups. But I think it's a worthwhile investment to actually think about your backups, your rotation and retention schedule, and not just trust the vendor. Agreed. And realize that no one backup system works for everyone, or every setup. You will have to think a bit to come up with a backup system that fits your own needs and resources. Sean -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
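[Editor's note: the live-filesystem caveat is usually addressed by dumping a snapshot rather than the mounted filesystem. A pseudocode outline using LVM -- volume group, sizes, and paths are hypothetical, not from either poster's setup:]

```
# freeze a point-in-time view of the mail volume, dump that, drop it
lvcreate --snapshot --size 2G --name mailsnap /dev/vg0/mail
dump -0u -f /backup/mail-level0.dump /dev/vg0/mailsnap
lvremove -f /dev/vg0/mailsnap
```

The snapshot gives dump a consistent, unchanging image even while mail keeps being delivered to the live volume.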
Re: [Dovecot] v1.1 status and benchmarks
Quoting Timo Sirainen [EMAIL PROTECTED]: Yes, Dovecot hasn't been a very good POP3 server. With v1.1 there are several improvements that should make it a lot better, although maybe still not perfect.

It may not be as good compared to some (like tpop3d), but it certainly rocks compared to wu-imapd's pop3. Your mileage may vary, and be related to what you are used to using.

-- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Will pay $500 towards a Dovecot feature
Quoting Steffen Kaiser [EMAIL PROTECTED]: On Thu, 24 May 2007, Johannes Berg wrote: You need to start distinguishing between SSH the protocol and SSH the shell implementation; afaict the protocol should allow any use like this without ever granting access to a shell, like sftp-server etc.

Can SSH authenticate Virtual Users against Dovecot? Get the Virtual Home directory, etc.? How do you restrict Virtual Users with the same system uid from overwriting other Virtual Users' files over SFTP?

I learned the last time this came up that people who can't understand why overloading IMAP with other protocols is bad also can't understand how ssh authentication works, or what the difference is between a protocol and an application. The fact is, 99% of the people who want to add additional protocols to the IMAP protocol just don't care about any alternatives. They want to overload the IMAP protocol and they won't consider any other options. Trying to explain to them that ssh authentication can handle virtual users is just going to result in being flamed as ignorant. I say this from experience on this list.

-- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Will pay $500 towards a Dovecot feature = calender ?
Quoting Marc Perkel [EMAIL PROTECTED]: Yes - with the ability to use IMAP as a connection channel you could do anything that Microsoft Exchange does and more. The idea is that you can establish a connection between any server app and any client app.

Yes, this is correct, and what some people are asking for.

Generally you would want it to be somewhat email related but it's wide open and doesn't have to be.

But managing this becomes more complex as you add more functions and the functions diverge from email in nature.

And what's wrong with unlimited functionality?

Nothing. It is how you get there that matters. The problem of putting all your eggs in one basket is:

1) It doesn't scale well.
2) If you lose the basket, you lose all your eggs.
3) The basket becomes more complex to program, debug, audit, manage, document, etc.
4) The basket _may_ become slower, consume greater resources, etc.

Keep in mind that one of the reasons people buy Exchange is because Exchange does things that people want. And one of the reasons so many of them complain about Exchange is because it does most of them poorly, is hard to manage and maintain, and if it breaks you lose access to everything, not just to one thing. A calendar is one of many examples.

But to start with I'm thinking more in terms of controlling server side email settings.

For which various protocols already exist... As already stated, unless this goes through some standards body, it probably won't be widely adopted or used... So IMHO, the place to start would be with trying to define a standard and get support for it, rather than coding non-standards-based code that will only be adopted by a few...

-- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Thinking Outside the Box - Extending IMAP
Quoting Johannes Berg [EMAIL PROTECTED]: On Mon, 2007-05-14 at 11:39 -0500, Eric Rostetter wrote: You can set up an ssh tunnel on the server on any port. The user then sets up to connect to that port. The authentication can be done any way you want, or not at all.

We're not talking ssh logins to the server, we're talking ssh tunneling.

Actually, I was thinking ssh logins :)

Huh... Not sure why, but... This sounds like it would require both ssh server modifications and e-mail client modifications. As such, you may not get a lot of buy-in to your idea. At that point, you're almost halfway to creating a new protocol anyway...

* the imap service you provide is a pre-authenticated imap session so that authentication/encryption is in ssh. I read my mail this way all the time.
* the ssh also provides a few other services that you can use

Seems to me that instead of adding plugins to dovecot and the e-mail client, you've added subsystems and plugins to the ssh server and e-mail client. So you've just traded one server/client combination for another.

Thus, what you get is exactly what you want: a service that provides multiple virtual services within a single existing connection.

But since you've had to modify the client and server, why not just do this with any old client/server protocol? What is so special about ssh in this case? I'd rather just tunnel the imap via ssh, and use the existing ssh tunnel to do pre-auth for other services... Seems more trivial, as we're only modifying the client, not the server...

But what do I know/care. I've always been happy with multiple protocols.

One reason I like multiple protocols, each with their own server code, is that it scales well. I can put each service on a separate machine if I need to, I can re-prioritize them individually, I can proxy them with ease, etc. When you start jamming lots of protocols into one code base, not only is it harder to audit and debug, it is harder to scale. Yes, you can still scale with load balancers and such, but that introduces additional cost and complexity which isn't needed when the services are isolated. But, I guess not everyone needs to scale, and not everyone is on the server end (and yes, things always look different from the client end).

johannes -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Thinking Outside the Box - Extending IMAP
no way the METADATA solution could possibly address both of those issues.

METADATA is for storing and retrieving data, not for performing an action. So, yes, it is a possible solution for things like updating a blacklist or whitelist, but not for things like sending an e-mail (unless you want to create a batch mechanism for doing so).

Andy. -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Thinking Outside the Box - Extending IMAP
Quoting Marc Perkel [EMAIL PROTECTED]: Here's some thoughts I'd like to throw out there. I know it's not standard IMAP protocol but someone has to try new ideas first and I want to see what people (Timo) think of this.

I don't think you will find many supporters, other than the usual crowd who always wants SMTP-over-IMAP support. Here's my initial thoughts on this.

Suppose we extended IMAP to include an EXECUTE command as follows:

You'd be better off either making these generic action verbs like add and remove (if each is meant to be a self-sufficient, one-line action), or calling them modes (if the idea is to switch contexts to some other protocol such as smtp, calendars, etc).

On the server side is a config file

Yes, that would be critical IMHO, or at least that there was some centralized way to control it, and not Dovecot-specific if you wanted the idea to spread past dovecot.

parameters passed. For example, suppose there is a command to add a user to a server side blacklist.

100 execute blacklist add [EMAIL PROTECTED]
100 ok

You'd be better off with a generic add which took parameters like:

100 add blacklist [EMAIL PROTECTED]
101 add user [EMAIL PROTECTED]
102 add whitelist [EMAIL PROTECTED]

You'd then of course have a remove or similarly named function to undo the add. You could also do a set or modify and so on. The first argument to add (or whatever) would say what to execute (via a map file/table, for example). This makes the implementation a bit more generic and independent of any executable.

Dovecot would open a two-way connection to the server application allowing the client to talk to any application that is configured and can send and receive text. The connection persists until the server end terminates or the client closes the connection.

I'd rather see these as plugin type applications, but that certainly limits their adoption...

With a tool like this one can write generic applications easily that would greatly expand what email clients can do interacting with the server.

Sure, but why not just use a protocol for each, rather than layering them on top of IMAP? Do you really think the overhead of opening a new connection is so great? Debugging sure would be easier if they are separate...

Not only can settings be changed, but you could interact with server side calendars, pick up voice messages from phone systems, run any sort of groupware, all over a generic IMAP connection with this simple extension.

This is more like the mode idea I mention above, IMHO. Example:

100 EXECUTE calendar
100 ok
100 list schedule today 8:00 10:00
100 8:00 make coffee
100 9:00 meeting with boss
100 9:30 Call Joe Blow at 415-555-1212
100 ok
100 quit
100 ok

And when quit happens, what mode are you in? What if the task executed changed the IMAP state; how would IMAP know that? Seems like keeping track of state would be next to impossible, and would require the IMAP session to reinitialize after the execute command, which would be about the same overhead as just using another connection in the first place (a bit less, but...). In other words, unless the task executed was specified as to never change the IMAP universe (couldn't deal with SPAM filtering, password changing, sorting/deleting/adding mail/folders, etc.), then when it was done executing the IMAP server would have to re-initialize to catch any changes, and this kind of defeats the purpose of the proposal IMHO.

One thing I'd like to use it for is an outgoing SMTP connection to send outgoing email over IMAP. A session might look like this:

Yeah, I knew that was coming. :) And of course, this very well could change the IMAP environment, and hence would require an IMAP session reset, which means why do it?

Who likes this idea?

Not me. I think using different protocols for different things... Now, that doesn't mean dovecot couldn't support more protocols. Just as it already supports multiple protocols (pop3 and imap), it could add others... No reason not to, and no reason to piggyback them through the IMAP session, IMHO.

-- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
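For what it's worth, enabling several protocols side by side is already a one-line choice in the Dovecot 1.x configuration (a sketch; the ssl variants only apply if certificates are configured):

    # dovecot.conf -- serve each protocol from its own listener,
    # rather than tunneling one through another:
    protocols = imap imaps pop3 pop3s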
Re: [Dovecot] Email client downloads previously read messages as new during migration.
Quoting Eric and Barbara Sammons [EMAIL PROTECTED]: I am migrating from UW to dovecot and using mbox.

I did the same. I'm sure you'll be pleased by the results. I assume you are using 1.0 and not 0.99 versions?

I have been able to find mbox to maildir migration information and tools; however, I have had little luck finding any tool or information for performing such a migration where I wish to keep mbox. (I know I should migrate to Maildir, but in this case I wish to take baby steps.)

There is nothing wrong with staying with mbox. There may be reasons it is best or better to do so (then again, there could be reasons to go to maildir instead -- it all depends on your needs and setup). There are few pages about the migration because there is so little to do for it.

I have set up dovecot pop3 communication and I go to my email client and everything works fine with the exception that all mail I have previously read (when using UW as my POP server) is downloaded again as NEW. Is there a way to prevent this from happening without setting my client to delete mail from server?

Did you set up your UID format properly in dovecot.conf? It should be:

pop3_uidl_format = %08Xv%08Xu

Other than that, it should pretty much work from my experience...

thank you!

Not sure I helped any, but you're welcome in any case.

-- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
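For reference, a minimal dovecot.conf sketch for this kind of UW-to-Dovecot POP3 migration while keeping mbox (the mail_location path is an assumption; adjust it to wherever UW kept your spools):

    # Generate the same POP3 UIDLs UW-IMAP did, so clients do not
    # re-download previously read mail as new:
    pop3_uidl_format = %08Xv%08Xu

    # Typical UW-style mbox layout (path is an assumption):
    mail_location = mbox:~/mail:INBOX=/var/mail/%u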
Re: [Dovecot] Maillog rotates, but dovecot still writes logs into old logfile?
Quoting Mart Pirita [EMAIL PROTECTED]:

log_path = /var/log/maillog
info_log_path = /var/log/maillog

Could be dangerous, unless all programs writing to /var/log/maillog use append mode (and it is on local disks, etc).

Now when log files are rotating at 04:00 (maillog -> maillog.1) dovecot still keeps logging into maillog.1 and not into maillog, where sendmail, etc., writes logs.

Yes, sounds right.

The only solution I found is adding a command to restart dovecot into /etc/logrotate.d/syslog:

/var/log/maillog {
    postrotate
        /usr/bin/killall -HUP syslogd
        /etc/rc.d/init.d/dovecot restart > /dev/null 2>&1
    endscript
}

Any other solution?

Have dovecot log via syslog instead of to a file, and the problem goes away.

-- Mart -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
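A sketch of the syslog route in Dovecot 1.x terms (mail is Dovecot's default facility, but verify your syslogd actually routes it to /var/log/maillog):

    # dovecot.conf: leave log_path unset so Dovecot logs through syslog.
    # Then logrotate's HUP to syslogd is all that's needed; no Dovecot
    # restart is required.
    #log_path =
    syslog_facility = mail

Alternatively, logrotate's copytruncate option rotates the file in place, which also avoids the restart, at the small risk of losing a few lines written during the copy.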
Re: [Dovecot] 1.0.rc29 released
Quoting Dean Brooks [EMAIL PROTECTED]: I have to agree with you on this. I'm relatively new with Dovecot and have been evaluating it for deployment in a production environment. I must say that Dovecot has the most unusual development method of a large-scale project I've seen.

I've seen about the same from other pre-1.0 software projects. The usual problem (and I have no idea if this describes Timo accurately or not, though it appears as such to me) is that the people (usually one person) heading the project are not experienced with large-scale software releases, and they make some simple mistakes. Often they look back years later and reflect on how they were in over their heads when they started, etc. But, let me also say that in the above paragraph I am thinking of 3 projects, all of which I use in production, all of which I love, all of which have worked out their problems after their first major release. I would say, in general, these types of things are just growing pains and shouldn't be too surprising to most people. What would be surprising (and bad) would be if they continued through the following releases. Timo has already admitted his errors on the mailing list recently, and has already sought advice on how to fix them in the future, so I would think the future is bright for dovecot. And eventually, we'll all forget the past...

There have been so many show-stopping bugs in the past 10 releases that I wouldn't even consider this a candidate for a Beta release at this point, let alone a production release.

Actually, that is more due to the large number of RC cuts Timo makes, rather than the number of bugs in the code, IMHO. I've found several of the releases to be very stable for me, as have others. Of course, I'm very selective as to which I install and test; I don't test each RC that comes out (in particular, since I run mbox, I don't run ones that have only maildir fixes, etc).

I'm very appreciative of all the hard work the author(s) have put into this, but I think at some point they need to take a hard look at the way they develop and release distributions. It seems extremely sloppy and I know it's confusing to others besides myself.

I believe Timo already has done so, based on his comments on the list and his requests for help on things like versioning systems, test suites, etc. If you've been reading the list over the last couple of months, I think you would recognize this. But then, I don't speak for Timo.

-- Dean Brooks [EMAIL PROTECTED] -- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] 1.0.rc29 released
Quoting Kenneth Porter [EMAIL PROTECTED]: That's fine for isolated users supporting only themselves. But it won't win any mind share in the boardroom. If you want widespread deployment to get proper testing (and hence a larger user base) you need a version number that gives business people the confidence to install it. Otherwise you'll be limited to avant garde hobbyists who have nothing to risk.

Well, is there really no one between the boardroom and the avant garde hobbyists? I didn't realize there was such a void between those levels...

Once 1.0 locks down, you should see a huge expansion of users.

Yes, well, of course. We all know that already.

Bug fixes (not features!) in 1.0.1 will see further expansion. Any new features (like the recent addition of the wiki to the tarball) should be in the scary and experimental 1.1, not 1.0.

That is simply documentation, not really a feature. And it is actually fairly normal to add and refine documentation during an RC release. I agree in general with the no-more-features requests, but docs are really a whole different thing. Most shops are working on the docs right up to the last minute for every release.

-- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!
Re: [Dovecot] Version numbering
Quoting Charles Marcus [EMAIL PROTECTED]: By the way - what is 'ultruism'? At first I thought it was a typo, but you did it twice... ;)

I'm very consistent with my typos. ;) Substitute the word ALTRUISM where appropriate.

-- Eric Rostetter The Department of Physics The University of Texas at Austin Go Longhorns!