sieve-extdata plugin breaks on pigeonhole 0.5 ish
Hi,

I wonder if someone could give me a little help bringing the sieve-extdata plugin up to date so it is usable with the latest Dovecot/Pigeonhole. Trying to use the two together at present, I get an error:

managesieve: Fatal: Couldn't load required plugin /usr/lib/dovecot/sieve/lib90_sieve_extdata_plugin.so: dlopen() failed: /usr/lib/dovecot/sieve/lib90_sieve_extdata_plugin.so: undefined symbol: sieve_sys_error

This appears to be due to the change in error handling in Pigeonhole 0.5, which seems to have removed the sieve_sys_error() and sieve_sys_warning() functions. I'm unclear how this code should be changed (I've commented it out for now - eek).

I find this extension useful for passing user-specific parameters, stored elsewhere in an SQL database, into my sieve scripts - e.g. spam score, or a size threshold for attachments. Perhaps there is some other way to achieve this?

Any hints on how to fix this would be appreciated.

Thanks

Ed W
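[Editorial note: the hunk below is an illustrative sketch only, not taken from the actual plugin source. Dovecot 2.3 (which Pigeonhole 0.5 builds against) moved to event-based logging via the e_error()/e_warning() macros; the file name shown and the assumption that struct sieve_instance exposes an `event` member are guesses, so check the Pigeonhole 0.5 headers before applying anything like this.]

```diff
--- a/ext-extdata-common.c   (hypothetical file name)
+++ b/ext-extdata-common.c
@@
-	sieve_sys_error(svinst,
-		"extdata: failed to initialize dict: %s", error);
+	/* Pigeonhole 0.5 / Dovecot 2.3 use event-based logging.
+	 * ASSUMPTION: svinst->event exists on struct sieve_instance. */
+	e_error(svinst->event,
+		"extdata: failed to initialize dict: %s", error);
```

The same substitution (sieve_sys_warning() becoming e_warning()) would presumably apply at the other call sites the linker complains about.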
Re: Sieve matching "size" with user variable?
On 19/03/2019 17:19, Ralph Seichter via dovecot wrote:

* Ed W. via dovecot: My goal is that users can set a user-configurable setting (in an external front end), and if the email size is greater than this size then we will do some processing on it. This particular filter is actually in a global sieve filter.

A global script using per-user parameters? Not what I would choose. I like to generate sieve scripts for individual users (taking their wishes into account, of course), because it gives me the ability to perform some sanity checks. -Ralph

How would you generate scripts for a few thousand users? How would you maintain those thousands of scripts when you make changes to the template?

However, even then the problem remains. Now it's a per-user script, but I want the user to have a web front end so they can say whether they want (some mangling) to happen to mails over a certain size. How do I read that size in the filter file and act on it? (No, I do not want my web front end pushing files into the backend of a cluster of mail server machines.)

Thanks for other thoughts (for now I have passed the variable to an external script which does the check there).

Cheers

Ed W
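For what it's worth, the "generate per-user scripts from a template" approach Ralph describes can be sketched in a few lines. This is a minimal illustration, not anyone's actual setup: the table and column names (user_prefs, max_size_kb) and the "Large" folder are hypothetical.

```python
# Sketch: regenerate per-user sieve scripts from per-user settings kept
# in an SQL database (sqlite3 used here just to keep the example self-contained).
import os
import sqlite3

TEMPLATE = """\
require ["fileinto"];

# Generated for {user}; do not edit by hand.
if size :over {max_size_kb}K {{
    fileinto "Large";
}}
"""

def render_sieve(user: str, max_size_kb: int) -> str:
    """Expand the template with one user's settings."""
    return TEMPLATE.format(user=user, max_size_kb=max_size_kb)

def generate_all(db_path: str, out_dir: str) -> int:
    """Write one script per user; returns the number of scripts written."""
    conn = sqlite3.connect(db_path)
    count = 0
    for user, max_size_kb in conn.execute(
            "SELECT user, max_size_kb FROM user_prefs"):
        with open(os.path.join(out_dir, user + ".sieve"), "w") as f:
            f.write(render_sieve(user, max_size_kb))
        count += 1
    return count
```

Run from cron or triggered by the web front end whenever a user changes a setting; this sidesteps the "variable in a size test" limitation entirely, at the cost of regenerating scripts on change.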
Sieve matching "size" with user variable?
Hi,

I am trying to create a sieve filter which does something similar to the following:

if size :over ${extdata.max_size} { # do something }

This doesn't seem to be supported in recent Dovecot, and size only appears to accept a literal number? I'm not sure I could extract the size into a variable either (to use variable matching). My understanding of sieve filters is that one needs to use something like a match, then use something like SET to put the match into a variable? That syntax doesn't seem to be compatible with the size test here either, so I don't see that I can do this.

My goal is that users can set a user-configurable setting (in an external front end), and if the email size is greater than this size then we will do some processing on it. This particular filter is actually in a global sieve filter. I guess I could use an external executable program, but is there another way to do this?

Thanks for ideas

Ed W
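[Editorial note: for reference, the observation above matches the Sieve grammar. RFC 5228 defines the size test as taking a literal number, optionally with a K/M/G suffix; the variables extension (RFC 5229) only substitutes into strings, not into numeric arguments, so no variable can appear there.]

```sieve
# What RFC 5228 actually permits: a literal number only,
# with an optional K/M/G multiplier suffix.
if size :over 5M {
    # do something
}

# NOT valid Sieve -- variables cannot appear in a numeric argument:
# if size :over ${extdata.max_size} { ... }
```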
Re: dovecot replication (active-active) - server specs
we've got 2 new fileservers, they each have SSDs for new-storage and 7200rpm SATA HDDs in RAID 5 with 10 TB for alt-storage

Friends don't let friends use RAID 5... http://www.baarf.com/ (Use RAID 6 or something else...)

Note, a common counter-argument is that someone has full backups and can survive a rebuild, so the RAID 5 is really just there to increase uptime. I suggest you do the sums on silent corruption and compare with your data size; bit rot seems to be an observable problem now. Scrub your arrays regularly, and where possible use data integrity checks at higher levels (not much exists for Linux, but ZFS offers this for other OSs).

Good luck

Ed W
Panic/backtrace in dovecot 2.2.13
Hi

I'm running into regular problems with dovecot choking on corrupted index files. The main problem is that it doesn't sort itself out and recover. The message below is repeated regularly in the log files (until I delete the index files).

I *think* the trigger to get into this situation might be files being delivered with incorrect S= values in the filename? Which is to say, I am using maildrop to deliver messages and occasionally maildrop seems to write files with incorrect S= names (anyone know why, or how to fix it?). The error logged regarding incorrect S= values is obviously completely different, but I speculate that it could be the earlier cause that gets the index file out of shape as shown here.

Thanks for any help! (Note it's not easy to remove maildrop at present.)

Ed W

Sep 1 07:32:51 mail1 dovecot: imap(...@mailasail.com): Panic: file mail-index-transaction-export.c: line 203 (log_append_ext_hdr_update): assertion failed: (u32.offset + u32.size <= ext_hdr_size)

Sep 1 07:32:51 mail1 dovecot: imap(...@mailasail.com): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x69a9e) [0xedee5a9e] - /usr/lib/dovecot/libdovecot.so.0(+0x69b21) [0xedee5b21] - /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0xede97a69] - /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_transaction_export+0xa36) [0xedfff706] - /usr/lib/dovecot/libdovecot-storage.so.0(+0xa9f50) [0xedffdf50] - /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_transaction_commit_full+0xc4) [0xedffe454] - /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_transaction_commit+0x23) [0xedffe513] - /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_sync_commit+0xef) [0xee0078ef] - /usr/lib/dovecot/libdovecot-storage.so.0(+0x52e05) [0xedfa6e05] - /usr/lib/dovecot/libdovecot-storage.so.0(+0x52040) [0xedfa6040] - /usr/lib/dovecot/libdovecot-storage.so.0(+0x5251a) [0xedfa651a] - /usr/lib/dovecot/libdovecot-storage.so.0(maildir_storage_sync_init+0xf4) [0xedfa68d4] - 
/usr/lib/dovecot/libdovecot-storage.so.0(mailbox_sync_init+0x3b) [0xedfb786b] - /usr/lib/dovecot/libdovecot-storage.so.0(mailbox_sync+0x3f) [0xedfb79af] - dovecot/imap [...@mailasail.com 80.189.197.148 SELECT](cmd_select_full+0x187) [0x80594d7] - dovecot/imap [...@mailasail.com 80.189.197.148 SELECT](cmd_select+0x17) [0x8059f37] - dovecot/imap [...@mailasail.com 80.189.197.148 SELECT](command_exec+0x32) [0x805f1a2] - dovecot/imap [...@mailasail.com 80.189.197.148 SELECT]() [0x805e197] - dovecot/imap [...@mailasail.com 80.189.197.148 SELECT]() [0x805e2d9] - dovecot/imap [...@mailasail.com 80.189.197.148 SELECT](client_handle_input+0x115) [0x805e515] - dovecot/imap [...@mailasail.com 80.189.197.148 SELECT](client_input+0x72) [0x805e8c2] - /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x59) [0xedef8e89] - /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xee) [0xedefa05e] - /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x1c) [0xedef8f1c] - /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x48) [0xedef8fa8] - /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x2e) [0xede9d76e] - dovecot/imap [...@mailasail.com 80.189.197.148 SELECT](main+0x2bd) [0x80520ed] - /lib/libc.so.6(__libc_start_main+0xf5) [0xedce9f25]

Sep 1 07:32:51 mail1 dovecot: imap(...@mailasail.com): Fatal: master: service(imap): child 31315 killed with signal 6 (core dumps disabled)
Re: Is atomic MOVING of messages between IMAP folders possible?
On 05/08/2014 17:29, Greg Sullivan wrote: That's promising that it should be doable. (Yes, all I want is for the move to only occur once - duplicate messages is not a move at all.) I'll forward your suggestions to the Thunderbird and Postbox teams. In the meantime I'll continue to evaluate helpdesk systems and collaborative inbox products. Greg.

I agree with the goal though. I have extremely simple needs for a helpdesk/CMS-type system, and some plugins for Thunderbird would be quite satisfactory for my needs.

Really I need:

- An enhanced address book, possibly reading via vCard from my main business system (bringing in customer details and links to their orders on the main system)
- The ability to force breaking and rejoining of specific message threads (because customers find an old invoice and hit reply to it to send us a support request, plus other customers send you 15 emails (without hitting reply to trigger threading) to describe a single problem). Note I believe this requires rewriting the message, so it couldn't be atomic with current IMAP?
- Enhanced use of flags to mark whether a thread needs further input or is closed

Nice to have would be:

- Logging these state changes somewhere else so that you can get statistics (this can probably be done by polling the state of the IMAP server though?)
- Atomic locking of threads so that we don't get two people answering the same thing. Could be handled through use of flags perhaps?

Thunderbird is helpful in that, in theory, all one needs to do is write the above in JavaScript and drop appropriate display buttons on the email inbox, so even if some external lock manager is needed to arbitrate access, this is no great problem. In practice I lack the time to work on this, but I'm vaguely interested to find out if there is a way to hire plugin developers for Thunderbird?

Good luck

Ed W
Re: [Dovecot] Architecture for large Dovecot cluster (employ an expert!)
Hi

and some other Dovecot mailing list threads, but I am not sure how many users such a setup will handle. I have a concern about the I/O performance of NFS in the suggested architecture above. One possible option available to us is to split up the mailboxes over multiple clusters with subsets of domains. Is there anyone out there currently running this many users on a Dovecot-based mail cluster? Some suggestions or advice on the best way to go would be greatly appreciated.

look in the list archive for similar setups, ask Timo or other people for paid support, wait for people reporting their big setups

It's difficult for me (on the outside) to gauge how many people do pay Timo et al. for services. However, just to put a stake in the ground: I have employed Timo on a couple of occasions, just for small projects, in my case to add new features or fix bugs specific to my requirements. I can very positively recommend this; I found Timo extremely helpful, and although I only paid an affordable amount to have a feature added, he has kindly continued to maintain these features as part of the core software (for which I am extremely grateful).

I'm very satisfied and have to highly recommend Timo. His prices were extremely reasonable and the service he offered was excellent. This is obviously a glowing endorsement; take that as you wish. However, I suspect that sometimes we are all guilty of forgetting that there are humans on the far side of these projects, and for relatively affordable sums we can employ them to both help us out and possibly benefit all users of the software.

I don't have big pockets, but I have successfully asked for enhancements to several open source projects (dovecot/dnsmasq/shorewall/squid and some others) and the whole experience has worked very well for me. Please feel encouraged to employ Timo if you use Dovecot!

Good luck

Ed W
Re: [Dovecot] IMAP ANNOTATE Extension RFC5257
Hi

And FWIW, that RFC is classified as Experimental. There hasn't been a bunch of momentum behind it, at least in terms of adoption/implementations. Mailbox metadata seems to be the more interesting development at this time (RFC 5464). michael

Yes, I know, but for groupware collaboration on mails it is a useful feature - especially in companies where an extremely group-based workflow is used.

Is this the extension necessary to make Kolab work correctly? I would be interested to see further implementation of that. I think Kolab has the most legs at the moment as a way to extend our services with extra groupware features (I would prefer to implement filesystem-based storage of DAV files, but apart from that it looks good and seems to be heading in the right direction).

Anyone want to pitch in to fund development in this area?

Cheers

Ed W
Re: [Dovecot] Crash in dovecot 2.2.6
On 02/11/2013 11:18, Timo Sirainen wrote:

On 29.10.2013, at 10.26, Ed W li...@wildgooses.com wrote: Hi, I recently upgraded from a dovecot 2.1 version to 2.2.6. I now have a single user who occasionally triggers a crash (just this one user, it seems?). The user connects via LiveMail (v14.0.8117.) and IMAP. Oct 29 08:05:26 mail1 dovecot: imap(custo...@example.org): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x6575a) [0xd94cc75a] - /usr/lib/dovecot/libdovecot.so.0(+0x657cb) [0xd94cc7cb] - /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0xd9481991] - /usr/lib/dovecot

There was an assert error message logged before this raw backtrace. What is it?

I'm sorry, yes of course there is. Sorry, tunnel vision. Samples:

Nov 5 06:08:43 mail1 dovecot: imap(u...@example.com): Panic: file mail-index-transaction-export.c: line 203 (log_append_ext_hdr_update): assertion failed: (u32.offset + u32.size <= ext_hdr_size)
Nov 5 06:13:21 mail1 dovecot: imap(u...@example.com): Panic: file mail-index-transaction-export.c: line 203 (log_append_ext_hdr_update): assertion failed: (u32.offset + u32.size <= ext_hdr_size)
Nov 5 07:50:59 mail1 dovecot: imap(u...@example.com): Panic: file mail-index-transaction-export.c: line 203 (log_append_ext_hdr_update): assertion failed: (u32.offset + u32.size <= ext_hdr_size)
Nov 5 07:55:23 mail1 dovecot: imap(u...@example.com): Panic: file mail-index-transaction-export.c: line 203 (log_append_ext_hdr_update): assertion failed: (u32.offset + u32.size <= ext_hdr_size)

Thanks

Ed W
Re: [Dovecot] Transparent Migration from cyrus to dovecot
Make use of the proxy feature. You can add a server entry into your userdb; that way you can literally move users over one by one and flip their server location. You can easily test individual users and move them over individually. Works brilliantly.

Ed W

On 06/10/2013 11:39, Jogi Hofmüller wrote: Hi dovecot people, We are in the process of preparing the migration from a cyrus 2.1 installation to dovecot. Dovecot will be installed on new hardware, so we have separate servers that can/will exist in parallel for a while. Our goal is to do the migration without interrupting the service for our users too much. Currently we tend towards using dsync. So I am asking for best practice suggestions, tips and hints from people who have done such a thing before. Curiously awaiting your replies ;) Cheers! PS: I am subscribed to the list, so no need to include my address in replies. Thanks!
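[Editorial note: the per-user server entry described above uses Dovecot's proxy passdb extra fields. A minimal sketch with an SQL passdb might look like the following; the table and column names, and the idea of keeping the backend hostname in a per-user column, are illustrative assumptions, not the poster's actual config.]

```
# dovecot-sql.conf.ext -- sketch; table/column names are hypothetical.
# Returning proxy=y together with a host field makes Dovecot proxy the
# login to that backend, so users can be flipped one at a time simply
# by updating the host column in their row.
password_query = SELECT password, host, 'y' AS proxy \
  FROM users WHERE userid = '%u'
```

During migration, users whose host column still points at the old (cyrus) server keep working untouched; updating the column after a dsync run flips that one user to the new backend.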
Re: [Dovecot] Dovecot Dsync
On 27/08/2013 09:54, Ben wrote:

On 23/08/2013 13:08, Ed W wrote: [...]

Whilst to some degree I appreciate where you're coming from, and agree with you to a certain extent, I would caution that following the bleeding edge, always running the latest versions, is not without risk or bugs either!

OK, but virtualisation also helps you mitigate this:

- I set up my containers so that I have at least two mount points: one for the operating system, and any data broken out into its own mount.
- This makes it quite simple to duplicate the container and spin up a test version pointing, if required, at the live data.
- Now you can run a test upgrade on the test container. If it works, either swap them around or upgrade the original.

Additionally:

- My choice of distro (gentoo) makes it fairly simple to build binary packages of the software I'm using.
- I then use these binary packages on all my containers, additionally with guided profiles which control which packages and which options we deploy.
- It's fairly simple to roll back most packages to the previous binary version if a problem is detected (logging of package changes is built in).

So it's quite low risk to use such a rolling distro in general. Note, I can't speak for other distros, but gentoo stable is fairly conservative and shouldn't be a problem for an experienced admin to keep up to date. It has the option to unmask bleeding-edge packages where necessary, and this can be useful to hit specific version numbers of software. It's also pretty trivial to keep a private repo of customised packages (ebuilds) with either personal patches or pinned versions of software. (So for example if you run, say, Dovecot with a few custom patches, it's fairly trivial to drop these patches in a directory and then use the package manager to follow stable builds, with your custom patches rolled in for you with each update - this can be very handy for some requirements.)

I don't have the same experience with RPM/DEB so I can't say whether all the same is easy to do, but the key thing is the use of containers/virtualisation to assist with testing and upgrades. Even in the worst case where you have to do a whole OS upgrade, being able to do that in a test container while the live one remains running is a big advantage.

Good luck

Ed W
Re: [Dovecot] Dovecot Dsync
Hi

I'm on an Ubuntu LTS release so the dovecot came from their release. I'd prefer to stay that way unless I really have to...

Everyone is entitled to their own opinions, but IMHO this kind of attitude is a huge detriment to most software projects. I see very little reason to take such policies personally...

1) I use virtualisation (especially lightweight virtualisation such as vservers) so that each service is in its own container. Now if I have no interest in some container and want to let it rot (i.e. as per LTS), then I can just do so.

2) I use a fast-moving rolling distro (gentoo in my case; Arch is probably a good choice also) so that I have the option to stay up to date when I want to.

The end result is you can be as up to date as you want, or let things rot, as you please. Unfortunately, if you want to use a very old bit of software, then you also get to keep all its bugs... Sorry.

Good luck! Hope this inspires you to try a different route!

Ed W
Re: [Dovecot] script to test CATENATE
On 22/07/2013 23:17, Mike Abbott wrote: Attached please find a perl script which tests the CATENATE support in dovecot. I used this to test my CATENATE implementation a few years ago and it runs fine against dovecot in OS X Server.

Hi Mike

Do you think you might re-submit the matching BURL support to Postfix? It seems like it accidentally fell on the floor due to arriving at a bad moment some years back?

Cheers

Ed W
Re: [Dovecot] script to test CATENATE
On 23/07/2013 14:30, Mike Abbott wrote: Do you think you might re-submit the matching BURL support to Postfix? I don't think re-submitting is a good idea unless Wietse & co. request it, which I doubt will happen.

My reading of it at the time was something like "There are no clients that support this. We don't understand the need." Now that there is at least one large client (I'm presuming that iOS does support it?) I think the world has changed, and of course the patch has now had large-scale testing (since I'm presuming again that it's included in the Apple-distributed Postfix version?).

I personally think the idea is perfect and I would like to see it break into mainstream use; from there I think we will possibly see support added to additional clients (I think this is how to break the chicken-and-egg cycle). The idea that you can use IMAP commands to construct a message server-side from bits of other messages, and then send it out server-side, is fantastic.

Please consider having at least one more go. I think there is likely to be a much better reception now that clients exist, the patch is well tested, and Dovecot at least supports the IMAP side. Please...?

Cheers

Ed W
Re: [Dovecot] OT: SAN vs Flash only SAN-less VM architecture for data storage
On 21/07/2013 10:13, Stan Hoeppner wrote: On 7/20/2013 9:20 AM, Charles Marcus wrote: It sounds great, a real win-win as to cost *and* performance... Until you read the article carefully and note the network requirement: Data is synchronously written to another host with a PCIe SSD for data protection and high availability via a simple, private *10GbE* network.

I have no opinion on the subject, but for others who haven't read the article: the 10GbE referred to is used to keep the server in sync with a backup server, presumably in the same room. As such, 10GbE seems reasonable and inexpensive (newer Supermicro boards can come with it built in, standalone cards are reasonably inexpensive, and the new Netgear 10GbE switch is even quite affordable).

As near as I can tell they advocate putting all the storage on the host machine and using the network to sync off the machine (vs a SAN, where all the storage is off-machine). I don't really get where they are going with this solution though?

Ed W
Re: [Dovecot] Proxying, pertinent values and features, SNI
On 04/04/2013 03:56, Christian Balzer wrote: 2. Despite the fact that it will be trivial for anybody to determine that OEM A is now hosted with us, a SAN SSL cert makes all the SANs visible in one go, something they probably don't want.

But someone smart enough to look at a certificate is probably also smart enough to go to http://robtex.com and do some reverse IP lookups on your IPs... I think the difference is minor - even if you used a whole bunch of IPs, one per customer, if they are near each other then a few Google searches and some use of robtex will quickly show up your customer base.

Cheers

Ed W
Re: [Dovecot] Proxying, pertinent values and features, SNI
Hi

I presume the best way to support all(?) clients out there is to have local_name sections for SNI first, and then local sections for IP-address-based certs. It is my understanding that SNI needs to be requested by the client, so aside from client bugs (nah, those don't exist ^o^) every client should get an appropriate response for TLS. Has anybody done a setup like that already?

Although not what you asked for, just so you are aware: GoDaddy (boo, hiss, etc.) offer reasonably inexpensive certs with multiple subjectAltNames (SANs). This means you can have a single cert which is valid for lots of completely different domain names. The mild benefit is that this doesn't require SNI support for SSL (which I'm unsure is supported by many mail clients?). Although it's more expensive, I think it's a good solution (I'm using it for a small 5-domain installation).

Good luck

Ed W
Re: [Dovecot] Please help to make decision
I believe a variation on that theme is also to double up each machine using DRBD so that machines are arranged in pairs; one can fail and the other will take over the load, i.e. each pair of machines mirrors the storage for the other. With this arrangement only warm failover is usually required, and hence DRBD can run in async mode and the performance impact is low.

Note I don't use any of the above; it was a setup described by Timo some years back.

Good luck

Ed W

On 25/03/2013 18:47, Thierry de Montaudry wrote:

Hi Tigran,

Managing a mail system for 1M-odd users, we ran for a few years on some high-range SAN systems (NetApp, then EMC), but were not happy with the performance; whatever we tried - double heads, fibre, and so on - they just couldn't handle the IOs. I must say that at that time we were not using dovecot. Then we moved to a completely different structure: 24 storage machines (plain CentOS as NFS servers), 7 front ends (webmail through IMAP + POP3 server) and 5 MXs, with all front end machines running dovecot. That was a major change in the system's performance, but we were still not happy with the 50T total storage we had. We had huge traffic between the front end machines and storage, and at the time I was not sure the switches were handling the load properly - not to mention the load on the front end machines, which sometimes needed a hard reboot to recover from NFS timeouts, even after trying some heavy optimizations all around, particularly on NFS.

Then we looked at the Dovecot director, but not being sure how it would handle 1M users, we moved to the proxy solution: we are now running dovecot on the 24 storage machines, with our webmail system connecting via IMAP to the final storage machine, as well as the MXs with LMTP; we only use dovecot proxy for POP3 access on the 7 front end machines. And I must say, what a change. Since then the system has been running smoothly: no more worries about NFS timeouts, and the loadavg on all machines is down to almost nothing, as is the internal traffic on the switches - and our stress. Most important, the feedback from our users told us that we did the right thing.

Only trouble: now and then we have to move users around, as if a machine gets full the only solution is to move data to one that has more space. But this is achieved easily with the dsync tool.

This is just my experience; it might not be the best, but with the (limited) budget we had, we finally came up with a solution that can handle the load and got us away from SAN systems, which could never handle the IOs for mail access. Just for the sake of it: our storage machines each have only 4 x 1T SATA drives in RAID 10 and 16G of mem, which I've been told would never do the job, but it just works. Thanks Timo.

Hoping this will help in your decision,

Regards,

Thierry

On 24 Mar 2013, at 18:12, Tigran Petrosyan tpetr...@gmail.com wrote: Hi We are going to implement Dovecot for 1 million users and will use more than 100T of storage space. We are now examining 2 solutions: NFS or GFS2 (via Fibre Channel storage). Can someone help with the decision? What kind of storage solution can we use to achieve good performance and scalability?
Re: [Dovecot] Dovecot 2.2 LEMONADE extensions
On 26/03/2013 13:39, Jan Phillip Greimann wrote: Hi there, I read an article about dovecot 2.2, which includes the LEMONADE extensions, and was fascinated by the "forward without download" feature. We have a small internet uplink in our office and our CEO loves to receive mails with large attachments; he also replies to/forwards them, so every time the full attachments get downloaded and uploaded again. Now the question is: is LEMONADE supported by desktop mail clients like Thunderbird, or just mobile clients? (Google said nothing about this; maybe I searched for the wrong words.) I hope someone can help me with this.

Which client are you using? My understanding is that you will need an SMTP server which supports such a feature. Apple patch Postfix to support this using the BURL extension; however, for whatever reason the patch has not been picked up by Postfix: http://www.opensource.apple.com/source/postfix/postfix-229/patches/burl.patch I think it would be worth rattling the postfix list to see if it could be reviewed.

Note, my favourite solution would be a new RFC which triggers Dovecot to pass a specified message from a specified folder to SMTP. This would mean you could use all the IMAP features to compose your message on the server, probably bypassing lots of downloading. Further, it would mean no duplicated data when moving the message to the Sent folder, since such an operation would be all done and tracked via IMAP. So you would compose the message in Outbox, ask Dovecot to send it, then (possibly atomically) move it to the Sent folder. However, a) there is no such RFC, and b) there is no client mailer which supports it. I think Apple might be the people to rattle to get such an idea off the ground though - they seem to have the desire to make it happen (add in the K-9 developers and submit a patch to Mozilla, and at least there would be basic groundwork...).

Cheers

Ed W
Re: [Dovecot] Dovecot 2.2 LEMONADE extensions
On 28/03/2013 22:10, Timo Sirainen wrote: On 28.3.2013, at 22.44, Ed W li...@wildgooses.com wrote: My understanding is that you will need an SMTP server which supports such a feature. Apple patch Postfix to support this using the BURL extension, however, for whatever reason the patch has not been picked up by Postfix: http://www.opensource.apple.com/source/postfix/postfix-229/patches/burl.patch I think it would be worth rattling the postfix list to see if it could be reviewed

Wietse mentioned a few months ago he's looking into it. I don't know what happened since. Also there's a good chance that Dovecot v2.3 will have an SMTP submission server with BURL support (that will simply forward the mail to a real SMTP server).

If you know any Apple devs, then consider running some kind of "submit message xx to SMTP" extension past them? It seems to be a far better solution than BURL and all the other workarounds, and it completely solves the Sent Items duplicate transmission, etc.

Cheers

Ed W
Re: [Dovecot] Dovecot with sasl/imaps/postfix and thunderbird
On 18/03/2013 03:10, Alex wrote: https://www.rapidsslonline.com/ less than $20/year, takes literally 15 minutes from start to having a certificate. Well, maybe 30 minutes the first time when you need to read everything. There are probably dozens of other sites offering similar services; I've used this one several times. Namecheap reseller: $5/year https://www.cheapssls.com/

I ended up buying one from rapidsslonline, after I learned they require authorization from only the subdomain, not the top level. I'll check out cheapssls.com as well. I'm not quite sure yet, but it seems these are only supported by the most current browsers? If a customer visits with, say, IE8 or IE6, are they going to have an issue? (Not that they ever should be, or that it would probably affect my purchasing choice; I was just curious because I'm seeing some old browsers and fielding some support issues now.)

It's not clear if you mean cheapssls.com by the above? However, I just tried Win XP 32-bit with IE8 on one of my certs from cheapssls and saw no problems... Cheapssl appears to be a reseller for the cheapo PositiveSSL and RapidSSL certs; there is a couple of dollars' difference in price between the two cert types.

The other cheap-end cert seller is GoDaddy, who also offer extremely cheap certs, and in particular they are the only sensibly priced offering that I'm aware of for certs with multiple domains on them (subjectAltName / SAN certs), i.e. for moderate money they will give you a cert for domain abcd.com *and* defg.com on the same cert - this can be useful for mail/web servers which need to answer to multiple domain names (not just a wildcard). Of course there is an amount of backlash against GoDaddy, so choose your politics.

Oh, I did also manage to get through the bureaucracy of startcom.org, and if you are happy with their quirky infrastructure then they offer very inexpensive certs, especially of the more unusual types such as wildcards and multiple SANs. I haven't yet taken a cert from them, but it seems workable now that I got my account created.

Good luck

Ed W
Re: [Dovecot] Dovecot with sasl/imaps/postfix and thunderbird
On 14/03/2013 03:36, Noel wrote: https://www.rapidsslonline.com/ less than $20/year, takes literally 15 minutes from start to having a certificate. Well, maybe 30 minutes the first time when you need to read everything. There are probably dozens of other sites offering similar services; I've used this one several times. Namecheap reseller: $5/year https://www.cheapssls.com/ (I just buy 5 year SSLs at that price... How can you refuse?)
Re: [Dovecot] Best practice for sieve script synchronization
On 28/02/2013 09:54, Michael Grimm wrote: On 2013-02-28 9:58, Oli Schacher wrote: I was wondering how people handle sieve script synchronization in such setups. We came up with a few options for syncing: 1) rsync/unison ~/sieve every x seconds [...] We are tending towards 1) as this seems simplest and most robust solution but before we re-invent the wheel we'd like to hear your thoughts... I am using unison for synchronizing sieve scripts for some years, now. It does what it is supposed to do very well. There are systems like lsyncd which can watch files for changes and call rsync/unison when they change. This gives you near instant sync, but low overhead. Would that help? Ed W
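For the record, a minimal lsyncd configuration for this kind of setup might look something like the sketch below. The paths and the peer hostname are made up; adapt them to your layout.

```lua
-- Hypothetical lsyncd sketch: watch the sieve directory with inotify and
-- push changes to a peer via rsync shortly after they settle.
settings {
    logfile = "/var/log/lsyncd.log",
}
sync {
    default.rsync,
    source = "/var/vmail/sieve",                  -- assumed local sieve dir
    target = "mail2.example.com:/var/vmail/sieve", -- assumed peer
    delay  = 5,                                    -- batch changes for 5s
}
```

lsyncd batches rapid successive changes into a single rsync run, so the overhead stays low even with frequent script edits.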
Re: [Dovecot] Support for PolarSSL?
On 28/02/2013 14:17, Timo Sirainen wrote: On 27.2.2013, at 23.15, Charles Marcus cmar...@media-brokers.com wrote: Just curious if you ever thought about supporting other than just OpenSSL? PolarSSL looks really interesting, has no major dependencies and is very lightweight compared to OpenSSL, GNUTLS or others... https://polarssl.org/ I guess it could be a lot of work, or not, anyway, I'm just curious… I initially tried to support both OpenSSL and GNUTLS, and it was a lot of work. I'm not really looking forward to that again :) But I guess after v2.3 the Dovecot's lib-ssl-iostream API might become stable enough that other backends could be implemented just once without having to keep changing them.. I believe the high profile user of polarssl is the Dutch government who have approved OpenVPN + PolarSSL for use. (The point being that openssl is just too huge to audit for security) Ed W
Re: [Dovecot] Migration from v1 to v2 with hashed directory structure
On 28/02/2013 13:59, Pavel Dimow wrote: Hi, I want to upgrade to version 2 but I would like to solve a long standing problem with 'flat' directory structure ie we have /var/spool/vmail/mydomain.com/u...@mydomain.com and I want a new server with version 2 to have hashed directory structure like /var/spool/vmail/mydomain.com/u/s/user I was wondering if there is some better solution than dir hashing or a way to hash a dir other than first two letters. Also any suggestion how to perform this migration from old to new server with hashing on the fly? My thought would be that unless you have millions of users, such a rename process will take only seconds to minutes? Why not just take the server down for a couple of minutes to do the rename process? If you wanted to be really clever, you could do it live using symlinks to move the dirs, then update the dovecot config? Ed W
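To illustrate, a rough sketch of the offline rename could look like this. The two-letter scheme matches the /u/s/user example in the post; everything else (paths, layout assumptions) is invented, so run it against a copy first, with the server stopped.

```python
# Hypothetical sketch: move flat maildirs like BASE/user@domain into a
# BASE/u/s/user@domain layout (first two characters of the local part).
# Test on a copy of the spool first; this is not a drop-in tool.
from pathlib import Path

def hash_maildirs(base):
    base = Path(base)
    # sorted() materialises the listing before we start moving things
    for d in sorted(base.iterdir()):
        if not d.is_dir() or "@" not in d.name:
            continue
        local = d.name.split("@", 1)[0]
        if len(local) < 2:          # too short to hash into two levels
            continue
        dest = base / local[0] / local[1] / d.name
        dest.parent.mkdir(parents=True, exist_ok=True)
        d.rename(dest)
```

The symlink trick mentioned above would be the same move followed by `ln -s` from the old path to the new one, letting you switch the dovecot config over without downtime.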
Re: [Dovecot] Dovecot SASL Client support?
Hi At the moment Dovecot does not implement an SMTP/LMTP client. This might change, when Timo decides to implement all of the LEMONADE features, which at some point require the IMAP server to edit and send messages on behalf of a (mobile) client. Timo will shed more light on his plans.

Are you thinking about BURL SMTP? Someone from Apple implemented this for postfix some years back, but it doesn't seem to have made it into mainline (I think through oversight and Apple not pushing a second time though...)

It will need client support, but my design would be something like an IMAP extension which works something like SMTP *this* specific message using these login details and these sender/recipient details. That way the mail client can completely generate the mail using any IMAP tricks at its disposal to minimise traffic; once the mail is generated and in some location, eg Sent, Drafts or INBOX as per your preference, it finally instructs the server to push it into the normal SMTP system (for bonus marks it could forward the client's IP using X-FORWARDED-FOR so that the SMTP server can make decisions based on IP address). This design gives you all the benefits of keeping the SMTP system, minimises traffic, allows for storing Sent Items or not as per your preference and avoids the use of magic folders. Now all we need is client support...

Note there is a feature of Courier which does something similar, but it uses magic folders (ideally we want to be able to smtp any message in any folder in order that we can easily implement our preferred storage policies) http://www.courier-mta.org/imap/INSTALL.html#imapsend Ed W
Re: [Dovecot] v2.1 memory usage
On 12/11/2012 04:13, Daniel L. Miller wrote: The tiny bit of Googling I've done tells me GnuTLS seems to be a more standards-compliant implementation, and MAY be safer than OpenSSL. However, as OpenSSL is the de-facto standard used by most Linux programs, acceptance of GnuTLS is quite limited. I've been intrigued by what I've read about it, and took a quick look at enabling support in Dovecot for GnuTLS directly - but while it didn't seem overly heavy at first glance the fact that Timo doesn't want to do it tells me I'm underestimating the complexity. Openssl is a *massive* project and I'm unsure that gnutls is much smaller... We should assume that both are quite scary from a security point of view. Licensing is the main thing which divides them; gnutls is stated as GPL compatible (however, the nominal incompatibility of openssl seems difficult to understand?) OpenVPN integrated with PolarSSL and got Dutch government official approval for the combined package. I think elsewhere it's stated that openssl would not have been approved because something like the codebase was too large to inspect and sign off http://polarssl.org/news?item=0132 I haven't worked with PolarSSL, so no idea, but its massively smaller codebase is likely attractive if you are the kind of person who actually *does* security audits on the software you run in secure situations. Openssl is just a complete swiss army knife of tools! Ed W
Re: [Dovecot] v2.1 memory usage
On 05/11/2012 23:22, Timo Sirainen wrote: On Mon, 2012-11-05 at 23:40 +0200, Timo Sirainen wrote: Anyway, looks like Dovecot can't link OpenSSL to imap/pop3 processes without wasting a ton of memory. In v2.2 I already moved imapc/pop3c backend code to plugins to avoid this. Looks like similar ugliness is needed for other features/backends also that may end up using SSL code. (We were wondering with Stephan what to do about his new HTTP library code that added support for SSL. It would be nice to keep it in the core libdovecot.so, but not if it links with SSL. So looks like we'll need some kind of a http-ssl plugin that is loaded only when needed.) Implemented it a bit easier way that also gets rid of imapc/pop3c plugins and simplifies other things: lib-ssl-iostream now loads OpenSSL dynamically: http://hg.dovecot.org/dovecot-2.2/rev/68d21f872fd7 This also provides a nice abstraction to OpenSSL, making it again possible to implement other backends like GnuTLS or NSS. (Except login process code doesn't use lib-ssl-iostream yet.) Does libtomcrypt implement enough? Ed
Re: [Dovecot] horde sync status ?
On 05/10/2012 15:56, Robert Schetterer wrote: Am 05.10.2012 14:00, schrieb Spyros Tsiolis: In other words, can a user owning a smartphone get his/her e-mails on it apart from the webpage ? horde 5 acts as an ActiveSync server for mail, calendar, addressbook, tasks, notes; SyncML with the Funambol app on the smartphone side for calendar, addressbook, tasks, notes; roadmap 5.1 is planned as a card/caldav server http://wiki.horde.org/ActiveSync Also see SOGo (and ownCloud), plus the SOGoSync connector. This is a developing area (at last) Ed W
Re: [Dovecot] 76Gb to 146Gb
This is one of those questions which is almost too easy if you are familiar with Linux. Trying not to sound like a d*ck, but is it an option to rent someone to help with admin jobs? For example, were it me then I would probably have set up some partitioning scheme with separate partitions for data and operating system? Possibly also using LVM?

You have several options; mainly the choice of filesystem will dictate here, but quite possibly you can:

1) Pull the drives one by one and rebuild the raid after each. Keep the old drives, since you can technically roll back onto them. Expand the partitions (scary without LVM) and then expand the filesystem on the partitions

2) Boot from a DVD/Flash on your favourite rescue distro (I like SystemRescueCd). Create the new raid, copy the old to the new, remove the old drives, reboot from new. Possibly taking the time to repartition and move some data around while you do it (remember to update fstab)

Both are fairly simple if you have done it once, but it would be well worth finding someone either local or who will log in via remote control and support you?

Final thought: for the size of drives you are looking at, SSD drives are relatively inexpensive and likely comparable with the high-end drives you are probably looking to buy? For 40 users I would hazard a guess you likely would be happy with inexpensive low-end drives, but certainly a couple of small SSDs will blow away a spinning disk and give you a decent upgrade...

Good luck Ed W

On 24/09/2012 18:42, Spyros Tsiolis wrote: Hello all, I have a DL360 G4 1U server that does a wonderful job with dovecot horde, Xmail and OpenLDAP for a company, serving about 40 accounts. The machine is wonderful. I am very happy with it. However, I am running out of disk space. It has two times 76Gb drives in RAID1 (disk mirroring) and the capacity has reached 82%. I am starting to get nervous.
Does anyone know of a painless way to migrate the entire contents directly to another pair of 146Gb SCSI RAID1 disks? I thought of downtime and using clonezilla, but my last experience with it was questionable. I remember having problems declaring disk re-sizing from the smaller capacity drives to the larger ones.

CentOS 5.5, manual install of: MySQL, XMail (pop3/smtp), ASSP (anti-spam), Apache / LAMP and last but by no means least: Dovecot.

dovecot -n:
# 1.2.16: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-194.17.4.el5 i686 CentOS release 5.5 (Final) ext3
base_dir: /var/run/dovecot/
log_path: /var/log/dovecot/dovecot.log
info_log_path: /var/log/dovecot/dovecot-info.log
ssl_parameters_regenerate: 48
verbose_ssl: yes
login_dir: /var/run/dovecot//login
login_executable: /usr/local/dovecot/libexec/dovecot/imap-login
login_greeting: * Dovecot ready *
login_max_processes_count: 96
mail_location: maildir:/var/MailRoot/domains/%d/%n/Maildir
mail_plugins: zlib
auth default:
  verbose: yes
  debug: yes
  debug_passwords: yes
  passdb:
    driver: passwd-file
    args: /etc/dovecot/passwd
  passdb:
    driver: pam
  userdb:
    driver: static
    args: uid=vmail gid=vmail home=/home/vmail/%u
  userdb:
    driver: passwd

Any help would be appreciated or any ideas you might have. Regards, spyros I merely function as a channel that filters music through the chaos of noise - Vangelis
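For what it's worth, option 1 above (pulling the drives one by one) in concrete terms for an md RAID1 might look roughly like the command sketch below. The device names are examples only; check /proc/mdstat at every step and keep backups.

```shell
# Hypothetical command sketch for growing an md RAID1 in place:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# ...swap the physical 76Gb disk for a 146Gb one, partition it...
mdadm /dev/md0 --add /dev/sdb1       # then wait for the resync to finish
# repeat for the second disk, then grow the array and the filesystem:
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0                   # ext3 in this setup
```

Hardware RAID controllers (as on a DL360) have their own expansion procedure instead, so this only applies if you let Linux do the mirroring.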
Re: [Dovecot] 76Gb to 146Gb
On 24/09/2012 19:07, Ed W wrote: This is one of those questions which is almost too easy if you are familiar with Linux. Trying not to sound like a d*ck, but is it an option to rent someone to help with admin jobs? For example, were it me then I would probably have setup some partitioning scheme with separate partitions for data and operating system? Possibly also using LVM? That came out wrong... What I meant to say was something more like if you were to employ someone locally they would probably give you a whole bunch of ideas on how you could adjust the setup of the server to be more future proof. It would be worth working with someone just to get that right. For example, here are some ideas that occur to me that you could use ... Sorry, should re-read my words before hitting send Ed
Re: [Dovecot] Anyone else seeing lots of random duplicate messages???
On 05/09/2012 11:58, Charles Marcus wrote: I know, it is on my ToDo list... we only just recently migrated this server to Dovecot, and I've had my plate full with other issues, which are now mostly resolved, so I'm about ready to circle back and finish up (installing SOGo, enabling sieve, etc), I have recently noticed owncloud (even has an ebuild for it). Have you re-evaluated roundcube+owncloud vs SOGo for a dav calendar/contacts solution? Ed
Re: [Dovecot] Trouble implementing Antispam plug-in for Dovecot
On 06/09/2012 18:56, Ben Johnson wrote: On 9/6/2012 6:10 AM, Charles Marcus wrote: On 2012-09-05 6:20 PM, Ben Johnson b...@indietorrent.org wrote: My configuration is Dovecot (1.2.9) + Sieve + SpamAssassin on Ubuntu 10.04. 1.2.9 is really old... you really need to upgrade to a recent/stable version. Thanks, Charles. I do see your point. One of the challenges we face in this regard is that we're using a Long-Term-Support version of Ubuntu (10.04) and 1.2.9 is the latest package in the OS's repository. That said, we could upgrade manually, but this is a production server on which downtime must be minimized, and we all know how unexpected issues arise during installation (even when the procedure is tested in a closely equivalent development environment).

I personally use (lightweight) virtualisation on any new machine; I really don't see any reason why NOT to. I would typically also set up my mounts such that the operating system is separate from the data. This makes it easy to upgrade the OS/services, but without touching the data (test before/after on the same data for example).

So in my situation I would boot a fairly small (gentoo in my case) virtual environment that runs only dovecot + postfix; it mounts the mail spools separately - I say boot, but because I'm using linux-vservers, it's really a fancy chroot, and so the instance will start in 2-3 seconds (restarts are similarly near instant). I would upgrade by cloning this installation, upgrading it, testing it to bits, and then to make it live basically you swap this machine for the live machine. There are various ways it could be made near seamless, but in my situation I can bear a couple of seconds whilst I literally restart the machine.

Similarly I segregate all my services into a dozen or so virtual machines, so DNS has its own machine and so does logging, databases, almost every webservice gets its own virtual environment, etc.
You could use a full blown vmware/kvm/etc if that floats your boat better, but the point remains it's so trivial to install, makes upgrades trivial and massively decreases your downtime risk that it's very hard to find a reason NOT to do it... I haven't tried too hard to keep my instances tiny, so each is probably around 400-600MB in my case. However, if it were important this could easily be reduced to 10-100s MB each using various hardlink features. As you can see it's easy to snapshot a whole machine to manage upgrades/backups, etc. This is more about infrastructure, but I honestly can't get over how many people are sitting on their hands shackled by "I'm on Debian xxx and I can't install any software newer than 5 years old"... It's so easy to escape from that trap...!! Good luck Ed W
Re: [Dovecot] v2.2 status update: IMAP NOTIFY extension and more
On 16/08/2012 08:02, Cor Bosman wrote: I'm also considering implementing an SMTP submission server, which works only as a proxy to the real SMTP server. The benefits of it would mainly be: What would be really cool is if you also kept statistics on certain metrics, like how many emails a specific sender has sent. If this is done right, it could become a centralised spam sender back-off system over multiple SMTP servers. Maybe something for the future. We now pay for a commercial system that implements this. Postfix allows you to write policy agents very simply. I wrote a small perl utility which uses a database to count the number of emails a user has sent in the last 1 and 24 hours. Based on that we throttle users (I have some fudging for recipients per email also). If you like the idea then it's about 10 lines of perl (+ a decent chunk of boilerplate). Ed
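A sketch of such a policy service, in Python rather than the perl the poster used, might look like the following. The LIMIT/WINDOW numbers are invented and the counts live in memory rather than SQL; only the wire protocol (Postfix's check_policy_service "name=value" request / "action=..." reply) is real.

```python
# Hypothetical sketch of a Postfix policy service that rate-limits senders.
# Postfix sends "name=value" lines ended by a blank line, and expects an
# "action=..." reply followed by a blank line.
import sys
import time
from collections import defaultdict

WINDOW = 3600               # look-back window in seconds (assumed: 1 hour)
LIMIT = 100                 # max messages per sender per window (assumed)
_sent = defaultdict(list)   # sender -> timestamps of recent messages

def handle(attrs):
    """Decide on one policy request (a dict of Postfix attributes)."""
    sender = attrs.get("sasl_username") or attrs.get("sender", "")
    now = time.time()
    _sent[sender] = [t for t in _sent[sender] if now - t < WINDOW]
    if len(_sent[sender]) >= LIMIT:
        return "defer_if_permit Rate limit exceeded, try again later"
    _sent[sender].append(now)
    return "dunno"           # no opinion: let other restrictions decide

def serve(stream=sys.stdin):
    """Speak the policy protocol on a stream (run via spawn(8) in master.cf)."""
    attrs = {}
    for line in stream:
        line = line.rstrip("\n")
        if not line:
            print("action=%s\n" % handle(attrs), flush=True)
            attrs = {}
        elif "=" in line:
            key, value = line.split("=", 1)
            attrs[key] = value
```

It would be wired into Postfix with a spawn entry in master.cf and a check_policy_service restriction; the details vary by setup.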
Re: [Dovecot] v2.2 status update: IMAP NOTIFY extension and more
On 16/08/2012 12:24, Charles Marcus wrote: On 2012-08-16 7:12 AM, Ed W li...@wildgooses.com wrote: My opinion is that this is very easy to implement in at least Postfix and probably other servers, hence I would suggest this is a function for the MTA, not for the Dovecot relay? Well, true enough for simpler installations, but integrating something like this in dovecot that can be applied across large dovecot director based farms might be a good thing. Actually, maybe (and maybe not, I honestly haven't thought this through at all, and this might be a really dumb idea), instead of specific support for this one feature, I wonder if it would make more sense to actually build in support for a policy server (ie, amavisd-new) like postfix has... I'm really missing the key point here? The proposal was (I think?): Have Dovecot accept emails and feed them to the MTA (eg Postfix) This means you have access to all the MTA features (hence why I was pointing out these features exist today). Further, by centralising this function you don't need to duplicate the functionality depending on whether an email was sent via Dovecot or SMTP. So I don't see that policy servers are necessary in Dovecot for this particular requirement - I think most ideas we can come up with would benefit from delegating policy to the MTA so that it's centralised? I might be missing the point, so see above for my understanding of the problem? Ed W
Re: [Dovecot] v2.2 status update: IMAP NOTIFY extension and more
On 13/08/2012 19:27, Patrick Ben Koetter wrote: * Timo Sirainen dovecot@dovecot.org: I'm also considering implementing an SMTP submission server, which works only as a proxy to the real SMTP server. The benefits of it would mainly be: * It could support BURL command and other extensions required by LEMONADE. The real SMTP server would see only regular DATA commands. * Would make SMTP AUTH easy to implement regardless of what the real SMTP server is. Nice move! Especially since I recall Wietse being not very inclined to implement anything alike. Annoyingly Apple implemented BURL for postfix, submitted the patch, but Wietse declined it (for reasons I would need to remind myself of - I think some implementation concerns, but mainly who is using it, let's see the clients first) I would also be very interested to see BURL support appear. It appears to offer bandwidth reductions (my customers are all on slow dialup links), and at least some apple clients (iOS?) support it Cheers Ed W
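For reference, a BURL submission (RFC 4468) looks roughly like the exchange below. The client first obtains a URLAUTH token for the stored message via IMAP GENURLAUTH; all the names and the token here are illustrative only.

```
C: MAIL FROM:<alice@example.com>
S: 250 2.1.0 Ok
C: RCPT TO:<bob@example.org>
S: 250 2.1.5 Ok
C: BURL imap://alice@imap.example.com/Drafts;uidvalidity=1234/;uid=20;urlauth=submit+alice:internal:91354a... LAST
S: 250 2.0.0 Ok: queued
```

The submission server fetches the message body from the IMAP server itself, so the client never uploads it twice - that's where the bandwidth saving on slow links comes from.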
Re: [Dovecot] IMAP IDLE - iPhone?
On 10/08/2012 10:25, Timo Sirainen wrote: how does help me save battery if i have a folder-structure maintained by sieve if i do not get my new mails? If you open 10 connections to IMAP server and will IDLE on them - your phone will wake up to reply for ping in every of that 10 connections. Imagine if there will 100 folders? Like mentioned previously in this thread, you can disable the pings in Dovecot. And even when they happen Dovecot makes them happen at the same time. So I think the power usage difference between 1 connection and 100 connections isn't much.

The battery consumption problem seems common, but understanding of it is poor... The situation is simply:

- Waking up a 3G radio is expensive on power
- So prefer to do it less frequently and do a chunk of stuff, rather than doing a small amount of data quite frequently
- Every 30 mins is only 48 times a day. Every 15 seconds is massively more
- Different 3G networks have different parameters set which will dramatically affect battery life, ie they wait longer/shorter before allowing the radio to go idle once woken up. I don't know a good online resource to see these settings; my old Nokia had a utility to investigate things...
- Firewalls impose challenges on being silent for 30 mins at a time and may drop any NAT mappings
- The 3G network will almost certainly have a NAT in the way which guarantees you have a (probably very short) NAT timeout (perhaps 10 mins or perhaps less)
- Then there is TCP keepalive. Does Dovecot enable these? (Sorry, I should look in the code...). However, applications which enable it (eg optional in SSH) will trigger a network packet every 75 seconds or so by default (I think)

As Timo says, Dovecot tries to be clever and coalesce packets from checking multiple folders, but from memory there are limitations on this if you have multiple *accounts*? I think the hash is per email address and per IP?
But of course if your emails turn up every few seconds, then you will be triggering wakeups every few seconds also. I think if you tune things with that in mind, it's very possible to get very low battery usage. Using tcpdump on your mobile client is a great help for tuning. Basically every stray packet is a killer for battery - hunt them down. Cheers Ed W
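On the application side, enabling the TCP keepalive mentioned above looks something like this Python sketch. The interval values are arbitrary examples (not recommendations), and the TCP_KEEP* socket options are Linux-specific, hence the guards.

```python
# Sketch: turn on TCP keepalive so NAT mappings see periodic traffic.
# TCP_KEEPIDLE/KEEPINTVL/KEEPCNT are Linux-only; other platforms fall
# back to the kernel defaults for probe timing.
import socket

def enable_keepalive(sock, idle=600, interval=60, count=5):
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):
        # seconds of idleness before the first keepalive probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        # seconds between subsequent probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        # unanswered probes before the connection is declared dead
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
```

Note the battery trade-off from the list above: each probe wakes the radio, so on mobile you want the idle time as long as the NAT in the path will tolerate.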
Re: [Dovecot] Just trying to make dovecot work.
On 06/08/2012 02:35, Peter Snow wrote: Well you can continue to kid yourselves that the documentation is good if you like. The facts say differently. For example, I visit http://wiki.dovecot.org/MainConfig for help with the main config and at the top of the page it tells me that this page is for version 1.x, so I click the link to view the page for 2.x, which takes me to a page saying that the page I want has yet to be created. I therefore have no option but to refer to the version 1.x documentation. I copy mechanisms = plain from it Google is *such* a useful tool.. http://lmgtfy.com/?q=dovecot+%22mechanisms+plain%22 but when I restart dovecot, it fails, telling me that it is not recognized! No, probably it says something different. Please quote the error message, not your interpretation of the error message? I noticed that also and did indeed follow many of them. Many of them though are for version 1.x but don't say so. There are many useful differences between 1.x and 2.x, but it's a gradual evolution, not a big change. The configuration layout did change a fair amount between 1.x and 2.x, in that it's now stored in multiple files rather than a few big files, but for your concern such a change is relatively minor and the configuration options are largely the same. By the way, I've now got it running. It wasn't failing due to the user being used to run the processes. It was due to misconfiguration of the way that the virtual users were setup, which in the end I managed to fix by interrogating a server with a working implementation (albeit ver 1.x) which was similar to what I needed and copying parts of its config. Please always post details of your problem and solution - us technical folks learn from people's mistakes, but it's not possible to learn and make things better without knowing what your problem and eventual solution were? Additionally note that this is an opensource project and the documentation is written by people like yourself.
Please consider clarifying whatever original document put you on the wrong track? Although mutt now connects to it fine, roundcube doesn't, but don't worry. I'm not planning to bother you further. Well, IMAP is just IMAP no matter which server you are using, so don't treat this as some big black box that you can't open up and inspect. IMAP is a plain text protocol and it shouldn't scare a technical person to debug things. roundcube is also an extremely flexible beast and you will need to get certain key settings correct before it connects correctly; it can feel very brittle in that there aren't that many settings to get right, but if any are wrong you will get major breakage. Good luck Ed W P.S. You came here with all guns blazing and seems like you are going to leave the same way? Why not try a more softly softly approach?
Re: [Dovecot] Just trying to make dovecot work.
On 06/08/2012 08:57, Oon-Ee Ng wrote: On Mon, Aug 6, 2012 at 3:48 PM, Ed W li...@wildgooses.com wrote: P.S. You came here with all guns blazing and seems like you are going to leave the same way? Why not try a more softly softly approach? Because the 'customer' has right to throw his weight around =). Especially after paying such a large amount of money for the product. Let's try and avoid chasing folks away. Ed
Re: [Dovecot] Just trying to make dovecot work.
On 05/08/2012 06:22, Peter Snow wrote: Hi, I have to say that Dovecot is certainly the most challenging piece of software I've ever had the pleasure of setting up (due mainly to the reams of largely unhelpful documentation). After 36 almost non-stop hours reading and trying, I finally end up here. :-) I really would appreciate your help - and many thanks in advance! Phew, haven't you set yourself up for a hostile response..? It's only an opinion, but I would say that the Dovecot docs are rather helpful and thorough? Also dovecot ships with an almost working config out of the box, really you only need to adjust a couple of settings to achieve most setups. OK, reading your log files, I think this is probably the clue? /var/log/dovecot.log (showing unsuccessful login) *** Aug 04 21:32:41 IMAP(peter): Error: user peter: Couldn't drop privileges: User is missing UID (see mail_uid setting) Aug 04 21:32:41 IMAP(peter): Error: Internal error occurred. Refer to server log for more information. *** I don't use that auth method so I don't want to give you a definitive suggestion, but we can certainly use google to get some ideas: http://lmgtfy.com/?q=dovecot+mail_uid+ Third link down seems to cover your question. Basically says you need to define the setting listed above, but also why. Note, I think it's easy to level critique against dovecot auth, but if you look for a few moments longer you will see that you are probably just criticising flexibility. You can use a very wide array of database types to store your auth information and with that flexibility comes the requirement to actually define your specific choice. Some people run a multi-tenanted system and like to be able to run each user under their own uid, hence that being flexible. Others want to use LDAP or a database to store auth info. It's even possible to use both at the same time I believe, or to look up users in one db and passwords in another.
Note, I don't know your requirements, but you might want to look at some kind of database for your user storage if you have more than a fairly simple installation? Either LDAP or SQL is likely to give you more flexibility than a flat passwd file, but that's just a thought. Finally note that there are literally dozens of "how to install dovecot" guides on the internet that will help you get a working setup with various auth db choices. Once you understand the big picture using one of those guides you will be able to customise things to a very specific situation Good luck Ed W
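For a simple vmail setup, the "User is missing UID (see mail_uid setting)" error discussed above usually comes down to settings along these lines. This is a sketch only, using dovecot 2.x syntax; the vmail user and home path are assumptions, not a drop-in config.

```
# Hypothetical dovecot.conf fragment: give every virtual user the same
# system uid/gid and a static home directory.
mail_uid = vmail
mail_gid = vmail

userdb {
  driver = static
  args = uid=vmail gid=vmail home=/home/vmail/%u
}
```

Equivalently, the uid/gid can be returned per-user by the userdb lookup itself (passwd-file, LDAP, SQL) instead of set globally; that's the flexibility being described above.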
Re: [Dovecot] Remove leading and trailing spaces from folder names?
On 25/07/2012 11:59, Timo Sirainen wrote: On 25.7.2012, at 13.54, Ralf Hildebrandt wrote: The way I'd do this is to just do doveadm mailbox list, put the strings through some regexps and doveadm rename if necessary. Repeat for all users. Yes, something along those lines. It's just that I find it hard to craft a regexp which does that. Maybe after the vacation. echo foo/ b a r / baz / sup | perl -pe 's, +/,/,g; s,/ +,/,g; s/^ +//; s/ +$//' Bet you can't pronounce all of the above ;-) Stack overflow on doing it in bash (probably would use the perl regexp above though, but only because I understand perl regexps better) http://stackoverflow.com/questions/369758/how-to-trim-whitespace-from-bash-variable Ed W
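The same transformation as the perl one-liner above, per path component, in a form that's easy to test before feeding the results to doveadm rename (a Python sketch; the separator is assumed to be '/'):

```python
# Trim leading/trailing spaces from each '/'-separated folder component,
# mirroring the perl regexps in the one-liner above.
def trim_components(name, sep="/"):
    return sep.join(part.strip() for part in name.split(sep))
```

A wrapper script would list folders with doveadm mailbox list, compare each name against trim_components(name), and issue a rename where they differ.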
Re: [Dovecot] Remove leading and trailing spaces from folder names?
On 19/07/2012 15:07, Ralf Hildebrandt wrote: * Ed W li...@wildgooses.com: On 19/07/2012 13:45, Ralf Hildebrandt wrote: Hi! Anybody got a doveadm script which can remove leading and trailing spaces from folder names? Right now we're migrating mailboxes from dovecot to Exchange, and Exchange cannot handle leading and trailing spaces in Folder names. Caveat that it might cause some weird symptoms with clients, why not attack the dovecot mail backend and rename folders + sed the subscription files? That's what I asked for: Anybody got a doveadm script which can remove leading and trailing spaces from folder names? Oh, sorry. Why doveadm though? Why not attack the filesystem directly? Ed
Re: [Dovecot] Remove leading and trailing spaces from folder names?
On 19/07/2012 13:45, Ralf Hildebrandt wrote: Hi! Anybody got a doveadm script which can remove leading and trailing spaces from folder names? Right now we're migrating mailboxes from dovecot to Exchange, and Exchange cannot handle leading and trailing spaces in Folder names. Caveat that it might cause some weird symptoms with clients, why not attack the dovecot mail backend and rename folders + sed the subscription files? Something like find | rename Good luck Ed W
Re: [Dovecot] bcrypt availability
On 16/07/2012 11:05, Noel Butler wrote: On Sun, 2012-07-15 at 11:32 -0700, Robin wrote: Indeed. What I have seen is a great deal of variation in the configuration (/etc/login.defs or your distro's equivalent) in terms of making use of such things. I don't see any added value to bcrypt over iterated SHA-512, really, and bcrypt and scrypt are password hashes - they are designed to be slow; md5/sha/sha2 are cryptographic hashes - they are designed to be fast. But the hash under discussion is sha256crypt, which is a slow hash built using sha256 (there is also an sha512crypt). However, if you keep your database secure, yes, this means using competent coders, then it matters little what method you use. Yes, but the basis for our discussion is that decent companies with a security budget and reputation to protect have made mistakes; it would be foolish to assume that all our own machines are so much better... The topic is about assuming something goes wrong and a compromise occurs, ie security in depth.

Today the speeds on single cpus for bcrypt/sha512crypt are in the under-1,000-checks-per-second kind of range, so given 4-8 cores per processor you end up with cracking in the under-10,000-checks-per-second kind of range. At present GPUs can test sha512 approx 5x faster than a multicore processor using the latest john the ripper code http://openwall.info/wiki/john/OpenCL-SHA-512 At present bcrypt on GPU is tested at around the same speed as a multicore processor, but a) it's often easier to add multiple GPUs to build a distributed cracker, b) there are estimated performance improvements possible with newer GPUs (bcrypt tries to muddle memory a lot to slow things down, but it doesn't actually do enough to prevent implementation on GPUs).
A rough estimate suggested that an upper bound of up to a 10x performance improvement might be possible with the bcrypt on GPU code (probably less, that is a straight instruction-for-instruction estimate). So at present it seems like sha512crypt is slightly weaker than bcrypt; work will continue on sha256 on GPU in particular (bitcoin...) and can only get faster, and possibly this work will benefit sha512 cracking speeds also. However, likely also bcrypt cracking speeds can be improved to within an order of magnitude of sha512, and so they are only a small constant multiple different in performance (change your work factor to make them equivalent...). So my opinion has gone back to being satisfied with sha512crypt. Unfortunately though sha512crypt with default 5,000 rounds is still being broken at rates of 10,000 checks a second on latest GPUs, and I personally had a lot of success in the 1990s with dictionaries and breaking original DES crypt at 200 checks a second. I think if possible it would be desirable to increase the work factor to something higher than the default; 10,000 checks a second will give up a lot of real user passwords in a reasonable length of time (real users are going to have simple derivatives of dictionary words) Good luck Ed W
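The work-factor point can be illustrated with the stdlib. This sketch uses PBKDF2-HMAC-SHA512 as a stand-in for sha512crypt's rounds parameter - not the same construction, just the same idea that raising the iteration count multiplies the attacker's per-guess cost by the same factor.

```python
# Iterated hashing: each guess costs `rounds` underlying hash operations,
# so moving from 5,000 to 100,000 rounds makes cracking ~20x slower.
# Values here are illustrative only, not a recommendation.
import hashlib

def slow_hash(password: bytes, salt: bytes, rounds: int = 5000) -> bytes:
    return hashlib.pbkdf2_hmac("sha512", password, salt, rounds)
```

In sha512crypt itself the equivalent knob is the $6$rounds=N$ prefix on the stored hash, which glibc's crypt() honours.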
Re: [Dovecot] Last login datetime on accounts
On 16/07/2012 12:01, Robert Schetterer wrote: On 16.07.2012 12:48, Charles Marcus wrote: On 2012-07-16 2:45 AM, Robert Schetterer rob...@schetterer.org wrote: i have running touch with 3000 users; i don't see much overhead. anyway it's true, it's not very elegant; perhaps you may write some daily cron bash find script looking at the latest timestamp of files in new/ within the maildir, which of course is not the same as the last_login script, but may be good enough for you Why not a simple post-login script that updates your userDB? http://wiki2.dovecot.org/PostLoginScripting i may fail, but in the orig question PostLoginScripting is already used, but with touching a last_login file, so updating some db with the same mechanism may not be much better; i thought some other way than PostLoginScripting was being searched for I have a similar desire, I would also like to log logout time. I think for lowest load you would want the post-login script to talk to some long running daemon process, which in turn caches logins and does sensible batching of updates. Timo commented that a plugin would be a good way to do all of the above: http://dovecot.org/patches/2.2/imap-logout-plugin.c I haven't had capacity to build anything yet... Cheers Ed W
Re: [Dovecot] bcrypt availability
On 7/12/12, Nick Edwards nick.z.edwa...@gmail.com wrote: Dear Timo, Do you intend to introduce bcrypt into the built in password schemes? In light of all these hacks lately many larger companies appear to be moving this way, we are looking at it too, but dovecot will then be the weakest link in the database security. So, are you planning on this and if so what sort of timeframe / version would you expect it to be in beta ? Nik Interestingly, there doesn't seem to be so much difference between iterated sha-512 (sha512crypt) and bcrypt. Based on looking at latest john the ripper results (although I'm a bit confused because they don't seem to quote the baseline results using the normal default number of rounds?) So I think right now, many/most modern glibc are shipping with sha256/512crypt implementations (recently uclibc also added this). A small number ship with bcrypt (I have a patch for uclibc), which would mean that dovecot supported bcrypt out of the box. For everything else I guess you want a small application and use the checkpass dovecot method to do external checking? You could for example implement scrypt checking this way (although I think there is a risk of running out of server ram if you have many simultaneous logins..?) I previously thought I wanted bcrypt, but after some consideration I believe sha256/512crypt is likely sufficient for reasonable security Cheers Ed W
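The external-checker idea for scrypt can be sketched with Python's stdlib `hashlib.scrypt` (OpenSSL-backed). The parameters and record format below are illustrative, not any real checkpassword protocol; note that with n=16384, r=8 each verification touches roughly 128 * n * r bytes (~16 MB), which is exactly the per-login RAM pressure worried about above.

```python
import hashlib
import hmac
import os

# Illustrative scrypt parameters; ~16 MB of memory per verification.
N, R, P = 16384, 8, 1

def make_record(password: bytes) -> tuple[bytes, bytes]:
    # Create the stored (salt, hash) pair for a new password.
    salt = os.urandom(16)
    return salt, hashlib.scrypt(password, salt=salt, n=N, r=R, p=P)

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.scrypt(password, salt=salt, n=N, r=R, p=P)
    return hmac.compare_digest(candidate, stored)

salt, stored = make_record(b"correct horse")
assert verify(b"correct horse", salt, stored)
assert not verify(b"wrong guess", salt, stored)
```

A checkpassword-style wrapper would read the credentials from the file descriptor Dovecot hands it and exit 0/1 based on `verify`; that plumbing is omitted here.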
Re: [Dovecot] Howto add another disk storage
On 05/07/2012 11:33, Charles Marcus wrote: On 2012-07-05 5:45 AM, Kaya Saman kayasa...@gmail.com wrote: FreeBSD 8.2 x64 running on VMware Hi Kaya, Do you (or anyone else) know of any decent VMWare images (appliance) of current version of FreeBSD? I've been debating on switching from Gentoo to FreeBSD for a while now, and would love to find a ready made appliance (just basic uncustomized server install) that I could start with... We use a Gentoo host + Linux-vservers (+grsec/pax) and are very satisfied. Linux-vservers gives you something similar to jails, although it's meant to look a little more like full virtualisation than jails does (bear in mind I don't have jails experience though) There are plenty of tools included with linux-vservers to clone, build and maintain your individual machines. It's a complete enough virtualisation that you can, say, boot a centos image under your gentoo host (or whatever). It's also extremely lightweight, so there is almost zero overhead. It's a little weak on areas where you need direct access to hardware, but there are generally acceptable workarounds - also you can't run completely different operating systems since it's not a full virtualisation solution One nice benefit is that all images are just a directory containing your linux installation, so it's very easy to backup/snapshot/restore/drop in and fix something you bolloxed up/clone to a new machine. Just my 2p. Cheers Ed W
Re: [Dovecot] RAID1+md concat+XFS as mailstorage
On 29/06/2012 12:15, Charles Marcus wrote: On 2012-06-28 4:35 PM, Ed W li...@wildgooses.com wrote: On 28/06/2012 17:54, Charles Marcus wrote: RAID10 also statistically has a much better chance of surviving a multi drive failure than RAID5 or 6, because it will only die if two drives in the same pair fail, and only then if the second one fails before the hot spare is rebuilt. Actually this turns out to be incorrect... Curious, but there you go! Depends on what you mean exactly by 'incorrect'... I'm sorry, this wasn't meant to be an attack on you, I thought I was pointing out what is now fairly obvious stuff, but it's only recently that the maths has been popularised by the common blogs on the interwebs. Whilst I guess not everyone read the flurry of blog articles about this last year, I think it's due to be repeated with increasing frequency as we go forward: The most recent article which prompted all of the above is I think this one: http://queue.acm.org/detail.cfm?id=1670144 More here (BAARF = Battle Against Any Raid Five/Four): http://www.miracleas.com/BAARF/ There are some badly phrased ZDnet articles also if you google "raid 5 stops working in 2009" Intel have a whitepaper, "Intelligent RAID 6 Theory Overview And Implementation", which says: RAID 5 systems are commonly deployed for data protection in most business environments. However, RAID 5 systems only tolerate a single drive failure, and the probability of encountering latent defects [i.e. UREs, among other problems] of drives approaches 100 percent as disk capacity and array width increase. The upshot is that: - Drives often fail slowly rather than bang/dead - You will only scrub the array at some frequency F, which means that faults can develop since the last scrub (good on you if you actually remembered to set an automatic regular scrub...) 
- Once you decide to pull a disk for some reason to replace it, then with RAID1/5 (raid1 is a kind of degenerate form of raid5) you are exposed in that if a *second* error is detected during the rebuild then you are inconsistent and have no way to correctly rebuild your entire array - My experience is that linux-raid will stop the rebuild if a second error is detected during rebuild, but with some understanding it's possible to proceed (obviously understanding that data loss has therefore occurred). However, some hardware controllers will kick out the whole array if a rebuild error is discovered- some will not, but given the probability of a second error being discovered during rebuild is significantly non zero, it's worth worrying over this and figuring out what you do if it happens... I'm fairly sure that you do not mean that my comment that 'having a hot spare is good' is incorrect, Well, hotspare seems like a good idea, but the point is that the situation will be that you have lost parity protection. At that point you effectively run a disk scrub to rebuild the array. The probability of discovering a second error somewhere on your remaining array is non zero and hence your array has lost data. So it's not about how quickly you get the spare in, so much as the significant probability that you have two drives with errors, but only one drive of protection Raid6 increases this protection *quite substantially*, because if a second error is found on a stripe, then you still haven't lost data. However, a *third* error on a single stripe will lose data. The bad news: Estimates suggest that drive sizes will become large enough that RAID6 is insufficient to give a reasonable probability of successful repair of a single failed disk in around 7+ years time. 
So at that point there becomes a significant probability that the single failed disk cannot be successfully replaced in a RAID6 array because of the high probability of *two* additional defects being discovered on the same stripe of the remaining array. Therefore many folks are requesting 3-disk parity to be implemented (RAID7?) 'Sometimes'... '...under some circumstances...' - hey, it's all a crapshoot anyway, all you can do is try to make sure the dice aren't loaded against you. And to be clear - RAID5/RAID1 has a very significant probability that once your first disk has failed, in the process of replacing that disk you will discover an unrecoverable error on your remaining drive and hence you have lost some data... Also, modern enterprise SAS drives and RAID controllers do have hardware based algorithms to protect data integrity (much better than consumer grade drives at least). I can't categorically disagree, but I would check your claims carefully. My understanding is that there is minimal additional protection from enterprise stuff, and by that I'm thinking of quality gear that I can buy from the likes of newegg/ebuyer, not the custom SAN products from certain big name providers. It seems possible that the big name SAN providers implement additional protection.
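To put a number on the rebuild-failure risk discussed above: the back-of-the-envelope model is the probability of hitting at least one unrecoverable read error (URE) while re-reading every surviving bit during a rebuild. The 1e-14 errors-per-bit figure is the rate commonly quoted on consumer drive datasheets (enterprise drives often claim 1e-15); the 12 TB array size is just an example.

```python
def p_rebuild_hits_ure(bytes_to_read: float, ure_per_bit: float = 1e-14) -> float:
    # Probability of at least one URE when reading this many bytes,
    # treating each bit as an independent trial at the datasheet rate.
    bits = bytes_to_read * 8
    return 1 - (1 - ure_per_bit) ** bits

# Rebuilding a RAID1/RAID5 set that must re-read 12 TB of surviving data:
p = p_rebuild_hits_ure(12e12)
print(f"{p:.0%}")  # roughly a 60% chance the rebuild trips over a URE
```

Under the same crude model, RAID6 survives that first URE because a second parity remains, which is why its practical rebuild reliability is so much better despite the modest extra redundancy.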
Re: [Dovecot] RAID1+md concat+XFS as mailstorage
On 28/06/2012 13:01, Костырев Александр Алексеевич wrote: Hello! somewhere in the maillist I've seen RAID1+md concat+XFS being promoted as mailstorage. Does anybody in here actually use this setup? I've decided to give it a try, but ended up with not being able to recover any data off the surviving pairs of the linear array when the *first* of the raid1 pairs went down. This is the configuration endorsed by Stan Hoeppner. His description of the benefits is quite compelling, but real-world feedback is valuable. Note that you wouldn't get anything back from a similar failure of a RAID10 array either (unless we are talking temporary removal and re-insertion?) Ed W
Re: [Dovecot] RAID1+md concat+XFS as mailstorage
On 28/06/2012 17:54, Charles Marcus wrote: On 2012-06-28 12:20 PM, Ed W li...@wildgooses.com wrote: Bad things are going to happen if you lose a complete chunk of your filesystem. I think the current state of the world is that you should assume that realistically you will be looking to your backups if you lose the wrong 2 disks in a raid1 or raid10 array. Which is a very good reason to have at least one hot spare in any RAID setup, if not 2. RAID10 also statistically has a much better chance of surviving a multi drive failure than RAID5 or 6, because it will only die if two drives in the same pair fail, and only then if the second one fails before the hot spare is rebuilt. Actually this turns out to be incorrect... Curious, but there you go! Search google for a recent very helpful exposé on this. Basically RAID10 can sometimes tolerate multi-drive failure, but on average raid6 appears less likely to trash your data, plus under some circumstances it better survives recovering from a single failed disk in practice. The executive summary is something like: when raid5 fails, because at that point you effectively do a raid scrub, you tend to suddenly notice a bunch of other hidden problems which were lurking and your rebuild fails (this happened to me...). RAID1 has no better bad block detection than assuming the non-bad disk is perfect (so won't spot latent unscrubbed errors), and again if you hit a bad block during the rebuild you lose the whole of your mirrored pair. So the vulnerability is not the first failed disk, but discovering subsequent problems during the rebuild. This certainly correlates with my (admittedly limited) experiences. Disk array scrubbing on a regular basis seems like a mandatory requirement (but how many people do..?) 
to have any chance of actually repairing a failing raid1/5 array. Digressing, but it occurs to me that there would be a potentially large performance improvement if spinning disks could do a read/rewrite cycle with the disk only moving a minimal distance (my understanding is this can't happen at present without a full revolution of the disk). Then you could rewrite parity blocks extremely quickly without re-reading a full stripe... Anyway, it's a challenging problem, and basically the observation is that large disk arrays are going to have a moderate tail risk of failure whether you use raid10 or raid5 (raid6 giving a decent practical improvement in real reliability, but at a cost in write performance). Cheers Ed W
Re: [Dovecot] Hardware infrastructure for email system
On 23/06/2012 13:20, Wojciech Puchar wrote: it is already an enormous overshoot in hardware specs. And i do not really catch why you have 4 servers in parallel. And finally i cannot understand this dividing of servers just to merge them back using VMWare. because it is a big difference if you have anything in a single machine or split into virtual machines - you can move them at runtime to different hosts if you run out of resources ok - for me it is just likes. You have a higher chance of needing to move in the first place doing this :) Actually, I'm a huge buyer of virtualisation. There is *no other* way that people should be running their servers right now... (hand waving sweeping generalisation - obviously add context, etc, before taking literally). There are various types of virtualisation solution and they have pros and cons, but I think there is close to zero reason not to use some kind of virtualisation option for all new deployments. Probably he is using something clever like vmware esx - I like the theory there where you can literally fail over a running machine to new hardware, without even stopping it running, very neat. I personally use linux-vservers which are almost identical to running on a bare metal server (it's kind of a fancy form of chroot), this means I don't have commercial-grade failover, but it only takes 5-15 seconds to reboot each container, so that's an acceptable downtime for my requirements. Good luck! Ed W
Re: [Dovecot] Dovecot performance under high load (vs. Courier)
On 23/06/2012 09:22, Wojciech Puchar wrote: Nearly all of them are non-caching. (I don't know of any caching ones.) At least roundcube (v0.7.1 here) has some caching options: --[excerpt from roundcubes main.inc.php]- // Type of IMAP indexes cache. Supported values: 'db', 'apc' and 'memcache'. $rcmail_config['imap_cache'] = null; // Enables messages cache. Only 'db' cache is supported. $rcmail_config['messages_cache'] = false; -[end] But I don't know, whether this is the sort of caching you are referring to. what's a point of caching imap, except your webmail service is not locally connected (localhost or LAN) to imap server? Asking for items 600-615 from a threaded list, sorted by something, can be an expensive operation, especially if you just asked for items 585-600 a moment ago? Ed
Re: [Dovecot] Dovecot performance under high load (vs. Courier)
On 21/06/2012 21:54, Reindl Harald wrote: and last but not least i have fewer entries in the maillog which goes to a central mysql-server for self-developed web-interfaces I recently added imapproxy to my Roundcube installation. Benchmarks showed a very slight slowdown, but as you point out it reduced the login count from dovecot, and I use a login script to report last login / length of session, which now tallies better with an imap desktop user. I think the conclusion is that imapproxy is not necessary. There are some advantages (eg with high network latency between web and imap server, and reducing the apparent login count) and some disadvantages (extra complexity, slowdown). On average I think few users should use it... Or at least benchmark and add it reluctantly... Ed
Re: [Dovecot] Dovecot performance under high load (vs. Courier)
On 21/06/2012 21:37, René Neumann wrote: Am 21.06.2012 22:22, schrieb Timo Sirainen: On Thu, 2012-06-21 at 13:05 -0700, email builder wrote: Do you know what webmails are caching vs. non-caching? Nearly all of them are non-caching. (I don't know of any caching ones.) At least roundcube (v0.7.1 here) has some caching options: --[excerpt from roundcubes main.inc.php]- // Type of IMAP indexes cache. Supported values: 'db', 'apc' and 'memcache'. $rcmail_config['imap_cache'] = null; // Enables messages cache. Only 'db' cache is supported. $rcmail_config['messages_cache'] = false; -[end] But I don't know whether this is the sort of caching you are referring to. - René It is caching, but unless your mysql / memcache server is lower latency than your dovecot server, the caching does very little. I tested it very briefly and it added a lot of latency to my results when adding a mysql cache. However, my setup has the mysql/dovecot/roundcube all on the same machine, so latency is minimal. Roughly I found that the amount of caching is absolutely massive, eg roughly subject headers, message ids and more for every message in every folder. This meant multiple seconds of latency on first login and then slight additional latency on every folder view. I guess this might break even in the situation of a roundcube installation in an office and dovecot on the far end of an ADSL line with 60-100ms+ of latency and bandwidth constraints, but it's really, really hard to see that it's sensible for two machines in the same datacenter with an uncontended network connection between them. This isn't to say that the caching isn't sensible for use with other mail servers, but I don't see that it offers any benefit for most Dovecot installations? However, very clever and full-featured webmail client! Ed W P.S. Sogo has a kind of caching in that it has a client-side javascript cache. Not what was meant, but for all practical purposes much more useful...
Re: [Dovecot] Can we know when a user read our email?
On 04/06/2012 15:14, Reindl Harald wrote: Am 04.06.2012 15:36, schrieb Ed W: Then tell them their only option is to buy Exchange Server and Outlook for everyone - but explain that this 'feature' *still* will not work for recipients that are outside of your control (ie, it will only work for local recipients - and I *think* it is possible to set up Trusts with other external Exchange Servers, but not sure, and if it does, it requires the explicit cooperation of the other system's admin). Bottom line: do NOT promise the impossible to a client just to win the business. It is a losing proposition, as you are beginning to see... We run a small ISP selling mail accounts to customers. *our customers* want to voluntarily tell senders when they have downloaded an email via POP. and the sender for sure wants this too for every single message? i doubt not I'm not sure why this is so hard to believe. There is literally a class of customers that have a specification which says that there must be a notification sent back to the sender whenever they download their emails. I cannot currently bid for their business. A spec is a spec - either you can meet the spec or you can't bid for the business... Ed W
Re: [Dovecot] Can we know when a user read our email?
On 03/06/2012 14:46, Charles Marcus wrote: On 2012-06-03 4:43 AM, Ed W li...@wildgooses.com wrote: Look, I can argue against the idea easily, personally my objection is mail loops, but the point is that the customer demands it, and at present that prevents me bidding for certain types of business... Basically the customer just wants to repro what they got with Exchange. Then tell them their only option is to buy Exchange Server and Outlook for everyone - but explain that this 'feature' *still* will not work for recipients that are outside of your control (ie, it will only work for local recipients - and I *think* it is possible to set up Trusts with other external Exchange Servers, but not sure, and if it does, it requires the explicit cooperation of the other system's admin). Bottom line: do NOT promise the impossible to a client just to win the business. It is a losing proposition, as you are beginning to see... You have the situation backwards. I think you know about the MailASail business. We run a small ISP selling mail accounts to customers. *our customers* want to voluntarily tell senders when they have downloaded an email via POP. The basic requirement is that when the message is accessed via POP, the sender (presumably defined by the FROM address) is sent a notification. Please don't argue about the spam aspects, etc - we are all on the same page here. However, it's not an entirely foolish request - because the customer is on dialup, MDN implemented by the mail client isn't really feasible, and DSN doesn't help us realise that the remote user has at least connected and accessed the mail. So they are kind of asking for a limited server-side implementation of MDN. In fact this isn't that unreasonable, it's just problematic and unusual. Ed W
Re: [Dovecot] Can we know when a user read our email?
On 03/06/2012 18:26, Reindl Harald wrote: Am 03.06.2012 19:21, schrieb Michael Orlitzky: On 06/03/12 12:06, Robert Schetterer wrote: Am 03.06.2012 16:24, schrieb Michael Orlitzky: I for one think the plugin is a good idea. what the hell should the plugin do, and how? there is smtp dsn, nothing more makes sense. looking at the thread subject, you need to have a new internet standard called braindump over tcp; this doesn't exist on exchange either. mail is smtp, dovecot is no smtp server You could trigger on the 'seen' flag, and Dovecot is more than capable of generating messages, especially to mailboxes under its control (see: sieve) and now tell us how you connect YOUR sent message over SMTP to any seen flag of another user? I think we are talking at cross purposes about the design here. In my case I have a customer base on *dialup* who connect very infrequently. They kind of want MDN to work, however, at least my understanding is that this is typically implemented by first the MUA downloading all messages, then generating MDN responses which need to be sent out - however, in the case of dialup this may be very far after the fact. Therefore they request a kind of server-side MDN. So when the message is downloaded from the POP server, the POP server generates some form of MDN-alike response on their behalf. There are clearly limitations here, but equally the limitations are quite clearly explained - all we learn is that the message was downloaded, but in the case of very infrequent dialup users, this at least teaches us the earliest time that the user could have read the message. 
Many of these users are corporate and have defined processes, so they may require the user to actually read and action all the emails which have been downloaded, hence it might be inferred that usually the message will be read soon after we learn it's downloaded - I don't think the goal is to get 100% knowledge of read time though, just an estimate, and knowing that it did actually arrive at this remote user is helpful. To put some meat on this type of user, we are talking about a group of users who might be mid-ocean or perhaps hanging around the north/south pole or somewhere similarly remote. They would be using satellite dialup devices which have significant costs. So for example if we see the user dial in we learn: - They aren't dead... - With some confidence, the message has crossed the most uncertain part of the link and is now close enough to the user that we just need to hope they actually read it - This type of user is typically only receiving a small handful of messages. At 2.4Kbit you are struggling to receive emails; it's safe to assume that this type of user is not getting the kind of volumes that you or I get. This is a niche user, however, I think the basic feature is actually not entirely stupid. My competitors implement this feature quite crudely with just a generic message mailed out to the sender the first time the recipient (ie on our server) downloads the email. I don't see anyone trying to send MDN-compatible receipts, they literally just send a "Your message was downloaded by the recipient" message
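The generic "your message was downloaded" notice described above could be composed along these lines. This is only a sketch of the message such a POP-download hook would emit, not a real Dovecot plugin, and it is deliberately not RFC-compliant MDN; all addresses and the subject are made up. The `Auto-Submitted` header is included because auto-replies without it are a classic source of the mail loops mentioned elsewhere in this thread.

```python
from email.message import EmailMessage
from email.utils import formatdate, parseaddr

def build_download_notification(original_from: str, recipient: str,
                                subject: str, service_addr: str) -> EmailMessage:
    # A plain informational notice back to the sender, in the style of the
    # competitors' crude implementations - not an RFC 8098 MDN.
    _, sender_addr = parseaddr(original_from)
    msg = EmailMessage()
    msg["From"] = service_addr
    msg["To"] = sender_addr
    msg["Date"] = formatdate()
    msg["Subject"] = f"Delivery notice: your message {subject!r} was downloaded"
    msg["Auto-Submitted"] = "auto-replied"  # helps loop-detecting MTAs drop replies
    msg.set_content(
        f"{recipient} downloaded your message via POP.\n"
        "This means the mail has crossed the satellite link; it does not\n"
        "confirm that it has been read."
    )
    return msg

notice = build_download_notification(
    "Alice <alice@example.org>", "skipper@example.net",
    "Weather routing update", "noreply@example.net")
```

Only sending this once per message (tracked server-side) would avoid repeat notices when a client re-downloads its mailbox.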
Re: [Dovecot] Can we know when a user read our email?
On 03/06/2012 09:06, Linda Walsh wrote: Ed W wrote: Just to register interest, but at some point I will need to consider writing a plugin or similar to achieve exactly this. Situation is that several of our competitors offer such a feature, ie known pool of users on dialup or intermittently connected systems, provide an alert back to the sender when your email has been accessed/downloaded by the remote user. --- My dentist used a service that claimed to provide a read-notification. It was just an embedded web-bug in the email that I could choose to display or not ... if the client doesn't want to cooperate, you can't tell when the person read it. All you could do is tell when a client downloaded it from dovecot...which doesn't say much for clients that are left on 24/7... Please folks - don't argue with me - I'm the wrong person! The recipient who is receiving these emails, ie the person being bugged, is demanding that they are buggable. If they demand it and it's a requirement for providing them service then I have to give it to them if I want the business. The users are on satellite dialup and barely have enough bandwidth to download a few KB of emails; they certainly can't trigger web bugs to generate read receipts. Look, I can argue against the idea easily, personally my objection is mail loops, but the point is that the customer demands it, and at present that prevents me bidding for certain types of business... Basically the customer just wants to repro what they got with Exchange. Cheers for ideas though! Ed W
Re: [Dovecot] interesting stats pattern
On 29/05/2012 19:13, Timo Sirainen wrote: On 29.5.2012, at 21.03, Cor Bosman wrote: Yes, I am getting a list of sessions/users every 5 minutes through cron. I'm already using doveadm stats dump session/user connected Actually that's not really correct behavior either, since it ignores all the connections that happened during the 5 minutes if they don't exist at the time when you're asking for them. I'm not sure what the most correct way to do this kind of a graph would be :) I muttered about some ideas for enhanced login/logout tracking some months back. Perhaps this would be another example of a motivation to use it for something? Could either the login scripting or a plugin be used to build this type of login tracking? (My goal is to eventually do per-user "are you logged in" tracking) Just a thought Ed W
Re: [Dovecot] Strange Dovecot 2.0.20 auth chokes and cores
On 30/05/2012 17:25, Alan Brown wrote: Is there any problem with epoll on 3.2.x kernels? Yes - and it's been discussed here. Some bright spark rewrote the kernel epoll code to prevent DoS attacks caused by excessive forking. Do you have a link to the previous discussions? This is new to me - I can't find it immediately in the list? Cheers Ed W
Re: [Dovecot] Can we know when a user read our email?
On 14/05/2012 17:38, Timo Sirainen wrote: On Mon, 2012-05-14 at 08:56 -0700, Beto Moreno wrote: I have seen some email servers where, if I send an email to another person, I can see if that person has read our email, with an option to delete the email if the person hasn't read it. Does dovecot have a feature like this? This doesn't really work with IMAP/POP3 protocols. It requires Exchange or something else. What would be possible is to check if a user has _downloaded_ your message, but many clients download messages immediately when they arrive so it might not be very useful. And in any case Dovecot has no such feature. Just to register interest, but at some point I will need to consider writing a plugin or similar to achieve exactly this. Situation is that several of our competitors offer such a feature, ie known pool of users on dialup or intermittently connected systems, provide an alert back to the sender when your email has been accessed/downloaded by the remote user. Personally I don't think it's a great feature and my competitors' implementations often cause mail loops and other nasties. However, bottom line is that you can't win the bid if you can't offer the feature... Feels like a plugin rather than core functionality, but would be cool if someone wanted to produce something... Cheers Ed W
Re: [Dovecot] Released Pigeonhole v0.3.1 for Dovecot v2.1.6
On 25/05/2012 23:12, Stephan Bosch wrote: The biggest change is the addition of dict support for Sieve script retrieval. It is now possible to fetch Sieve scripts from an SQL database using the Dovecot dict facility. Read the INSTALL file and the referenced additional documentation for more information. Note that this feature currently is not usable with sieve_before/sieve_after and ManageSieve. This is very interesting! In fact on reflection, I would very much like SQL-decided before/after scripts and a disk-based main sieve script. Or phrased in terms of usage: I have groups of users where we have a predefined bunch of filtering that happens on their account. At the moment the users are grouped into top-level directories so that the home and hence default scripts can cascade down. However, it means it's not trivial to adjust the grouping of the users and requires on-disk placement to be meaningful. I would like to find a way such that when Postfix delivers a mail for a user X, it runs a bunch of predefined filtering scripts which are per-user, plus the user's normal scripts. All scripts would normally live on disk. Perhaps this is actually more easily done a different way? Thanks for any thoughts? Ed W
Re: [Dovecot] Released Pigeonhole v0.3.1 for Dovecot v2.1.6
On 27/05/2012 14:00, Daniel Parthey wrote: Hi Ed, Ed W wrote: I have groups of users where we have a predefined bunch of filtering that happens on their account. At the moment the users are grouped into top level directories so that the home and hence default scripts can cascade down. However, it means it's not trivial to adjust the grouping of the users and requires on disk placement to be meaningful. I would desire to find a way of when Postfix delivers a mail for a user X that this will run a bunch of predefined filtering scripts which are per-user, plus the users normal scripts. All scripts would normally live on disk Perhaps this is actually more easily done a different way? Would it be possible to do conditional includes in the global before script, something like this? if domain :matches foo.example.org { include foo.sieve } elsif domain :matches bar.example.org { include bar.sieve } That probably works for my current situation, which is mainly grouping by domain. But I wanted to allow specific users to opt out, and so ideally I want the include to be on a per-user basis, not just per-domain. Good idea though - thanks Ed W
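One way to get the per-user opt-out without hand-maintaining thousands of scripts is to generate a tiny per-user "before" script that just includes the right group script. The generator below is entirely hypothetical (the group map, paths, and opt-out set are illustration values); the `include :global` syntax it emits comes from the Sieve include extension (RFC 6609).

```python
import tempfile
from pathlib import Path

# Hypothetical mapping from mail domain to a :global group script name.
GROUP_SCRIPT = {"foo.example.org": "foo", "bar.example.org": "bar"}

def write_before_script(out_dir: Path, user: str, opted_out: set[str]) -> Path:
    # Emit a one-include sieve "before" script per user, so grouping no
    # longer depends on on-disk directory placement and individual users
    # can opt out of the group filtering.
    domain = user.split("@", 1)[1]
    path = out_dir / f"{user}.sieve"
    if user in opted_out or domain not in GROUP_SCRIPT:
        body = "# user opted out of group filtering\n"
    else:
        body = (
            'require ["include"];\n'
            f'include :global "{GROUP_SCRIPT[domain]}";\n'
        )
    path.write_text(body)
    return path

out = Path(tempfile.mkdtemp())
write_before_script(out, "alice@foo.example.org", opted_out=set())
write_before_script(out, "bob@foo.example.org", opted_out={"bob@foo.example.org"})
```

Regenerating these stubs from the database on change is cheap even for thousands of users, since each per-user file is only a couple of lines; the actual group logic stays in the shared :global scripts.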
Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?
On 14/04/2012 04:48, Stan Hoeppner wrote: On 4/13/2012 10:31 AM, Ed W wrote: You mean those answers like: you need to read 'those' articles again Referring to some unknown and hard to find previous emails is not the same as answering? No, referring to this: On 4/12/2012 5:58 AM, Ed W wrote: The claim by ZFS/BTRFS authors and others is that data silently bit rots on it's own. Is it not a correct assumption that you read this in articles? If you read this in books, scrolls, or chiseled tablets, my apologies for assuming it was articles. WHAT?!! The original context was that you wanted me to learn some very specific thing that you accused me of misunderstanding, and then it turns out that the thing I'm supposed to learn comes from re-reading every email, every blog post, every video, every slashdot post, every wiki, every ... that mentions ZFS's reason for including end to end checksumming?!! Please stop wasting our time and get specific You have taken my email which contained a specific question, been asked of you multiple times now and yet you insist on only answering irrelevant details with a pointed and personal dig on each answer. The rudeness is unnecessary, and your evasiveness of answers does not fill me with confidence that you actually know the answer... For the benefit of anyone reading this via email archives or whatever, I think the conclusion we have reached is that: modern systems are now a) a complex sum of pieces, any of which can cause an error to be injected, b) the level of error correction which was originally specified as being sufficient is now starting to be reached in real systems, possibly even consumer systems. There is no solution, however, the first step is to enhance detection. 
Various solutions have been proposed, all increase cost, computation or have some disadvantage - however, one of the more promising detection mechanisms is an end to end checksum, which will then have the effect of augmenting ALL the steps in the chain, not just one specific step. As of today, only a few filesystems offer this, roll on more adopting it Regards Ed W
Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?
On 14/04/2012 04:31, Stan Hoeppner wrote: On 4/13/2012 10:31 AM, Ed W wrote: On 13/04/2012 13:33, Stan Hoeppner wrote: In closing, I'll simply say this: If hardware, whether a mobo-down SATA chip, or a $100K SGI SAN RAID controller, allowed silent data corruption or transmission to occur, there would be no storage industry, and we'll all still be using pen and paper. The questions you're asking were solved by hardware and software engineers decades ago. You're fretting and asking about things that were solved decades ago. So why are so many people getting excited about it now? So many? I know of one person getting excited about it. You love being vague don't you? Go on, I'll bite again, do you mean yourself? :-) Data densities and overall storage sizes and complexity at the top end of the spectrum are increasing at a faster rate than the consistency/validation mechanisms. That's the entire point of the various academic studies on the issue. Again, you love being vague. By your dismissive academic studies phrase, do you mean studies done on a major industrial player, ie NetApp in this case? Or do you mean that it's rubbish because they asked someone with some background in statistics to do the work, rather than asking someone sitting nearby in the office to do it? I don't think the researcher broke into NetApp to do this research, so we have to conclude that the industrial partner was onboard. NetApp seem to do a bunch of engineering of their own (got enough patents..) that I think we can safely assume they very much do their own research on this and it's not just academic... I doubt they publish all their own internal research, be thankful you got to see some of the results this way... Note that the one study required a sample set of 1.5 million disk drives. If the phenomenon were a regular occurrence as you would have everyone here believe, they could have used a much smaller sample set. Sigh... 
You could criticise the study if it had a small number of drives as being under-representative and now you criticise a large study for having too many observations... You cannot have too many observations when measuring a small and unpredictable phenomenon... Where does it say that they could NOT have reproduced this study with just 10 drives? If you have 1.5 million available, why not use all the results?? Ed, this is an academic exercise. Academia leads industry. Almost always has. Academia blows the whistle and waves hands, prompting industry to take action. Sigh... We are back to the start of the email thread again... Gosh you seem to love arguing and muddying the water for zero reason but to have the last word? It's *trivial* to do a google search and hit *lots* of reports of corruptions in various parts of the system, from corrupting drivers, to hardware which writes incorrectly, to operating system flaws. I just found a bunch more in the Red Hat database today while looking for something else. You yourself are very vocal on avoiding certain brands of HD controller which have been rumoured to cause corrupted data... (and thank you for revealing that kind of thing - it's very helpful) Don't veer off at a tangent now: The *original* email this has spawned is about a VERY specific point. RAID1 appears to offer less protection against a class of error conditions than does RAID6. Nothing more, nothing less. Don't veer off and talk about the minutiae of testing studies at universities, this is a straightforward claim that you have been jumping around and avoiding answering with claims of needing to educate me on SCSI protocols and other fatuous responses. Nor deviate and discuss that RAID6 is inappropriate for many situations - we all get that... There is nothing normal users need to do to address this problem. ...except sit tight and hope they don't lose anything important!
:-) Having the prestigious degree that you do, you should already understand the relationship between academic research and industry, and the considerable lead times involved. I'm guessing you haven't attended higher education then? You are confusing graduate and post-graduate systems... Byee Ed W
Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?
On 13/04/2012 12:51, Timo Sirainen wrote: - Use the checksums to assist with replication speed/efficiency (dsync or custom imap commands) It would be of some use with dbox index rebuilding. I don't think it would help with dsync. .. - File RFCs for new imap features along the LEMONADE lines which allow clients to have faster recovery from corrupted offline states... Too much trouble, no one would implement it :) I presume you have seen that cyrus is working on various distributed options? Standardising this through imap might work if they also buy into it? - Storage backends where emails are redundantly stored and might not ALL be on a single server (find me the closest copy of email X) - derivations of this might be interesting for compliance archiving of messages? - Fancy key-value storage backends might use checksums as part of the key value (either for the whole or parts of the message) GUID would work for these as well, without the possibility of a hash collision. I was thinking that the win for key-value store as a backend is if you can reduce the storage requirements or do better placement of the data (mail text replicated widely, attachments stored on higher latency storage?). Hence whilst I don't see this being a win with current options, if it were done then it would almost certainly be per MIME part, eg storing all large attachments in one place and the rest of the message somewhere else, perhaps with different redundancy levels per type OK, this is all completely pie in the sky. Please don't build it! All I meant was that these are the kind of things that someone might one day desire to do and hence they would have competing requirements for what to checksum... Cheers Ed W
Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?
On 13/04/2012 13:21, Timo Sirainen wrote: On 13.4.2012, at 15.17, Ed W wrote: On 13/04/2012 12:51, Timo Sirainen wrote: - Use the checksums to assist with replication speed/efficiency (dsync or custom imap commands) It would be of some use with dbox index rebuilding. I don't think it would help with dsync. .. - File RFCs for new imap features along the lemonde lines which allow clients to have faster recovery from corrupted offline states... Too much trouble, no one would implement it :) I presume you have seen that cyrus is working on various distributed options? Standardising this through imap might work if they also buy into it? Probably more trouble than worth. I doubt anyone would want to run a cross-Dovecot/Cyrus cluster. No definitely not. Sorry I just meant that you are both working on similar things. Standardising the basics that each use might be useful in the future That can almost be done already .. the attachments are saved and accessed via a lib-fs API. It wouldn't be difficult to write a backend for some key-value databases. So with about one day's coding you could already have Dovecot save all message attachments to a key-value db, and you can configure redundancy in the db's configs. Hmm, super. Ed W
Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?
On 13/04/2012 06:29, Stan Hoeppner wrote: On 4/12/2012 5:58 AM, Ed W wrote: The claim by ZFS/BTRFS authors and others is that data silently bit rots on its own. The claim is therefore that you can have a raid1 pair where neither drive reports a hardware failure, but each gives you different data? You need to read those articles again very carefully. If you don't understand what they mean by 1 in 10^15 bits non-recoverable read error rate and combined probability, let me know. OK, I'll bite. I only have an honours degree in mathematics from a well known university, so grateful if you could dumb it down appropriately? Let's start with which articles you are referring to? I don't see any articles if I go literally up the chain from this email, but you might be talking about any one of the lots of other emails in this thread or even some other email thread? Wikipedia has its faults, but it dumbs the silent corruption claim down to: http://en.wikipedia.org/wiki/ZFS an undetected error for every 67TB And a CERN study apparently claims far higher than one in every 10^16 bits Now, I'm NOT professing any experience or axe to grind here. I'm simply asking by what feature do you believe either software or hardware RAID1 is capable of detecting which copy is correct when both halves of a RAID1 pair return different results and there is no hardware failure to clue us that one half suffered a read error? Please don't respond with a maths pissing competition, it's an innocent question about what levels of data checking are done on each piece of the hardware chain? My (probably flawed) understanding is that popular RAID 1 implementations don't add any additional sector checksums over and above what the drives/filesystem/etc already offer - is this the case? And this has zero bearing on RAID1. And RAID1 reads don't work the way you describe above. I explained this in some detail recently. Where? Been working that way for more than 2 decades Ed.
:) Note that RAID1 has that 1 for a reason. It was the first RAID level. What should I make of RAID0 then? Incidentally do you disagree with the history of RAID evolution on Wikipedia? http://en.wikipedia.org/wiki/RAID Regards Ed W
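The 1-in-10^15 figure being argued over above converts directly into an expected interval between unrecoverable read errors (UREs). A back-of-envelope sketch, assuming errors are uniform and independent (which real drives only approximate):

```python
# Back-of-envelope: how much data can you read before expecting one
# unrecoverable read error (URE), given a quoted bit error rate?
# Assumes errors are uniform and independent -- a simplification.

def bytes_per_expected_error(bit_error_rate: float) -> float:
    """Expected bytes read between unrecoverable read errors."""
    return 1.0 / bit_error_rate / 8.0

# Consumer drives are often quoted at 1 error per 10^14 bits,
# enterprise drives at 1 per 10^15 bits.
consumer_tb = bytes_per_expected_error(1e-14) / 1e12    # ~12.5 TB
enterprise_tb = bytes_per_expected_error(1e-15) / 1e12  # ~125 TB

print(f"1e-14 rate: one URE per ~{consumer_tb:.1f} TB read")
print(f"1e-15 rate: one URE per ~{enterprise_tb:.0f} TB read")
```

At the consumer 1e-14 rate, a full read of a ~12TB array has order-one odds of hitting a URE somewhere, which is the "combined probability" point the thread keeps circling.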
Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?
there exists hardware on sale with *known* defects. Despite that the industry continues without collapse. Now you claim that if corruption is silent and people only tend to notice it much later and under certain edge conditions that this can't be possible because it should cause the industry to collapse..??? ...Not buying your logic... Ed W
Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?
On 12/04/2012 11:20, Stan Hoeppner wrote: On 4/11/2012 9:23 PM, Emmanuel Noobadmin wrote: On 4/12/12, Stan Hoeppners...@hardwarefreak.com wrote: On 4/11/2012 11:50 AM, Ed W wrote: One of the snags of md RAID1 vs RAID6 is the lack of checksumming in the event of bad blocks. (I'm not sure what actually happens when md scrubbing finds a bad sector with raid1..?). For low performance requirements I have become paranoid and been using RAID6 vs RAID10, filesystems with sector checksums seem attractive... Except we're using hardware RAID1 here and mdraid linear. Thus the controller takes care of sector integrity. RAID6 yields nothing over RAID10, except lower performance, and more usable space if more than 4 drives are used. How would the controller ensure sector integrity unless it is writing additional checksum information to disk? I thought only a few filesystems like ZFS do the sector checksum to detect if any data corruption occurred. I suppose the controller could throw an error if the two drives returned data that didn't agree with each other but it wouldn't know which is the accurate copy but that wouldn't protect the integrity of the data, at least not directly without additional human intervention I would think. When a drive starts throwing uncorrectable read errors, the controller faults the drive and tells you to replace it. Good hardware RAID controllers are notorious for their penchant to kick drives that would continue to work just fine in mdraid or as a single drive for many more years. The mindset here is that anyone would rather spend $150-$2500 on a replacement drive than take a chance with his/her valuable data. I'm asking a subtly different question. The claim by ZFS/BTRFS authors and others is that data silently bit rots on its own. The claim is therefore that you can have a raid1 pair where neither drive reports a hardware failure, but each gives you different data?
I can't personally claim to have observed this, so it remains someone else's theory... (for background my experience is simply: RAID10 for high performance arrays and RAID6 for all my personal data - I intend to investigate your linear raid idea in the future though) I do agree that if one drive reports a read error, then it's quite easy to guess which pair of the array is wrong... Just as an aside, I don't have a lot of failure experience. However, the few I have had (perhaps 6-8 events now) is that there is a massive correlation in failure time with RAID1, eg one pair I had lasted perhaps 2 years and then both failed within 6 hours of each other. I also had a bad experience with RAID 5 that wasn't being scrubbed regularly and when one drive started reporting errors (ie lack of monitoring meant it had been bad for a while), the rest of the array turned out to be a patchwork of read errors - linux raid then turns out to be quite fragile in the presence of a small number of read failures and it's extremely difficult to salvage the 99% of the array which is ok due to the disks getting kicked out... (of course regular scrubs would have prevented getting so deep into that situation - it was a small cheap nas box without such features) Ed W
Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?
On 12/04/2012 02:18, Stan Hoeppner wrote: On 4/11/2012 11:50 AM, Ed W wrote: Re XFS. Have you been watching BTRFS recently? I will concede that despite the authors considering it production ready I won't be using it for my servers just yet. However, it's benchmarking on single disk benchmarks fairly similarly to XFS and in certain cases (multi-threaded performance) can be somewhat better. I haven't yet seen any benchmarks on larger disk arrays yet, eg 6+ disks, so no idea how it scales up. Basically what I have seen seems competitive Links? http://btrfs.ipv5.de/index.php?title=Main_Page#Benchmarking See the regular Phoronix benchmarks in particular. However, I believe these are all single disk? I don't have such hardware spare to benchmark, but I would be interested to hear from someone who benchmarks your RAID1+linear+XFS suggestion, especially if they have compared a cutting edge btrfs kernel on the same array? http://btrfs.boxacle.net/repository/raid/history/History_Mail_server_simulation._num_threads=128.html This is with an 8-wide LVM stripe over eight 17-drive hardware RAID0 arrays. If the disks had been set up as a concat of 68 RAID1 pairs, XFS would have turned in numbers significantly higher, anywhere from a 100% increase to 500%. My instinct is that this is an irrelevant benchmark for BTRFS because its performance characteristics for these workloads have changed so significantly? I would be far more interested in a 3.2 and then a 3.6/3.7 benchmark in a year's time In particular recent benchmarks on Phoronix show btrfs exceeding XFS performance on heavily threaded benchmarks - however, I doubt this is representative of performance on a multi-disk benchmark? It would be nice to see these folks update these results with a 3.2.6 kernel, as both BTRFS and XFS have improved significantly since 2.6.35. EXT4 and JFS have seen little performance work since.
My understanding is that there was a significant multi-thread performance boost for EXT4 in the last year kind of timeframe? I don't have a link to hand, but someone did some work to reduce lock contention (??) which I seem to recall made a very large difference on multi-user or multi-cpu workloads? I seem to recall that the summary was that it allowed Ext4 to scale up to a good fraction of XFS performance on medium sized systems? (I believe that XFS still continues to scale far better than anything else on large systems) Point is that I think it's a bit unfair to say that little has changed on Ext4? It still seems to be developing faster than a maintenance-only pace. However, well OT... The original question was: anyone tried very recent BTRFS on a multi-disk system. Seems like the answer is no. My proposal is that it may be worth watching in the future Cheers Ed W P.S. I have always been intrigued by the idea that a COW based filesystem could potentially implement much faster RAID parity, because it can avoid reading the whole stripe. The idea is that you treat unallocated space as zero, which means you can compute the incremental parity with only a read/write of the parity value (and with a COW filesystem you only ever update by rewriting to new zeroed space). I had in mind something like a fixed parity disk (RAID4?) and allowing the parity disk to be write-behind cached in RAM (ie exposed to risk of: power fails AND data disk fails at the same time). My code may not be following along for a while though...
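The incremental-parity idea in the P.S. works because XOR with zero is the identity: if a COW write always lands in previously-unallocated (all-zero) space, the new parity is just the old parity XORed with the new block, with no need to read the rest of the stripe. A toy sketch of that invariant (block sizes and helper names are mine, purely illustrative):

```python
# Toy sketch of the incremental-parity idea from the P.S. above.
# In a COW filesystem, writes land in previously-unallocated (all-zero)
# space, so the parity update for a new block is just parity ^= block --
# XOR-ing in data that replaces zeros leaves the rest of the stripe alone.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

BLOCK = 16  # tiny block size for the demo

# A fresh stripe: three zeroed data blocks plus a zeroed parity block.
stripe = [bytes(BLOCK) for _ in range(3)]
parity = bytes(BLOCK)

def cow_write(idx: int, data: bytes):
    """COW write: fill an unallocated block, updating only the parity."""
    global parity
    assert stripe[idx] == bytes(BLOCK), "COW: target must be unallocated"
    stripe[idx] = data
    parity = xor_blocks(parity, data)

cow_write(0, b"hello world.....")
cow_write(2, b"another block...")

# Invariant: parity still equals the XOR of all data blocks,
# even though we never read blocks 1 and 2 during the first write.
full = bytes(BLOCK)
for blk in stripe:
    full = xor_blocks(full, blk)
assert full == parity
```

The write-behind cache in the P.S. then just delays flushing `parity` to the parity disk, at the stated power-loss risk.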
Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?
On 12/04/2012 12:09, Timo Sirainen wrote: On 12.4.2012, at 13.58, Ed W wrote: The claim by ZFS/BTRFS authors and others is that data silently bit rots on its own. The claim is therefore that you can have a raid1 pair where neither drive reports a hardware failure, but each gives you different data? That's one reason why I planned on adding a checksum to each message in dbox. But I forgot to actually do that. I guess I could add it for new messages in some upcoming version. Then Dovecot could optionally verify the checksum before returning the message to the client, and if it detects corruption perhaps automatically read it from some alternative location (e.g. if dsync replication is enabled ask from another replica). And Dovecot index files really should have had some small (8/16/32-bit) checksums of stuff as well.. I have to say - I haven't actually seen this happen... Do any of your big mailstore contacts observe this, eg rackspace, etc? I think it's worth thinking about the failure cases before implementing something to be honest? Just sticking in a checksum possibly doesn't help anyone unless it's on the right stuff and in the right place? Off the top of my head: - Someone butchers the file on disk (disk error or someone edits it with vi) - Restore of some files goes subtly wrong, eg tool tries to be clever and fails, snapshot taken mid-write, etc? - Filesystem crash (sudden power loss), how to deal with partial writes? Things I might like to do *if* there were some suitable checksums available: - Use the checksum as some kind of guid either for the whole message, the message minus the headers, or individual mime sections - Use the checksums to assist with replication speed/efficiency (dsync or custom imap commands) - File RFCs for new imap features along the LEMONADE lines which allow clients to have faster recovery from corrupted offline states...
- Single instance storage (presumably already done, and of course this has some subtleties in the face of deliberate attack) - Possibly duplicate email suppression (but really this is an LDA problem...) - Storage backends where emails are redundantly stored and might not ALL be on a single server (find me the closest copy of email X) - derivations of this might be interesting for compliance archiving of messages? - Fancy key-value storage backends might use checksums as part of the key value (either for the whole or parts of the message) The mail server has always looked like a kind of key-value store to my eye. However, traditional key-value isn't usually optimised for streaming reads, hence dovecot seems like a key value store, optimised for sequential high speed streaming access to the key values... Whilst it seems increasingly unlikely that a traditional key-value store will work well to replace say mdbox, I wonder if it's not worth looking at the replication strategies of key-value stores to see if those ideas couldn't lead to new features for mdbox? Cheers Ed W
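Using a message (or MIME-part) checksum as the storage key is the classic content-addressed trick behind the single-instance storage idea above: identical attachments hash to the same key and are stored once. A minimal sketch of the concept (this is an illustration, not how dbox actually lays out mail):

```python
# Minimal content-addressed store: the key is the SHA-256 of the
# content, so storing the same attachment twice costs nothing extra.
# Illustration of the single-instance-storage idea, not Dovecot's layout.
import hashlib

class ContentStore:
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blobs.setdefault(key, data)  # dedup: a second put is a no-op
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]

store = ContentStore()
k1 = store.put(b"big attachment bytes")
k2 = store.put(b"big attachment bytes")  # same content, same key
assert k1 == k2 and len(store._blobs) == 1
```

The "subtleties in the face of deliberate attack" mentioned above are hash collisions, which is why a cryptographic hash (rather than a CRC) is needed for this to be safe as a key.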
Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?
Re XFS. Have you been watching BTRFS recently? I will concede that despite the authors considering it production ready I won't be using it for my servers just yet. However, it's benchmarking on single disk benchmarks fairly similarly to XFS and in certain cases (multi-threaded performance) can be somewhat better. I haven't yet seen any benchmarks on larger disk arrays yet, eg 6+ disks, so no idea how it scales up. Basically what I have seen seems competitive I don't have such hardware spare to benchmark, but I would be interested to hear from someone who benchmarks your RAID1+linear+XFS suggestion, especially if they have compared a cutting edge btrfs kernel on the same array? One of the snags of md RAID1 vs RAID6 is the lack of checksumming in the event of bad blocks. (I'm not sure what actually happens when md scrubbing finds a bad sector with raid1..?). For low performance requirements I have become paranoid and been using RAID6 vs RAID10, filesystems with sector checksums seem attractive... Regards Ed W
Re: [Dovecot] Authentication mechanism and Password scheme
On 10/04/2012 08:11, Timo Sirainen wrote: On 10.4.2012, at 5.37, Костырев Александр Алексеевич wrote: Good day! I'm just trying to figure out that my understanding of subject is correct. So, if I want to store passwords in my database encrypted with SSHA512 scheme, my only choice for Authentication mechanism is plaintext? Yeah, that's correct. Does dovecot 2.0 also support SCRAM-SHA? I only mention because it's come up on my radar recently and as I understand it, it solves the issue of either having - plain text db of passwords, encrypted login - encrypted db of passwords, plaintext login With SCRAM you have both sides encrypted. (Clearly it's also desirable that the hash algorithm is well chosen to be resistant to bruteforce, so some might argue that bcrypt/scrypt is even more desirable since there is not yet a GPU implementation - However, at least SHA is a decent stab at things) Can you confirm my understanding is correct? Next question is whether any current mail client supports SCRAM..? Regards Ed W
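For reference, the SSHA512 scheme being discussed is, as I understand Dovecot's salted schemes, stored as `{SSHA512}` plus base64 of the SHA-512 digest of password+salt with the salt appended. A sketch of generate/verify (illustrative only; in practice use `doveadm pw -s SSHA512`):

```python
# Sketch of the SSHA512 scheme mentioned above, as I understand
# Dovecot's salted schemes: stored value is
#   {SSHA512} + base64(sha512(password + salt) + salt)
# Illustration only -- generate real hashes with `doveadm pw`.
import base64
import hashlib
import os

def ssha512_hash(password, salt=None):
    salt = salt if salt is not None else os.urandom(8)
    digest = hashlib.sha512(password.encode() + salt).digest()
    return "{SSHA512}" + base64.b64encode(digest + salt).decode()

def ssha512_verify(password, stored):
    raw = base64.b64decode(stored[len("{SSHA512}"):])
    digest, salt = raw[:64], raw[64:]  # SHA-512 digests are 64 bytes
    return hashlib.sha512(password.encode() + salt).digest() == digest

h = ssha512_hash("secret")
print(ssha512_verify("secret", h), ssha512_verify("wrong", h))
```

Note how this illustrates the trade-off in the mail: because only the salted hash is stored, the client must present the plaintext (hopefully over TLS) for the server to recompute, which is exactly the situation SCRAM is designed to avoid.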
Re: [Dovecot] dsync is SLOW compared to rsync
On 24/03/2012 13:21, Maarten Bezemer wrote: On Fri, 23 Mar 2012, Jeff Gustafson wrote: That didn't seem to make much of a difference. On a 3.1GB backup it shaved off 5 seconds. dsync's time was over 6 minutes with or without the mail_fsync=never. rsync copied the same 3.1GB mailbox in 15 seconds. It seems to me that dsync *should* be able to be just as fast, but it currently is spending way too much time doing something. What is it? Syncing 3.1GB in 15 seconds would require a speed of more than 200MB per second. Depending on the harddisks used, that would be quite a challenge. rsync is only going to transfer files it believes have changed, so the transfer bandwidth will likely be lower If you use rsync to only transfer the files that changed (based on file modification time) you may or may not miss files that have changed but still have the same time stamp. I assume you didn't use the --checksum parameter to rsync, right? Dovecot is not very resilient to files changing under it, but without the filename changing. I have no idea if it's supposed to work at all, but you might at least expect to see problems if you start doing this? dsync does so much more than simply copy some files... Quite probably, but I don't think your exposition above illustrates this? Regards Ed W
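The timestamp caveat above is easy to demonstrate: rsync's default "quick check" compares only size and modification time, so a file rewritten in place with the same size and mtime is silently skipped unless --checksum is given. A small simulation of the two checks (pure Python, not rsync itself):

```python
# Demonstrates why rsync's default quick check (size + mtime) can miss
# a file whose content changed but whose size and timestamp did not --
# the case the --checksum option exists to catch.
import hashlib
import os
import tempfile

def quick_check_equal(a: str, b: str) -> bool:
    sa, sb = os.stat(a), os.stat(b)
    return sa.st_size == sb.st_size and int(sa.st_mtime) == int(sb.st_mtime)

def checksum_equal(a: str, b: str) -> bool:
    digest = lambda p: hashlib.sha256(open(p, "rb").read()).digest()
    return digest(a) == digest(b)

d = tempfile.mkdtemp()
src, dst = os.path.join(d, "src"), os.path.join(d, "dst")
open(src, "wb").write(b"AAAA")
open(dst, "wb").write(b"BBBB")                # same size, different content
os.utime(src, (0, 0)); os.utime(dst, (0, 0))  # force identical mtimes

assert quick_check_equal(src, dst)     # rsync's default would skip this file
assert not checksum_equal(src, dst)    # --checksum would catch it
```

This also hints at why the rsync-vs-dsync timing comparison is apples-to-oranges: the 15-second rsync run only transferred what the quick check flagged.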
Re: [Dovecot] delivering with maildrop
On 23/03/2012 14:02, Stan Hoeppner wrote: On 3/23/2012 6:41 AM, Radim Kolar wrote: Can somebody provide maildrop syntax for using deliver-lda as final delivery program during sorting mail in user mailfilter? i mean replacement for to statement if ( /^(To|Cc):.*dovecot@dovecot.org/:h ) { to $MAIL/.dovecot/ } Dovecot's local delivery agent uses the Sieve language: http://wiki.dovecot.org/LDA/Sieve The syntax is quite different from maildrop or procmail. I think that's why he asked the question? I presume he wants to filter first with maildrop, then actually deliver using the dovecot delivery agent? In answer to the OP: read the maildropex man pages, but you have several options, eg: to | someprogram or: xfilter someprogram `someprogram` However, almost certainly I think you want the top option? Good luck Ed W
Re: [Dovecot] [Solved] Another hint from the clue box 8-) imapc/imap proxy user mailbox server location
On 14/03/2012 10:58, Charles Marcus wrote: On 2012-03-13 6:29 PM, Terry Carmen te...@cnysupport.com wrote: I'm going to hope everything is OK for a while, since my goal is to retire all the old Exchange servers and move all the users to dovecot/maildir within the next couple of months. However it's always nice to know there are options. 8-) I'm currently looking at rolling out SOGo as part of a major reworking of their current infrastructure (will also include converting their old Courier-IMAP to dovecot 2.1.x among other things)... SOGo, as far as I can tell, is the best truly free and open source 'exchange clone' available that works extremely well with Thunderbird+Lightning (which is what my Client uses currently, but they are very dissatisfied with using Google Calendar for Shared calendars), Outlook and Apple Apps, as well as Android, Blackberry and Apple mobile devices - and their upcoming v2 (in beta now) will not only provide native Outlook support (no plugin needed), it will also (optionally) provide a Samba4 Active Directory server in my main Client's office - all with absolutely no licenses required. Commercial support is available from Inverse, the company created by the developers to provide said support services. I also learned something very interesting yesterday concerning SOGo and dovecot during a sales call with a SOGo rep, but I'll wait and see if Timo cares to chime in on this one... ;) If the answer is that he will write a Z-Push/Activesync module for SOGo then I'm all ears! I have been watching SOGo for some time and the main thing I would miss is that every phone I have ever owned has largely limited/broken Funambol based sync and annoyingly working Activesync capability (I own a stream of Nokias...). It seems that although I don't like it, I need activesync support if I want my contacts/calendar on my phone... (I think I can do caldav on some of them, but not cardav on my N9) Apart from that it's a very neat system! Ed W
Re: [Dovecot] Just in time AV scanning
On 15/03/2012 10:33, Timo Sirainen wrote: On Wed, 2012-03-14 at 16:51 -0700, Kelsey Cummings wrote: I'm curious if anyone has any plugins for AV integration directly into dovecot. Our old pop servers have been scanning messages as they're moved from new/ to cur/ in the inbox and, at least where users aren't POPing every few seconds, there is occasionally enough time between scanning through the MXs to message retrieval to snag a few more viruses with updated definitions before they reach customers. Anyone doing anything similar? http://dovecot.org/patches/2.1/mail-filter.tar.gz allows you to run a script that modifies a mail while it's being read. You could make it run a virus check, and if it finds a virus you could change the virus MIME part to be full of spaces (better not to change message size, line count or MIME structure). Couple of other ideas: 1) Could use one of the (buggy and variously unsupported) on-access virus scanners. I think Dazuko is now abandoned, but this is a new one mentioned via the Clamav site: http://www.fsl.cs.sunysb.edu/docs/avfs-security04/index.html 2) Extremely racy, but if you were on maildir you could use some kind of pre-login scripting to kick off a scan on login. Touch some lock file so that you can tell when last scanned and only scan if the definitions have been updated since you last scanned? 3) There are some POP proxies which offer inline virus scanning. Could place one in front of your mail server. Presumably this will expose you to all the bugs in that proxy... Good luck Ed W
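The space-filling trick Timo suggests (neutralise the infected MIME part while keeping message size, line count and structure unchanged) can be sketched like this; the byte offsets of the part would come from a real MIME parser, and here are hypothetical:

```python
# Sketch of the suggestion above: overwrite an infected MIME part's
# body with spaces, preserving message size, line count and MIME
# structure. Offsets would come from a real MIME parser; the sample
# message and boundary are hypothetical.

def blank_mime_part(message: bytes, start: int, end: int) -> bytes:
    """Replace message[start:end] with spaces, keeping line breaks."""
    body = message[start:end]
    # Keep CR/LF bytes so the line count is unchanged too.
    blanked = bytes(c if c in (0x0D, 0x0A) else 0x20 for c in body)
    return message[:start] + blanked + message[end:]

msg = (b"--boundary\r\nContent-Type: application/x-evil\r\n\r\n"
       b"MZ\x90bad\r\npayload\r\n--boundary--\r\n")
start = msg.index(b"\r\n\r\n") + 4          # first byte of the part body
end = msg.index(b"\r\n--boundary--")        # byte after the part body
clean = blank_mime_part(msg, start, end)

assert len(clean) == len(msg)                       # size unchanged
assert clean.count(b"\r\n") == msg.count(b"\r\n")   # line count unchanged
```

Keeping the size and line count stable matters because IMAP/POP clients may have already cached the message's RFC822.SIZE and structure.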
Re: [Dovecot] [Solved] Another hint from the clue box 8-) imapc/imap proxy user mailbox server location
On 16/03/2012 15:45, Charles Marcus wrote: On 2012-03-16 11:22 AM, Ed W li...@wildgooses.com wrote: If the answer is that he will write a Z-Push/Activesync module for SOGo then I'm all ears! I have been watching SOGo for some time and the main thing I would miss is that every phone I have ever owned has largely limited/broken Funambol based sync and annoyingly working Activesync capability (I own a stream of Nokias...). It seems that although I don't like it, I need activesync support if I want my contacts/calendar on my phone... (I think I can do caldav on some of them, but not carddav on my N9) While I agree it would be nice, why not just switch to a supported phone and be done with it? ;) When we roll out SOGo, we'll only be supporting the officially supported mobile clients (android, iphone/ipad, blackberry and windows mobile)... That implies you will be using carddav/caldav on those phones? I thought Android support was quite weak for those? I definitely don't like the idea of supporting activesync, but it seems like the only widely supported solution to pushing calendar and contacts updates to clients? Caldav gets you part of the way there, but carddav seems badly supported and there is no push support with either... Out of curiosity, what kind of performance are you getting out of the web interface and any tricks you used to improve perceived performance? My quick testing gave something circa 150-200ms response times from SOGo (forget exactly now) and as a result it was perceivable and just very slightly laggy (versus a desktop mail program!!). I get slightly better perceived performance from Roundcube (which also seems more amenable to building extension plugins) Seems a bit of a surprise that a compiled language delivers results slightly less quickly than PHP... Did you find any magic knobs to twist to get performance up there with gmail? Cheers Ed W
Re: [Dovecot] Lock down Shared Mail Accounts?
I want to give multiple people shared access to some actual accounts with all of the special use folders, with the following requirements: I have done this (unsatisfactorily) by making it a normal mail account with normal login credentials. Add it like any other mail account. It then satisfies all your requirements, although: behind NAT, on thunderbird and with condstore, I sometimes see read/unread get out of sync... Believed to be a thunderbird bug, but unsure. Easy to resync 5. No one other than a designated user or users (Master User(s)? Users in a specified Group?) can delete any messages in this account, in any of the folders. Have them delivered with only read permissions on the physical files? (Bet that doesn't work very well in practice on anything other than maildir...) Interested to hear proper answers... Ed W
Re: [Dovecot] Post-login scripting - Trash cleanup
On 28/02/2012 18:11, l...@airstreamcomm.net wrote: We are considering using the post-login scripting to clear trash older than 90 days from user accounts. Has anyone done this, and if so did this cause logins to slow down too much waiting for the trash to purge? One idea was to execute the trash purge script once a day by tracking their logins and seeing that it has already run that day. Another idea was to call the trash purge script in the background and continue without acknowledging that it has finished to keep logins speedy. I think you can also use doveadm to achieve this? So you could schedule something for all accounts at some out-of-hours period - should speed up backups also? Ed W
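The doveadm route can run from a nightly cron job instead of a post-login hook. A sketch of how I'd expect the invocation to look (the `savedbefore` search key follows Dovecot 2.x's doveadm query syntax; confirm with `doveadm help expunge` on your version):

```python
# Sketch: purge Trash messages older than 90 days for all users via
# doveadm, run from cron rather than a post-login script. Query syntax
# is Dovecot 2.x style -- check `doveadm help expunge` locally.
import subprocess

def trash_purge_cmd(mailbox="Trash", age="90d"):
    # -A = all users; savedbefore selects messages saved before the cutoff.
    return ["doveadm", "expunge", "-A", "mailbox", mailbox, "savedbefore", age]

cmd = trash_purge_cmd()
print(" ".join(cmd))
# From the actual cron job you would execute it:
#   subprocess.run(cmd, check=True)
```

Running it out of hours also addresses the original worry, since no login ever waits on the purge.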
Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend
On 27/02/2012 08:34, Timo Sirainen wrote: On Thu, 2012-02-23 at 01:41 +0200, Timo Sirainen wrote: What do you need the statistics for? I could make imap_client and pop3_client support some virtual methods, like user.destroy() initially, which would be enough for your use. I guess I could add that for v2.2. http://dovecot.org/patches/2.2/imap-logout-plugin.c Thanks - can I assume that a pop-logout would be basically the same? Also, how might I access the bytes in/out statistics from that context? Thanks Ed W
Re: [Dovecot] remove messages once downloaded
On 25/02/2012 00:39, Timo Sirainen wrote: On 24.2.2012, at 19.44, julio...@fisica.uh.cu wrote: I need some help with the dovecot configuration. I want to remove downloaded messages from Mail Server once the messages have been successfully downloaded by pop3-clients, even when the clients have been configured to save copy of messages in the Server. Not possible. If you were thinking about longer term TODOs then I have a similar problem (just adding a me too...) In my industry, competing solutions offer a kind of server side been downloaded notification when customers have actually downloaded (ie read via POP) the message. The customers are all on the far side of expensive satellite links, so this serves as an inexpensive proxy for message read notifications. Is it feasible to implement both of these solutions using the current plugin architecture? I think our competition implement such features because they are Exchange based and I believe you can write server side hooks in various scripting languages quite easily (I personally don't like the idea, but someone obviously did it once and it rolled from there...) - this obviously harking to the is it feasible to imagine some higher level hook solution for simpler plugin creation suggestion from a few days ago? All these do something when it's accessed or do something when it's deleted problems all feel kind of related to me (ie we need some hook which runs on a per message basis). Perhaps someone smarter than me can think of a better way to unify them? Cheers Ed W
Re: [Dovecot] remove messages once downloaded
On 26/02/2012 12:31, Timo Sirainen wrote: On 26.2.2012, at 13.52, Ed W wrote: On 25/02/2012 00:39, Timo Sirainen wrote: On 24.2.2012, at 19.44, julio...@fisica.uh.cu wrote: I need some help with the dovecot configuration. I want to remove downloaded messages from Mail Server once the messages have been successfully downloaded by pop3-clients, even when the clients have been configured to save copy of messages in the Server. Not possible. If you were thinking about longer term TODOs then I have a similar problem (just adding a me too...) In my industry, competing solutions offer a kind of server side been downloaded notification when customers have actually downloaded (ie read via POP) the message. The customers are all on the far side of expensive satellite links, so this serves as an inexpensive proxy for message read notifications. What does the notification do? Sends another email... (you know like the annoying message read indicators that lots of mail readers support)... (Several of our competitors have implemented these solutions very badly and we get mail loops and other nasties...) I think our competition implement such features because they are Exchange based and I believe you can write server side hooks in various scripting languages quite easily (I personally don't like the idea, but someone obviously did it once and it rolled from there...) - this obviously harking to the is it feasible to imagine some higher level hook solution for simpler plugin creation suggestion from a few days ago? All these do something when it's accessed or do something when it's deleted problems all feel kind of related to me (ie we need some hook which runs on a per message basis). Perhaps someone smarter than me can think of a better way to unify them? Dovecot has a notify plugin that makes things like this pretty easy to implement, but it still needs C coding. 
Thanks - it's off my radar for a while due to other pressures, but the hint is appreciated and I will look into it in the future - many thanks! Ed W
Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend
On 22/02/2012 23:56, Ed W wrote: I think it has potential though. I think a lot of the current plugins on the website could easily be rewritten, likely without performance concerns, using a scripting based plugin system. I could see that some other big picture pieces could potentially benefit also One interesting test case for such a scripting hooks solution might be login restrictions. There seem to be regular requests for the ability to setup arbitrarily complicated restrictions on users per IP, attempts per second, etc (and my logging interest is kind of related also). Not trying to bump the item up any todo lists, just trying to chuck in some concrete ideas for actually testing a specific implementation... I guess a substantially more performance orientated area that seems to get some interest would be various spam, expunge, delete ideas and the hooks needed for those. These seem much more tricky to implement a scripting hook and still stay performant. Again just ideas for real things people might want to do? Cheers Ed W
Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend
On 22/02/2012 08:25, Jan-Frode Myklebust wrote: On Tue, Feb 21, 2012 at 02:33:24PM +, Ed W wrote: I think the original question was still sensible. In your case it seems like the ping times are identical between: webmail -> imap-proxy and webmail -> imap server. I think your results show that a proxy has little (or negative) benefit in this situation, but it seems feasible that a proxy could eliminate several RTT trips in the event that the proxy is closer than the imap server? This might happen if say the imap server is in a different datacenter (webmail on an office server machine?) The webmail/imapproxy were actually running in a different datacenter to the dovecot director/backend servers, but only about 20KM away. Ping tests: webmail->director: rtt min/avg/max/mdev = 0.933/1.061/2.034/0.183 ms director->backend: rtt min/avg/max/mdev = 0.104/0.108/0.127/0.005 ms webmail->localhost: rtt min/avg/max/mdev = 0.020/0.062/1.866/0.257 ms -jf Hmm, not sure I understand the original numbers then? It seems intuitive that the proxy installed locally could save you 2x the RTT increment, which is about 0.8ms in your case. So I might expect the proxy to reduce rendering times by around 1.6ms simply because it reduces the number of round trips to login? Kind of curious why that's not achieved..? Cheers Ed W
Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend
On 21/02/2012 20:36, Timo Sirainen wrote: On 21.2.2012, at 16.33, Ed W wrote: I'm also pleased to see that there is little negative cost in using a proxy... I recently added imap-proxy to our webmail setup because I wanted to log last login + logout times. I haven't quite figured out how to best log logout time (Timo, any chance of a post logout script? Or perhaps it's possible with the current login scripting?). You could of course grep the logs, but other than that you'd need to write a Dovecot plugin. Luckily it's really simple to write a plugin. Basically: void postlogout_init(struct module *module) { } void postlogout_deinit(void) { system("/usr/local/bin/dovecot-postlogout.sh"); } Add a few missing #includes and compile and enable for imap/pop3 and that should be it. Thanks - that's really obvious and quite interesting. I guess a simple log plugin makes sense. Quick followup question - the logout log file currently logs a bunch of statistics such as mails read/deleted, bytes sent/received. How might I access these from the _deinit context as above? Apologies if this is a RTFM question? Finally, do you see it feasible to offer a scriptable plugin interface, eg perhaps using some high performance scripting language such as lua? Such a plugin might itself be simply a standard plugin..? The motivation being to offer the ability to create plugins to those who are nervous of using a compiler, and of course to reduce the ability of a badly written plugin to kill dovecot? Cheers Ed W
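For readers wanting to try Timo's suggestion, here is a slightly fleshed-out sketch. The opaque `struct module` and the lack of real Dovecot headers are assumptions — the actual plugin API varies between Dovecot versions — and the script path is just the example path from the thread:

```c
/* Fleshed-out sketch of the post-logout plugin Timo outlines above.
 * "struct module" is an opaque stand-in: the real Dovecot plugin API
 * (headers, registration details) differs between versions. */
#include <stdlib.h>

struct module;  /* opaque stand-in for Dovecot's module struct */

void postlogout_init(struct module *module)
{
    (void)module;  /* nothing to set up when the session starts */
}

void postlogout_deinit(void)
{
    /* imap/pop3 processes are per-connection, so plugin teardown at
     * process exit approximates "user logged out" */
    if (system("/usr/local/bin/dovecot-postlogout.sh") != 0) {
        /* script missing or failed; nothing sensible to do at exit */
    }
}
```

As Timo says, this would be compiled as a shared object, dropped into Dovecot's plugin directory, and enabled for imap/pop3.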
Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend
On 22/02/2012 19:49, Timo Sirainen wrote: On 22.2.2012, at 11.38, Ed W wrote: void postlogout_init(struct module *module) { } void postlogout_deinit(void) { system("/usr/local/bin/dovecot-postlogout.sh"); } Add a few missing #includes and compile and enable for imap/pop3 and that should be it. Thanks - that's really obvious and quite interesting. I guess a simple log plugin makes sense. Quick followup question - the logout log file currently logs a bunch of statistics such as mails read/deleted, bytes sent/received. How might I access these from the _deinit context as above? Apologies if this is a RTFM question? You'd have to build separate plugins for POP3 and IMAP, and even then it becomes tricky since there's no simple hook for catching when client gets destroyed. Do you think you could keep something similar on your low priority backlog? Clearly parsing log files or hacking the code is possible, but I think the interest in the login scripting shows there is general interest, and having a full log of logon/logoff/bytes is clearly interesting to more than a minority of users? Finally, do you see it feasible to offer a scriptable plugin interface, eg perhaps using some high performance scripting language such as lua? Such a plugin might itself be simply a standard plugin..? The motivation being to offer the ability to create plugins to those who are nervous of using a compiler, and of course to reduce the ability of a badly written plugin to kill dovecot? I've been thinking about adding a scripting language plugin to Dovecot. Perhaps even using one of the existing generators that are supposed to make this easy for multiple languages, such as SWIG. But this is pretty low priority currently.. I think SWIG is for wrapping dovecot's api into the scripting language? (ie you could call dovecot methods from say perl/python/etc). What I had in mind was the reverse, ie embed LUA inside dovecot. Whenever dovecot normally calls a plugin method it will also run any [lua] scripts.
I'm sure you know how to use google, but just so we are on the same page, top hit (below) from google shows how straight forward this is (lua has been built to be extremely fast and easy to embed, ie it's not an arbitrary choice) http://heavycoder.com/tutorials/lua_embed.php Cheers Ed W
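The "whenever dovecot calls a plugin method it also runs any scripts" idea can be modelled as a plain C hook registry; in a real embedding the callback body would hand the event to a Lua (or Python) interpreter instead of a C function. All names and the fixed-size table here are illustrative assumptions, not Dovecot's hook API:

```c
/* Sketch of a hook registry: core code fires named events at each plugin
 * point, and every registered "script" receives them. A real embedding
 * would replace the C callback with an interpreter invocation. */
#include <stddef.h>

#define MAX_HOOKS 8

typedef void (*hook_fn)(const char *event);

static hook_fn hooks[MAX_HOOKS];
static size_t n_hooks;

/* Register a "script" to be run at every plugin point. */
static int hook_register(hook_fn fn)
{
    if (n_hooks >= MAX_HOOKS)
        return -1;
    hooks[n_hooks++] = fn;
    return 0;
}

/* Core code calls this at each plugin point with an event name. */
static void hook_fire(const char *event)
{
    for (size_t i = 0; i < n_hooks; i++)
        hooks[i](event);
}

/* Example "script": just counts events (a real one would run a
 * user-supplied lua file instead). */
static int fired_count;
static void count_hook(const char *event)
{
    (void)event;
    fired_count++;
}
```

The appeal over compiled plugins is exactly what Ed describes: a broken script fails inside the interpreter rather than crashing the dovecot process.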
Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend
On 22/02/2012 23:41, Timo Sirainen wrote: I've heard LUA being a commonly used embedded language, but I'd prefer to instead support several very widely used languages, such as Perl/Python. I'm a perl/ruby fan myself, but I would still recommend a good look at lua (or python) simply because they seem to be performant, easy to use, and on the surface seem to have had some thought about making them embeddable. My new favourite editor Sublime Text 2 has python as its scripting language. Lua has been used for some big name games, amongst other things. Perl has some memory management issues if you leave it long running; also, writing XS code looks ok on the surface, but is an exercise in hair pulling in practice. Ruby is a beautiful language, but I'm unsure how easy it is to embed, and speed + memory management is an unknown (for high performance applications). I think it has potential though. I think a lot of the current plugins on the website could easily be rewritten, likely without performance concerns, using a scripting based plugin system. I could see that some other big picture pieces could potentially benefit also. Thanks for considering it Ed W
Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend
On 13/02/2012 19:43, Jan-Frode Myklebust wrote: On Mon, Feb 13, 2012 at 11:08:48AM -0800, Mark Moseley wrote: Out of curiosity, are you running dovecot locally on those webmail servers as well, or is it talking to remote dovecot servers? The webmail servers are talking with dovecot director servers which in turn are talking with the backend dovecot servers. Each service running on different servers: webmail-servers -> director-servers -> backend-servers. I think the original question was still sensible. In your case it seems like the ping times are identical between: webmail -> imap-proxy and webmail -> imap server. I think your results show that a proxy has little (or negative) benefit in this situation, but it seems feasible that a proxy could eliminate several RTT trips in the event that the proxy is closer than the imap server? This might happen if say the imap server is in a different datacenter (webmail on an office server machine?) I'm also pleased to see that there is little negative cost in using a proxy... I recently added imap-proxy to our webmail setup because I wanted to log last login + logout times. I haven't quite figured out how to best log logout time (Timo, any chance of a post logout script? Or perhaps it's possible with the current login scripting?). However, using imap-proxy has the benefit of clustering logins a little and this makes log files a little easier to understand in the face of users with desktop mail clients plus webmail users. Possibly this idea is useful to someone else... Thanks for measuring this! Ed W
Re: [Dovecot] IMAP to Maildir Migration preserving UIDs?
Hi Sounds very cool. I already have dovecot set up as a proxy, working, and it should allow me to forcefully disconnect users and lock them out while they are being migrated, and then once they are done they'll be served locally rather than proxied. My main problem is that most connections are simply coming directly to the old server, using the deprecated hostname. I need all clients to use the right hostnames, or clog up this new server with redirectors and proxies for all the junk done on the old server.. bummer. Why not move the old server's IP over to the new machine, then give the old machine some new temp IP so that you can proxy back to it? That way you can do the proxying on the dovecot machine, which as you already established is working ok? Good luck Ed W
Re: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
On 26/01/2012 14:37, Mark Zealey wrote: I've tried reproducing by having long running auth queries in the sql and KILLing them on the server, restarting the mysql service, and setting max auth workers to 1 and running 2 sessions at the same time (with long-running auth queries), but to no effect. There must be something else going on here; I saw it in particular when exim on our frontend servers had queued a large number of messages and suddenly released them all at once hence the auth-worker hypothesis although the log messages do not support this. I'll try to see if I can trigger this manually although we have been doing some massively parallel testing previously and not seen this. Could it be a *timeout* rather than lack of worker processes? Theory would be that disk starvation causes other processes to take a long time to respond, hence the worker is *alive*, but doesn't return a response quickly enough, which in turn causes the unknown user message? You could try a different disk io scheduler, or ionice to control the effect of these big bursts of disk activity on other processes? (Most MTA programs such as postfix and qmail do a lot of fsyncs - this will cause a lot of IO activity and could easily starve other processes on the same box?) Good luck Ed W
Re: [Dovecot] IMAP to Maildir Migration preserving UIDs?
Hi Yeah, that's what I'm going to do, except that I would have to proxy more than just IMAP and POP - it's a one-does-it-all kind of machine, accepting mail delivered from the outside, relaying outgoing mail, doing webmail, doing all these things very poorly... I have the choice of forcing all users to change to the new, dedicated servers doing these things, or reimplementing / proxying all of this on my new dovecot server which I so desperately want to keep neat and tidy... In that case I would suggest perhaps that the IP is taken over by a dedicated firewall box (running the OS of your choice). The firewall could then be used to port forward the services to the individual machines responsible for each service. This would give you the benefit that you could easily move other services off/around. We are clearly off topic for dovecot... Plenty of good firewall options. If you want small, compact and low power, then you can pick up a bunch of intel compatible boards around the low couple hundred £s mark fairly easily. Run your favourite distro and firewall on them. If you hadn't seen them before, I quite like Lanner for appliances, eg: http://www.lannerinc.com/x86_Network_Appliances/x86_Desktop_Appliances For example if you added a small appliance running linux which owns that IP, then you could add intrusion detection, bounce the web traffic to the windows box (or even just certain URLs, other URLs could go to some hypothetical linux box, etc), port forward the mail to the new dovecot box, etc, etc. The incremental price would be surprisingly low, but lots of extra flexibility? Just a thought Good luck Ed W
[Dovecot] Password auth scheme question with mysql
Hi, I have a current auth database using mysql with a password column in plain text. The config has default_pass_scheme = PLAIN specified. In preparation for a more adaptable system I changed a password entry from asdf to {PLAIN}asdf, but now auth fails. Works fine if I change it back to just asdf. (I don't believe it's a caching problem) What might I be missing? I was under the impression that the password column can include a {scheme} prefix to indicate the password scheme (presumably this also means a password cannot start with a {?). Is this still true when using mysql and default_pass_scheme ? Thanks for any hints? Ed W
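The {scheme} prefix convention Ed asks about can be illustrated with a toy parser. This is a sketch of the convention only, not Dovecot's actual implementation, and `split_scheme` is a made-up helper name; it also shows why a plaintext password that itself starts with `{` is ambiguous (it parses as a scheme prefix):

```c
/* Toy illustration of the "{SCHEME}hash" convention for passdb values.
 * NOT Dovecot's code - just a sketch of how a stored credential can
 * carry its own scheme name. */
#include <string.h>

/* Split "{PLAIN}asdf" into scheme ("PLAIN") and data ("asdf").
 * Returns 1 if a prefix was present, 0 if the default scheme applies. */
static int split_scheme(const char *stored, char *scheme, size_t schemelen,
                        const char **data)
{
    const char *end;
    size_t n;

    if (stored[0] != '{' || (end = strchr(stored, '}')) == NULL) {
        *data = stored;  /* no prefix: default_pass_scheme applies */
        return 0;
    }
    n = (size_t)(end - stored - 1);
    if (n >= schemelen)
        n = schemelen - 1;   /* truncate oversized scheme names */
    memcpy(scheme, stored + 1, n);
    scheme[n] = '\0';
    *data = end + 1;
    return 1;
}
```

With this convention, "{PLAIN}asdf" means "verify asdf with the PLAIN scheme", while a bare "asdf" falls back to whatever default_pass_scheme is configured.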
Re: [Dovecot] Password auth scheme question with mysql
On 24/01/2012 22:06, Ed W wrote: Hi, I have a current auth database using mysql with a password column in plain text. The config has default_pass_scheme = PLAIN specified. In preparation for a more adaptable system I changed a password entry from asdf to {PLAIN}asdf, but now auth fails. Works fine if I change it back to just asdf. (I don't believe it's a caching problem) What might I be missing? I was under the impression that the password column can include a {scheme} prefix to indicate the password scheme (presumably this also means a password cannot start with a {?). Is this still true when using mysql and default_pass_scheme ? Hmm, so I try: # doveadm pw -p asdf -s sha256 {SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= I enter this hash into my database column, then enabling debug logging I see this in the logs: Jan 24 22:40:44 mail1 dovecot: auth: Debug: cache(d...@mailasail.com,1.2.24.129): SHA256({PLAIN}asdf) != '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: sql(d...@blah.com,1.2.24.129): query: SELECT NULLIF(mail_host, '1.2.24.129') as proxy, NULLIF(mail_host, '1.2.24.129') as host, email as user, password, password as pass, home userdb_home, concat(home, '/', maildir) as userdb_mail, 200 as userdb_uid, 200 as userdb_gid FROM users WHERE email = if('blah.com'<>'','d...@blah.com','d...@blah.com@mailasail.com') and flag_active=1 Jan 24 22:40:44 mail1 dovecot: auth-worker: sql(d...@blah.com,1.2.24.129): Password mismatch (given password: {PLAIN}asdf) Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: md5_verify(d...@mailasail.com): Not a valid MD5-CRYPT or PLAIN-MD5 password Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: ssha256_verify(d...@mailasail.com): SSHA256 password too short Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: ssha512_verify(d...@mailasail.com): SSHA512 password too short Jan 24 22:40:44 mail1 dovecot: auth-worker: Warning: Invalid OTP data in passdb Jan 24 22:40:44 mail1 dovecot: 
auth-worker: Warning: Invalid OTP data in passdb Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: sql(d...@blah.com,1.2.24.129): SHA256({PLAIN}asdf) != '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Forgot to say: this is with dovecot 2.0.17 Thanks for any pointers Ed W
Re: [Dovecot] Password auth scheme question with mysql
On 24/01/2012 22:51, Ed W wrote: Hmm, so I try: # doveadm pw -p asdf -s sha256 {SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= I enter this hash into my database column, then enabling debug logging I see this in the logs: .. Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: sql(d...@blah.com,1.2.24.129): SHA256({PLAIN}asdf) != '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Gah. Ok, so I discovered the doveadm auth command: # doveadm auth -x service=pop3 demo asdf passdb: demo auth succeeded extra fields: user=d...@blah.com proxy host=1.2.24.129 pass={SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= So why do I get an auth failed and the log files I showed in my last email when I use telnet localhost 110 and then the commands: user demo pass asdf Help please...? Ed W
Re: [Dovecot] Password auth scheme question with mysql
On 24/01/2012 22:06, Ed W wrote: Hi, I have a current auth database using mysql with a password column in plain text. The config has default_pass_scheme = PLAIN specified. In preparation for a more adaptable system I changed a password entry from asdf to {PLAIN}asdf, but now auth fails. Works fine if I change it back to just asdf. (I don't believe it's a caching problem) What might I be missing? I was under the impression that the password column can include a {scheme} prefix to indicate the password scheme (presumably this also means a password cannot start with a {?). Is this still true when using mysql and default_pass_scheme ? Bahh. Partly figured this out now - sorry for the noise - looks like a config error on my side: I have traced this to my proxy setup, which appears not to work as expected. Basically all works fine when I test to the main server IP, but fails when I test localhost, since it triggers me to be proxied to the main IP address (same machine, just using the external IP). The error seems to be that I set the pass variable in my password_query to set the master password for the upstream proxied-to server. I can't actually remember now why this was required, but it was necessary to allow the proxy to work correctly in the past. I guess this assumption needs revisiting now since it can't be used if the plain password isn't in the database... For interest, here is my auth setup: password_query = SELECT NULLIF(mail_host, '%l') as proxy, NULLIF(mail_host, '%l') as host, \ email as user, password, \ password as pass, \ home userdb_home, concat(home, '/', maildir) as userdb_mail, \ 1234 as userdb_uid, 1234 as userdb_gid \ FROM users \ WHERE email = if('%d'<>'','%u','%u...@mailasail.com') and flag_active=1 mail_host in this case holds the IP of the machine holding the user's mailbox (hence it's easy to push mailboxes to a specific machine and the users get proxied to it) Sorry for the noise Ed W
Re: [Dovecot] Storing passwords encrypted... bcrypt?
On 05/01/2012 01:19, Pascal Volk wrote: On 01/03/2012 09:40 PM Charles Marcus wrote: Hi everyone, Was just perusing this article about how trivial it is to decrypt passwords that are stored using most (standard) encryption methods (like MD5), and was wondering - is it possible to use bcrypt with dovecot+postfix+mysql (or postgres)? Yes it is possible to use bcrypt with dovecot. Currently you have only to write your own password scheme plugin. The bcrypt algorithm is described at http://en.wikipedia.org/wiki/Bcrypt. If you are using Dovecot >= 2.0 'doveadm pw' supports the schemes: *BSD: Blowfish-Crypt *Linux (since glibc 2.7): SHA-256-Crypt and SHA-512-Crypt Some distributions have also added support for Blowfish-Crypt See also: doveadm-pw(1) If you are using Dovecot 2.0 you can also use any of the algorithms supported by your system's libc. But then you have to prefix the hashes with {CRYPT} - not {{BLF,SHA256,SHA512}-CRYPT}. I'm a bit late, but the above is absolutely correct. Basically the simplest solution is to pick a glibc which natively supports bcrypt (and the equivalent algorithm, but using SHA-256/512). Then you can effectively use any of these hashes in your /etc/{passwd,shadow} file. With the hash testing native in your glibc, a bunch of applications automatically acquire the ability to test passwords stored in these hash formats, dovecot being one of them. To generate the hashes in that format, choose an appropriate library for your web interface or whatever generates the hashes for you. There are even command line utilities (mkpasswd) to do this for you. I forget the config knobs (/etc/login.defs?), but it's entirely possible to also have all your normal /etc/shadow hashes generated in this format going forward if you wish. I posted some patches for uclibc recently for bcrypt and I think sha-256/512 already made it in. I believe several of the big names have similar patches for glibc. 
Just to attack some of the myths here:
- Salting passwords basically means adding some random garbage at the front of the password before hashing.
- Salting passwords prevents you using a big lookup table to cheat and instantly reverse the password.
- Salting has very little ability to stop you bruteforcing the password, ie it takes around the same time to figure out the SHA or blowfish hash of every word in some dictionary, regardless of whether you use the raw word or the word with some garbage in front of it.
- Using an iterated hash algorithm gives you a linear increase in the difficulty of bruteforcing passwords. So if you do a million iterations on each password, then it takes a million times longer to bruteforce (probably there are shortcuts to be discovered, so assume that this is the best case, but it's still a good improvement).
- Bear in mind that off-the-shelf GPU crackers will do on the order of 100-300 million hashes per second!! http://www.cryptohaze.com/multiforcer.php

The last statistic should be scary to someone who has some small knowledge of the number of unique words in the [english] language, even multiplying up for trivial permutations with numbers or punctuation... So in conclusion: everyone who stores passwords in hash form should make their way in an orderly fashion towards the door if they don't currently use an iterated hash function. No need to run, but it definitely should be on the todo list to apply where feasible. BCrypt is very common and widely implemented, but it would seem logical to consider SHA-256/512 (iterated) options where there is application support. Note I personally believe there are valid reasons to store plaintext passwords - this seems to cause huge criticism due to the ensuing disaster which can happen if the database is pinched, but it does allow for enhanced security in the password exchange, so ultimately it depends on where your biggest risk lies... Good luck Ed W
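The linear-slowdown point can be made concrete with a toy salted, iterated hash. FNV-1a is used here purely so the sketch is self-contained — it is NOT cryptographic, and a real scheme would iterate a primitive like Blowfish or SHA-256/512 as discussed above; all names and constants are illustrative:

```c
/* Toy demonstration of salting + iteration. FNV-1a is NOT a
 * cryptographic hash - it only shows the shape of the construction. */
#include <stdint.h>
#include <string.h>

static uint64_t fnv1a(const void *buf, size_t len, uint64_t h)
{
    const unsigned char *p = buf;
    size_t i;

    for (i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;  /* FNV 64-bit prime */
    }
    return h;
}

/* Salted, iterated hash: the salt defeats precomputed lookup tables,
 * while the iteration count multiplies the attacker's per-guess cost. */
static uint64_t iterated_hash(const char *password, const char *salt,
                              unsigned iterations)
{
    uint64_t h = 14695981039346656037ULL;  /* FNV offset basis */
    unsigned i;

    h = fnv1a(salt, strlen(salt), h);      /* mix in the salt first */
    h = fnv1a(password, strlen(password), h);
    for (i = 0; i < iterations; i++)       /* each round re-hashes */
        h = fnv1a(&h, sizeof(h), h);
    return h;
}
```

Verification recomputes the same chain from the stored salt and iteration count, so a legitimate login pays the cost once per attempt, while a dictionary attacker pays it once per candidate word — the linear factor described above.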
Re: [Dovecot] compressed mboxes very slow
On 12/01/2012 10:39, Kamil Jońca wrote: kjo...@o2.pl (Kamil Jońca) writes: I have some archive mails in gzipped mboxes. I could use them with dovecot 1.x without problems. But recently I have installed dovecot 2.0.12, and they are slow. very slow. Recently I have to read some compressed mboxes again, and no progress :( I took 2.0.17 sources and put some i_debug("#kjonca[" __FILE__ ",%d,%s] %d", __LINE__, __func__, ...some parameters ...); lines into istream-bzlib.c, istream-raw-mbox.c and istream-limit.c and found that: in istream-limit.c in function around lines 40-45: --8<---cut here---start--->8--- i_stream_seek(stream->parent, lstream->istream.parent_start_offset + stream->istream.v_offset); stream->pos -= stream->skip; stream->skip = 0; --8<---cut here---end--->8--- seeks the stream (calling i_stream_raw_mbox_seek in file istream-raw-mbox.c) and then (line 50) --8<---cut here---start--->8--- if ((ret = i_stream_read(stream->parent)) == -2) return -2; --8<---cut here---end--->8--- tries to read some data earlier in the stream, and with compressed mboxes this causes a reread of the file from the beginning. Just wanted to bump this since it seems interesting. Timo do you have a comment? I definitely see your point that skipping backwards in a compressed stream is going to be very CPU intensive. Ed W
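The behaviour Kamil traced can be boiled down to a toy model (nothing here is Dovecot code): a decompressor can only move forward, so a backward seek is satisfied by rewinding to offset 0 and decoding forward again, making each backward seek cost proportional to the absolute target offset rather than the distance moved:

```c
/* Toy model of a forward-only decompressed stream. bytes_decoded
 * accounts for the decoding work done, to show why backward seeks in
 * compressed data are so expensive. */
#include <stddef.h>

struct fwd_stream {
    size_t offset;        /* current uncompressed position */
    size_t bytes_decoded; /* total decoding work performed */
};

static void fwd_seek(struct fwd_stream *s, size_t target)
{
    if (target < s->offset) {
        /* can't step backwards inside compressed data: restart at 0 */
        s->offset = 0;
    }
    s->bytes_decoded += target - s->offset;  /* decode forward to target */
    s->offset = target;
}
```

Seeking to offset 1000 and then back to 999 decodes 1999 bytes in total, not 1001 — which is why the small backward seek in istream-limit.c turns into a full reread of the compressed mbox.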
Re: [Dovecot] OT Re: crashes on 2.0.16
On 22/12/2011 11:13, Charles Marcus wrote: On 2011-12-21 11:18 PM, Simon Brereton simon.brere...@buongiorno.com wrote: It would be interesting to chart the number of threads caused by each distro. I don't know who would have the least, but I suspect gentoo and centos would be out in front, Been using gentoo since about 2003 and never looked back... best and easiest distro to maintain, bar none, and the best support and documentation too. Wait... Back up... You mean there are *other* distributions of linux? I thought they were all just gentoo derivatives..?!! :-) Ed W