Re: [Dovecot] lmtp sometimes fails to deliver a message to all recipients

2012-04-09 Thread Timo Sirainen
On 4.4.2012, at 19.09, Artur Zaprzała wrote:

 lmtp(3344, foo@domain): Error: RU1WMnueeU9QDQABxjIODQ: sieve: 
 msgid=unspecified: failed to store into mailbox 'INBOX': Message was 
 expunged (guid)
 lmtp(3344, foo@domain): Error: RU1WMnueeU9QDQABxjIODQ: sieve: script 
 /vmail/domain/foo/.dovecot.sieve failed with unsuccessful implicit keep 
 (user logfile /vmail/domain/foo/.dovecot.sieve.log may reveal additional 
 details)
 Fixed in hg.
 
 Tested with Maildir. Works great. Thanks.
 
 The above problem was appearing when some recipients (including first one) 
 had a sieve filter with discard action for current message. In this case, 
 depending on the pattern of recipients having a sieve discard action, lmtp 
 can create more than one instance of the message for a few dozen recipients. 
 It would be nice if lmtp could create a single hardlinked instance of the 
 message even in this case.

The problem here isn't the discard action, but that Sieve is used at all. The 
hard linking happens currently only for users who don't have Sieve scripts. 
I've a plan to fix this, but it's not a simple fix and it's pretty low priority 
currently.



Re: [Dovecot] 2.1.3: Overly lax FETCH parsing

2012-04-09 Thread Timo Sirainen
On 5.4.2012, at 21.59, Michael M Slusarz wrote:

 While useful that Dovecot is more liberal about what it receives, RFC 3501 seems 
 pretty clear that incorrect FETCH parameters must return a BAD.  I can verify 
 that the above commands fail on Cyrus.

It's a SHOULD, not a MUST:

   Servers SHOULD enforce the syntax outlined in this specification
   strictly.  Any client command with a protocol syntax error, including
   (but not limited to) missing or extraneous spaces or arguments,
   SHOULD be rejected, and the client given a BAD server completion
   response.

But since it's not much trouble to fix it: 
http://hg.dovecot.org/dovecot-2.1/rev/19e09ab09383



Re: [Dovecot] Using a namespace for providing access to mail snapshots for user based on-demand restoration of email backups

2012-04-09 Thread Timo Sirainen
On 6.4.2012, at 1.46, Joseph Tam wrote:

 One other consideration (at least for me) is if the INBOX and
 personal mail folders are stored in two separate FS's.  It would be nice
 to fuse the two sets of backups under the same namespace, but I don't
 know how the namespace prefix matching works and whether you can define
 hierarchical namespaces like
 
   namespace {
     prefix = backup/inbox
     location = mbox:/path/to/inbox-snapdir/%u
     ...
   }
  
   namespace {
     prefix = backup/mail
     location = mbox:/path/to/mail-snapdir/%u
     ...
   }

You can define hierarchical namespaces, although they've probably not been used 
outside my few tests. Well, except that the autocreated shared/user/ namespaces 
are already children of the shared/ namespace, so I guess they should work.
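
Something like this might work as a starting point (untested; the paths and 
the extra settings here are only an illustration, not a verified configuration):

namespace {
  prefix = backup/inbox/
  separator = /
  location = mbox:/path/to/inbox-snapdir/%u
  list = children
  subscriptions = no
}
namespace {
  prefix = backup/mail/
  separator = /
  location = mbox:/path/to/mail-snapdir/%u
  list = children
  subscriptions = no
}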



Re: [Dovecot] Using a namespace for providing access to mail snapshots for user based on-demand restoration of email backups

2012-04-09 Thread Timo Sirainen
On 5.4.2012, at 20.02, Charles Marcus wrote:

 On 2012-04-05 12:37 PM, Tom Hendrikx t...@whyscream.net wrote:
 The first interesting point I'd see with this is that you supply the
 mail client with a near endless supply of folders, which would take a
 lot of caching space on the client's end, either (depending on the client
 and its configuration) from the moment that you enable this for them, or
 after someone starts searching in their 'time machine' for some old mail.
 
 Since we use Thunderbird, I can of course disable offline mode for everyone, 
 so the only time headers would be downloaded would be when the user selects 
 (or performs a search on) one (or more) of the folders.

Do they need to be accessible via Thunderbird, or maybe only via a webmail? Or 
perhaps a secondary (normally disabled?) TB account where you've specified a 
backup/ namespace prefix (which is normally hidden)?
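
For example, a namespace along these lines (an untested sketch, with a 
placeholder location) would stay out of sight for normal clients but could 
still be opened explicitly by its prefix:

namespace {
  prefix = backup/
  separator = /
  location = maildir:/path/to/snapshotdir/%u
  hidden = yes
  list = no
  subscriptions = no
}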



Re: [Dovecot] Using a namespace for providing access to mail snapshots for user based on-demand restoration of email backups

2012-04-09 Thread Timo Sirainen
On 5.4.2012, at 18.28, Charles Marcus wrote:

 The snapshots are stored with the following filesystem layout:
 
 /path/to/snapshotsdir/hourly.0
 ...
 /path/to/snapshotsdir/hourly.4
 /path/to/snapshotsdir/daily.0
..
 The 'names' (hourly, daily, weekly, monthly, yearly) are arbitrary (this is a 
 bit confusing to people new to rsnapshot), and would *not* be used for 
 displaying the mail folders to the users - it is the Date/Time stamps of each 
 of the snapshot dirs above that would be used to display the folder names 
 under the 'Time Machine' namespace. This is, I imagine, the part that will 
 need some actual coding by Timo to get working - maybe just some new config 
 variables added to the namespace code for mapping the date/time stamps of the 
 directories to user friendly folder names in the namespace.

I guess there could be a kind of "filter" fs layout that modifies the 
filesystem layout a bit and lets the underlying layout handle the rest:

namespace {
  location = maildir:/path/to/snapshotsdir:LAYOUT=timestamp
}

Although it's annoying that it's not possible to have per-layout settings 
currently.. But I guess if this was implemented as a plugin it would be enough 
to have:

plugin {
  timestamp_layout = maildir++
}

Re: [Dovecot] Director (was: Hints for a NFS-Setup)

2012-04-09 Thread Timo Sirainen
On 6.4.2012, at 16.58, Patrick Westenberg wrote:

 Hi again,
 
 I tried to set up a test environment like this:
 
 MTA --(lmtp)--\                /--(lmtp)-- backend1 --\
                -- director --                           -- NFS
 MTA --(lmtp)--/                \--(lmtp)-- backend2 --/


 IMAP-User -- frontend1 --\                /--(imap)-- backend1 --\
                            -- director --                         -- NFS
 IMAP-User -- frontend2 --/                \--(imap)-- backend2 --/
 
 but now I'm very confused. Is it actually possible to set up a host (or two) 
 as a director only, or will I have to enable the director service on each 
 frontend and MTA?

The cleanest way to run director is to have 2 or more servers running only 
the director itself. If you want to have fewer servers, it's also possible to 
add a Dovecot director configuration on any of the other servers as well, but 
that's conceptually more complex.

For the MTA you'd simply tell its LMTP client to connect to the director 
servers, which could be one of:

a) Load balancer's IP address

b) Host name that expands to all directors' IP addresses. If the first one is 
down, the LMTP client (hopefully! verify!) connects to the second one.
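
As a rough illustration (the host names and addresses below are examples 
only, not a complete director configuration):

# On the dedicated director hosts:
director_servers = 10.0.0.10 10.0.0.11
director_mail_servers = 10.0.1.1 10.0.1.2

# On the MTA side (Postfix main.cf), point the LMTP client at the directors;
# here directors.example.com would resolve to all of the director IPs:
virtual_transport = lmtp:inet:directors.example.com:24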

Re: [Dovecot] distributed mdbox

2012-04-09 Thread Timo Sirainen
Yeah, not caching then. I know Glusterfs people implemented some 
fixes/workarounds to make Dovecot work better. I don't know if all of those 
fixes are in the public glusterfs.

On 6.4.2012, at 18.39, James Devine wrote:

 As it turns out, I can duplicate this problem with a single dovecot server
 and a single gluster server using mdbox, so maybe it's not caching?  That
 being the case, I don't think director would help.
 
 On Thu, Apr 5, 2012 at 7:16 PM, James Devine fxmul...@gmail.com wrote:
 
 
 
 On Fri, Mar 23, 2012 at 7:39 AM, l...@airstreamcomm.net wrote:
 
 On Wed, 21 Mar 2012 09:56:12 -0600, James Devine fxmul...@gmail.com
 wrote:
 Anyone know how to set up dovecot with mdbox so that it can be used through
 shared storage from multiple hosts?  I've set up a gluster volume and am
 sharing it between 2 test clients.  I'm using postfix/dovecot LDA for
 delivery and I'm using postal to send mail between 40 users.  In doing
 this, I'm seeing these errors in the logs:
 
 Mar 21 09:36:29 test-gluster-client2 dovecot: lda(testuser34): Error: Fixed index file /mnt/testuser34/mdbox/storage/dovecot.map.index: messages_count 272 -> 271
 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=3768 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Append with UID 516, but next_uid = 517
 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=4220 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Extension record update for invalid uid=517
 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=5088 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Extension record update for invalid uid=517
 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Warning: fscking index file /mnt/testuser28/mdbox/storage/dovecot.map.index
 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser34): Warning: fscking index file /mnt/testuser34/mdbox/storage/dovecot.map.index
 
 
 This is my dovecot config currently:
 
 jdevine@test-gluster-client2:~ dovecot -n
 # 2.0.13: /etc/dovecot/dovecot.conf
 # OS: Linux 3.0.0-13-server x86_64 Ubuntu 11.10
 lock_method = dotlock
 mail_fsync = always
 mail_location = mdbox:~/mdbox
 mail_nfs_index = yes
 mail_nfs_storage = yes
 mmap_disable = yes
 passdb {
  driver = pam
 }
 protocols =  imap
 ssl_cert = </etc/ssl/certs/dovecot.pem
 ssl_key = </etc/ssl/private/dovecot.pem
 userdb {
  driver = passwd
 }
 
 I was able to get dovecot working across a gluster cluster a few weeks ago
 and it worked just fine.  I would recommend using the native gluster mount
 option (need to install gluster software on clients), and using distributed
 replicated as your replication mechanism.  If you're running two gluster
 servers you should have a replica count of two with distributed replicated.
 You should test first to make sure you can create a file in both mounts
 and see it from every mount point in the cluster, as well as interact with
 it.  It's also very important to make sure your servers are running with
 synchronized clocks from an NTP server.  Very bad things happen to a
 (dovecot or gluster) cluster out of sync with NTP.
 
 What storage method are you using?  I'm able to produce errors within
 seconds of starting postal with more than one thread



Re: [Dovecot] POP3 dele to Trash?

2012-04-09 Thread Timo Sirainen
On 7.4.2012, at 3.10, Kelsey Cummings wrote:

 On 04/06/12 16:40, Kelsey Cummings wrote:
 Has anyone already done this?  Should this be possible via a plugin?
 I see the deleted-to-trash imap plugin.  We are using Maildir if it
 makes a difference.
 
 Of course, this is exactly what the Lazy Expunge plugin does, isn't it?

Not exactly; the messages would go under the lazy_expunge namespace's prefix, 
i.e. to prefix/INBOX. But maybe close enough?

Otherwise it would require writing a new plugin.
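
For reference, a lazy_expunge setup would look roughly like this (an untested 
sketch; the prefix and location here are only placeholders):

mail_plugins = $mail_plugins lazy_expunge

namespace {
  prefix = .EXPUNGED.
  hidden = yes
  list = no
  location = maildir:~/Maildir/expunged
}

plugin {
  lazy_expunge = .EXPUNGED.
}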



Re: [Dovecot] Setting ACL for master user after login

2012-04-09 Thread Timo Sirainen
On 7.4.2012, at 6.48, PL MB wrote:

 I'd like to log in to normal user accounts as a master user but retain
 the normal users' ACLs.
 
 The Master Users page on the Dovecot 1.x wiki (1) says that I can set
 the master user's ACLs in a postlogin script.  The documentation for
 master users on the 2.x wiki (2) no longer has any statements about
 master user ACLs.
 
 Has something important in this regard changed?  Can I no longer
 override the ACLs in a postlogin script?

No, it's just that the ACL text was added there after wiki2 was forked. I've 
now updated http://master.wiki2.dovecot.org/Authentication/MasterUsers#ACLs

I'm pretty sure the userdb way works in v2.1, possibly also in v2.0 and 
probably not in v1.x.



Re: [Dovecot] Director pop-login and imap-login processes exiting on signal 11

2012-04-09 Thread Timo Sirainen
On 7.4.2012, at 10.13, Andy Dills wrote:

 Apr  7 02:18:05 mail-out06 dovecot: pop3-login: Fatal: master: 
 service(pop3-login): child 75029 killed with signal 11 (core not dumped - 
 set service pop3-login { drop_priv_before_exec=yes })

v2.1.3 proxying was buggy with SSL connections. Probably crashes because of 
that. I was supposed to release v2.1.4 already but..



Re: [Dovecot] Dovecot LDA/LMTP vs postfix virtual delivery agent and the x-original-to header

2012-04-09 Thread Timo Sirainen
On 5.4.2012, at 15.59, Charles Marcus wrote:

 Does anyone know if the use of LMTP (or even the dovecot LDA) still loses the 
 x-original-to header that the postfix vda adds and that I rely heavily on 
 (since I use a lot of aliases), and if it does, is there any solution to get 
 the original recipient added back in before final delivery?

LMTP adds a new Delivered-To: rcpt-to@address header when there is a single 
RCPT TO. You can force a single RCPT TO from the Postfix side by setting 
lmtp_destination_recipient_limit=1. LMTP doesn't add/remove/change the 
X-Original-To: header.
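
In other words, something like this on the Postfix side (assuming Dovecot's 
LMTP listens on the commonly used UNIX socket; adjust the transport to your 
own setup):

virtual_transport = lmtp:unix:private/dovecot-lmtp
lmtp_destination_recipient_limit = 1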



Re: [Dovecot] Outlook (2010) - Dovecot (IMAP) 10x slower with high network load and many folders

2012-04-09 Thread Stuart Henderson
On 2012-04-06, Thomas von Eyben thomasvoney...@gmail.com wrote:
 I am seeing 10x slower performance when trying to complete a
 send/receive from an Outlook 2010 client to Dovecot via IMAP, but
 only when the LAN is fully loaded with other traffic, e.g. file copying.
 It seems the problem is that Outlook is trying to identify folders
 that have changed since the last send/receive, thus traversing the
 hierarchy.

Not sure why it would only affect Outlook clients, but if your
switches are managed, you might like to check if flow control is
enabled and, if so, try disabling it.




[Dovecot] Director simplification?

2012-04-09 Thread Timo Sirainen
An idea I just had: Director basically works by assigning the backend IP 
address by:

 ip = vhosts[ md5(username) mod vhosts_count ].ip

The rest of director is about what happens when vhosts[] or vhosts_count 
changes. What about instead doing this on IP address level?

 ip = ip_pool[ md5(username) mod ip_pool_size ]

When a backend dies, you'll reassign the backend's IPs to other backends. Each 
backend should have many IPs. The main restriction here is that the IP pool 
cannot change without stopping the entire Dovecot. But if you initially 
allocate enough IPs, that shouldn't be a problem.
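
Purely as an illustration of that mapping (not actual director code; OpenSSL's 
MD5() just stands in for whatever hash would really be used, and the pool 
would come from configuration):

#include <stdint.h>
#include <string.h>
#include <openssl/md5.h>

/* Pick a backend IP for a user: hash the username and index into a
   fixed pool of addresses. */
static const char *
pick_backend_ip(const char *username,
                const char *const *ip_pool, unsigned int ip_pool_size)
{
	unsigned char digest[MD5_DIGEST_LENGTH];
	uint32_t hash;

	MD5((const unsigned char *)username, strlen(username), digest);
	/* use the first 4 bytes of the digest as the hash value */
	memcpy(&hash, digest, sizeof(hash));
	return ip_pool[hash % ip_pool_size];
}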

And the advantage of this over the current director? It guarantees that one 
director can't break the others, because they don't need to communicate with 
each other.

The disadvantage of course is that it's a little less flexible and requires 
more planning ahead. The IP address reassignment would also need some 
distro-specific scripts.

This could be implemented as an alternative director-lite or something. The 
doveadm director status-related commands could still work with it.



[Dovecot] v2.1.4 released

2012-04-09 Thread Timo Sirainen
http://dovecot.org/releases/2.1/dovecot-2.1.4.tar.gz
http://dovecot.org/releases/2.1/dovecot-2.1.4.tar.gz.sig

+ Added mail_temp_scan_interval setting and changed its default value
  from 8 hours to 1 week.
+ Added pop3-migration plugin for easily doing a transparent IMAP+POP3
  migration to Dovecot: http://wiki2.dovecot.org/Migration/Dsync
+ doveadm user: Added -m parameter to show some of the mail settings.
- Proxying SSL connections crashed in v2.1.[23]
- fts-solr: Indexing mail bodies was broken.
- director: Several changes to significantly improve error handling
- doveadm import didn't import messages' flags
- mail_full_filesystem_access=yes was broken
- Make sure IMAP clients can't create directories when accessing
  nonexistent users' mailboxes via shared namespace.
- Dovecot auth clients authenticating via TCP socket could have failed
  with bogus "PID already in use" errors.



[Dovecot] v2.0.20 released

2012-04-09 Thread Timo Sirainen
http://dovecot.org/releases/2.0/dovecot-2.0.20.tar.gz
http://dovecot.org/releases/2.0/dovecot-2.0.20.tar.gz.sig

+ doveadm user: Added -m parameter to show some of the mail settings.
- doveadm import didn't import messages' flags
- Make sure IMAP clients can't create directories when accessing
  nonexistent users' mailboxes via shared namespace.
- Dovecot auth clients authenticating via TCP socket could have failed
  with bogus "PID already in use" errors.



Re: [Dovecot] Director pop-login and imap-login processes exiting on signal 11

2012-04-09 Thread Andy Dills
On Mon, 9 Apr 2012, Timo Sirainen wrote:

 On 7.4.2012, at 10.13, Andy Dills wrote:
 
  Apr  7 02:18:05 mail-out06 dovecot: pop3-login: Fatal: master: 
  service(pop3-login): child 75029 killed with signal 11 (core not dumped - 
  set service pop3-login { drop_priv_before_exec=yes })
 
 v2.1.3 proxying was buggy with SSL connections. Probably crashes because 
 of that. I was supposed to release v2.1.4 already but..

Thanks Timo. I can confirm this is fixed in 2.1.4.

Andy

---
Andy Dills
Xecunet, Inc.
www.xecu.net
301-682-9972
---


Re: [Dovecot] Dovecot LDA/LMTP vs postfix virtual delivery agent and the x-original-to header

2012-04-09 Thread Charles Marcus

On 2012-04-09 3:33 AM, Timo Sirainen t...@iki.fi wrote:

On 5.4.2012, at 15.59, Charles Marcus wrote:


Does anyone know if the use of LMTP (or even the dovecot LDA) still
loses the x-original-to header that the postfix vda adds and that I
rely heavily on (since I use a lot of aliases), and if it does, is
there any solution to get the original recipient added back in
before final delivery?



LMTP adds a new Delivered-To: rcpt-to@address header when there is
a single RCPT TO. You can force a single RCPT TO from the Postfix side by
setting lmtp_destination_recipient_limit=1. LMTP doesn't
add/remove/change the X-Original-To: header.


Ok, thanks Timo... but...

Are you saying that this 'Delivered-To:' header can somehow be leveraged 
to provide the same info as the x-original-to header?


If not, since it was the postfix virtual delivery agent that added the 
x-original-to, and since using lmtp means I would not be using the 
postfix vda, is the appropriate place to add this header in dovecot's 
lmtp implementation (and if so, how hard would it be)? Or would this 
need to be done somehow on the postfix side (if so, I'll go ask on the 
postfix list)? Sorry for my ignorance - but as I said, I rely on this 
header (I use a ton of aliases, and without it I can't see the original 
(alias) recipient), so I need to determine if I'm going to be able to 
use lmtp or not (obviously, I would much prefer to do so)...


Thanks again Timo...

--

Best regards,

Charles


Re: [Dovecot] Dovecot LDA/LMTP vs postfix virtual delivery agent and the x-original-to header

2012-04-09 Thread Timo Sirainen
On 9.4.2012, at 15.50, Charles Marcus wrote:

 LMTP adds a new Delivered-To: rcpt-to@address header when there is
 a single RCPT TO. You can force a single RCPT TO from the Postfix side by
 setting lmtp_destination_recipient_limit=1. LMTP doesn't
 add/remove/change the X-Original-To: header.
 
 Ok, thanks Timo... but...
 
 Are you saying that this 'Delivered-To:' header can somehow be leveraged to 
 provide the same info as the x-original-to header?

I guess X-Original-To is the same address as what Postfix sees as the original 
RCPT TO address before alias expansion and such? In that case, see my mail 
from today on the Postfix list..



Re: [Dovecot] Dovecot LDA/LMTP vs postfix virtual delivery agent and the x-original-to header

2012-04-09 Thread Charles Marcus

On 2012-04-09 8:53 AM, Timo Sirainen t...@iki.fi wrote:

I guess X-Original-To is the same address as what Postfix sees as the
original RCPT TO address before alias expansion and such? In that
case, see my mail from today on the Postfix list.


Yep... and hoping that you and Wietse can work out some way to support it...

Thanks for participating in the discussion over there... :)

--

Best regards,

Charles


[Dovecot] mount

2012-04-09 Thread Luigi Rosa

I have a Dovecot installation on CentOS 5 where I sometimes mount external
filesystems in /mnt.

All Dovecot data is in the local / filesystem; nothing is mounted elsewhere.

After upgrading to 1.2.4 I rebooted the system for other reasons, and at
startup I got this in the Dovecot log:

 master: Warning: /mnt is no longer mounted. If this is intentional, remove it
with doveadm mount


There is no /mnt entry in /etc/fstab and nothing mounted under /mnt.

I THINK the last time I used /mnt to mount something was a few weeks ago, to
update VMware tools.


Is there a way to tell Dovecot to ignore /mnt?




Ciao,
luigi

-- 
+--[Luigi Rosa]--

$100 invested at 7% interest for 100 years will become $100,000,
at which time it will be worth absolutely nothing.


[Dovecot] Username from rfc822Name subject alternative name

2012-04-09 Thread Бранко Мајић

Hello,

I'm looking into adding support for extracting the username from the client 
certificate's rfc822Name (from the subjectAltName extension).


The question I have is: what would be the best approach to do this? 
The current implementation is fairly clean, since it just goes 
through the subject name, extracting the values with 
X509_NAME_get_text_by_NID (while the NID is obtained with OBJ_txt2nid). If I 
were to add this, it's bound to make the code a little bit more 
complicated, since SANs can't be retrieved in the same way.
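
Roughly, instead of X509_NAME_get_text_by_NID, the subjectAltName entries have 
to be walked via X509_get_ext_d2i. An untested sketch, with error handling 
omitted (ASN1_STRING_data() is ASN1_STRING_get0_data() in newer OpenSSL 
versions):

#include <string.h>
#include <openssl/x509.h>
#include <openssl/x509v3.h>

/* Return a copy of the first rfc822Name found in the certificate's
   subjectAltName extension, or NULL if there is none. */
static char *cert_get_rfc822_name(X509 *cert)
{
	GENERAL_NAMES *gnames;
	char *result = NULL;
	int i;

	gnames = X509_get_ext_d2i(cert, NID_subject_alt_name, NULL, NULL);
	if (gnames == NULL)
		return NULL;

	for (i = 0; i < sk_GENERAL_NAME_num(gnames); i++) {
		GENERAL_NAME *gn = sk_GENERAL_NAME_value(gnames, i);

		if (gn->type == GEN_EMAIL) {
			result = strdup((const char *)ASN1_STRING_data(gn->d.rfc822Name));
			break;
		}
	}
	GENERAL_NAMES_free(gnames);
	return result;
}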


So far, in terms of options, I can see the following:

1. Create a distinct configuration option for 
ssl_cert_username_field (i.e. specify something like sanrfc822Name to 
have Dovecot extract the username from the designated alternative name).


2. Make the current code fall back to the rfc822Name SAN if emailAddress is 
specified for ssl_cert_username_field (a less invasive change to the code, 
but less flexible as well).


Any input/recommendations/direction is welcome. I wanted to write a patch 
first and then submit it, but I think it might be better to first check what 
the Dovecot maintainers/devs would prefer.


Best regards

--
Branko Majic
Jabber: bra...@majic.rs
Please use only Free formats when sending attachments to me.



Re: [Dovecot] mount

2012-04-09 Thread Timo Sirainen
On 9.4.2012, at 16.44, Luigi Rosa wrote:

 I have a Dovecot installation on CentOS 5 where I sometimes mount external fs
 in /mnt
 
 Every Dovecot data is in local / file system, nothing is mounted elswhere
..
 Is there a way toi tell Dovecot to ignore /mnt ?

doveadm mount add /mnt ignore



Re: [Dovecot] mount

2012-04-09 Thread Luigi Rosa

Timo Sirainen said the following on 09/04/12 15:57:

 Is there a way toi tell Dovecot to ignore /mnt ?
 doveadm mount add /mnt ignore

Thanks, next time I will RTFM first.



Ciao,
luigi

-- 
+--[Luigi Rosa]--

fortune cookie [UNIX] n. A random quote, item of trivia, joke or maxim
  printed to the user's tty at login time or (less commonly) at logout time.
  Items from this jargon file have often been used as fortune cookies.
--Jargon File


[Dovecot] Multiply mailboxes vs one huge

2012-04-09 Thread Alexander Chekalin

Hello,

as I need to store a lot of messages on my IMAP server (on the order of 
900K-1000K; this is an archive covering some period, maybe a year or so), I 
see some slowness in dealing with such a huge amount. I mainly need to do 
searches like "get all messages from us...@domain1.com to us...@domain2.tld 
received between date1 and date2".


So I'm really interested in whether it would be wise to

a) split all messages into several smaller mailboxes (per-month, or 
per-day, or create a 2-level structure like month/day/)
b) use the dbox (vs the currently used mbox) storage scheme (I'm wary of 
mdbox, as I'm still not sure I'll be able to parse it with scripts later, 
just in case)


Dovecot is the latest one (2.1.3). No compression on the Dovecot side, but 
the mails are on a ZFS volume with compression on. I ask this mainly because 
I don't fully understand how Dovecot indexes work.


I also tested another approach: using my own index somewhere outside 
Dovecot which stores the mapping between emails and UIDs, and between dates 
and UIDs, so I'd simply query my index for the things I need. But then, 
that's exactly what the IMAP index can do, so I'd simply be slowing my 
search down, wouldn't I? The only reason I'm thinking about my own index is 
that I won't use 'all headers' as the search scope; I only need to deal with 
From:, To:, Cc:, Bcc: (if any), Received (if I can't find the from/to info 
anywhere else), and the date field(s) - I doubt IMAP will take care of that 
for me.


Yours,
  Alexander


Re: [Dovecot] Multiply mailboxes vs one huge

2012-04-09 Thread Timo Sirainen
On 9.4.2012, at 17.58, Alexander Chekalin wrote:

 as I need to store a lot of messages on my IMAP server (on the order of 
 900K-1000K; this is an archive covering some period, maybe a year or so), I 
 see some slowness in dealing with such a huge amount. I mainly need to do 
 searches like "get all messages from us...@domain1.com to us...@domain2.tld 
 received between date1 and date2".

So by "received between date" you mean the IMAP INTERNALDATE, as opposed to 
the Date: header? These kinds of searches are looked up from the index/cache 
files, and the performance should be exactly the same with all of the mailbox 
formats. It would be useful to figure out what exactly is causing the 
slowness. Is the SEARCH command slow? Something else? Is the slowness in user 
CPU, system CPU or disk I/O?
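
For example, assuming made-up addresses and dates, an INTERNALDATE-based 
search would look like:

  a1 SEARCH FROM "user1@domain1.com" TO "user2@domain2.tld" SINCE 1-Jan-2012 BEFORE 1-Mar-2012

whereas SENTSINCE/SENTBEFORE would match against the Date: header instead.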



[Dovecot] per user sieve after filters

2012-04-09 Thread Andre Rodier

Hello,

Thanks for dovecot, as it's still the best mail server.

I'd like to use per-user sieve_after scripts.

Can I put something like this in my dovecot config file:

  sieve_after = %h/Mails/Sieve/After/

It would be very useful for me, as I'd like to add a vacation script to be
executed from this place.

Kind regards,
André. 


Re: [Dovecot] per user sieve after filters

2012-04-09 Thread Stephan Bosch

On 4/9/2012 6:26 PM, Andre Rodier wrote:

Hello,

Thanks for dovecot, as it's still the best mail server.

I'd like to use per-user sieve_after scripts.

Can I put something like this in my dovecot config file:

   sieve_after = %h/Mails/Sieve/After/

It would be very useful for me, as I'd like to add a vacation script to be
executed from this place.


I must say I've never tested something like that, but it should work.

Regards,

Stephan.
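
One detail (untested): sieve_after is a plugin setting, so it would go inside 
the plugin block, e.g.:

plugin {
  sieve_after = %h/Mails/Sieve/After/
}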


Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-09 Thread Emmanuel Noobadmin
On 4/9/12, Stan Hoeppner s...@hardwarefreak.com wrote:
 So it seems you have two courses of action:
 1.  Identify individual current choke points and add individual systems
 and storage to eliminate those choke points.

 2.  Analyze your entire workflow and all systems, identifying all choke
 points, then design a completely new well integrated storage
 architecture that solves all current problems and addresses future needs

I started to do this and realized I have a serious mess on my hands, one
that makes delving into other people's uncommented source code seem like a
joy :D

Management added to this by deciding that if we're going to offload the
email storage to network storage, we might as well consolidate
everything into that shared storage system so we don't have TBs of
un-utilized space. So I might not even be able to use your tested XFS
+ concat solution, since it may not be optimal for VM images and
databases.

As the requirements have changed, I'll stop asking here, as it's no longer
really relevant just for Dovecot purposes.

 You are a perfect candidate for VMware ESX.  The HA feature will do
 exactly what you want.  If one physical node in the cluster dies, HA
 automatically restarts the dead VMs on other nodes, transparently.
 Clients will have to reestablish connections, but everything else
 will pretty much be intact.  Worst-case scenario will possibly be a few
 corrupted mailboxes that were being written when the hardware crashed.

 A SAN is required for such a setup.

Thanks for the suggestion, I will need to find some time to look into
this although we've mostly been using KVM for virtualization so far.
Although the SAN part will probably prevent this from being accepted
due to cost.

 My lame excuse is that I'm just the web
 dev who got caught holding the server admin potato.

 Baptism by fire.  Ouch.  What doesn't kill you makes you stronger. ;)

True, but I'd hate to be the customer who gets to pick up the pieces
when things explode due to unintended negligence by a dev trying to
level up by multi-classing as an admin.

 physical network interface.  You can do some of these things with free
 Linux hypervisors, but AFAIK the poor management interfaces for them
 make the price of ESX seem like a bargain.

Unfortunately, for the usual kind of customers we have here, spending that
kind of budget isn't justifiable. The only reason we're providing
email services is because customers wanted freebies and felt there
was no reason why we couldn't give them email on our servers; they
are all servers, after all.

So I have to make do with OTS commodity parts and free software for
the most part.


Re: [Dovecot] Multiply mailboxes vs one huge

2012-04-09 Thread Alexander Chekalin

Hello, Timo,

I feel a bit unsure about which 'date' I mean, since I've always only 
considered the date from the Date: header. But which value is used as 
INTERNALDATE then? Since I use (for now) the maildir storage type, all the 
metadata is stored in the messages themselves. So I expect Dovecot somehow 
parses and uses the Date: field itself, or am I wrong about that? And what 
about messages without a Date header at all?


But the Date isn't the worst thing. Look, to make my archive work I set up 
a server-side filter which redirects every message it processes to my 
archive mailbox as well. This way, each message (after such a redirect) is 
addressed to 'archive@mydomain' instead of its original destination address. 
The only place I can find the original recipient is by parsing the 
'Received' field(s).


As I understand it, none of these headers (Date or Received) are used for 
SEARCH anyway, and this was the idea behind creating my own index. But wait, 
is there any way I can make Dovecot also index additional fields (yes, I'm 
talking about 'Received')? Then that would be the best solution!


Thank you, Timo, for your work,
yours,
Alexander

09.04.2012 18:03, Timo Sirainen wrote:

On 9.4.2012, at 17.58, Alexander Chekalin wrote:


as I need to store a lot of messages on my IMAP server (on the order of 
900K-1000K; this is an archive covering some period, maybe a year or so), I 
see some slowness in dealing with such a huge amount. I mainly need to do 
searches like "get all messages from us...@domain1.com to us...@domain2.tld 
received between date1 and date2".

So by "received between date" you mean the IMAP INTERNALDATE, as opposed to 
the Date: header? These kinds of searches are looked up from the index/cache 
files, and the performance should be exactly the same with all of the mailbox 
formats. It would be useful to figure out what exactly is causing the 
slowness. Is the SEARCH command slow? Something else? Is the slowness in user 
CPU, system CPU or disk I/O?





Re: [Dovecot] v2.1.4 released - broken

2012-04-09 Thread Marc Perkel


I'm seeing this immediately after upgrading from 2.1.3

Apr 09 18:22:43 imap(ch...@powerpage.org): Error: user 
ch...@powerpage.org: Initialization failed: Initializing mail storage 
from mail_location setting failed: Home directory not set for user. 
Can't expand ~/ for mail root dir in: 
/vhome/powerpage.org/home/chris:INDEX=/email/imap-cache/powerpage.org-chris


mail_location = maildir:/vhome/%d/home/%n:INDEX=/email/imap-cache/%d-%n



[Dovecot] Authentication mechanism and Password scheme

2012-04-09 Thread Костырев Александр Алексеевич
Good day!
I'm just trying to check that my understanding of the subject is correct.

So, if I want to store passwords in my database hashed with the SSHA512 
scheme, is my only choice of authentication mechanism plaintext?


Thanks in advance!



--
Best regards,
Alexander Kostyrev
system administrator
ZAO Server-Center
tel.: (423) 262-02-62 (ext. 2037)
fax: (423) 262-02-10




Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-09 Thread Stan Hoeppner
On 4/9/2012 2:15 PM, Emmanuel Noobadmin wrote:

 Unfortunately, the usual kind of customers we have here, spending that
 kind of budget isn't justifiable. The only reason we're providing
 email services is because customers wanted freebies and they felt
 there was no reason why we can't give them emails on our servers, they
 are all servers after all.
 
 So I have to make do with OTS commodity parts and free software for
 the most parts.

OTS meaning you build your own systems from components?  Too few in the
business realm do so today. :(

It sounds like budget overrides redundancy then.  You can do an NFS
cluster with SAN and GFS2, or two servers with their own storage and
DRBD mirroring.  Here's how to do the latter:
http://www.howtoforge.com/high_availability_nfs_drbd_heartbeat

The total cost is about the same for each solution as an iSCSI SAN array
of drive count X is about the same cost as two JBOD disk arrays of count
X*2.  Redundancy in this case is expensive no matter the method.  Given
how infrequent host failures are, and the fact your storage is
redundant, it may make more sense to simply keep spare components on
hand and swap what fails--PSU, mobo, etc.

Interestingly, I designed a COTS server back in January to handle at
least 5k concurrent IMAP users, using best of breed components.  If you
or someone there has the necessary hardware skills, you could assemble
this system and simply use it for NFS instead of Dovecot.  The parts list:
secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=17069985

In case the link doesn't work, the core components are:

SuperMicro H8SGL G34 mobo w/dual Intel GbE, 2GHz 8-core Opteron
32GB Kingston REG ECC DDR3, LSI 9280-4i4e, Intel 24 port SAS expander
20 x 1TB WD RE4 Enterprise 7.2K SATA2 drives
NORCO RPC-4220 4U 20 Hot-Swap Bays, SuperMicro 865W PSU
All other required parts are in the Wish List.  I've not written
assembly instructions.  I figure anyone who would build this knows what
s/he is doing.

Price today:  $5,376.62

Configuring all 20 drives as a RAID10 LUN in the MegaRAID HBA would give
you a 10TB net Linux device and 10 stripe spindles of IOPS and
bandwidth.  Using RAID6 would yield 18TB net and 18 spindles of read
throughput, however parallel write throughput will be at least 3-6x
slower than RAID10, which is why nobody uses RAID6 for transactional
workloads.

If you need more transactional throughput you could use 20 WD6000HLHX
600GB 10K RPM WD Raptor drives.  You'll get 40% more throughput and 6TB
net space with RAID10.  They'll cost you $1200 more, or $6,576.62 total.
 Well worth the $1200 for 40% more throughput, if 6TB is enough.

Both of the drives I've mentioned here are enterprise class drives,
feature TLER, and are on the LSI MegaRAID SAS hardware compatibility
list.  The price of the 600GB Raptor has come down considerably since I
designed this system, or I'd have used them instead.

Anyway, lots of option out there.  But $6,500 is pretty damn cheap for a
quality box with 32GB RAM, enterprise RAID card, and 20x10K RPM 600GB
drives.

The MegaRAID 9280-4i4e has an external SFF8088 port.  For an additional
$6,410 you could add an external Norco SAS expander JBOD chassis and 24
more 600GB 10K RPM Raptors, for 13.2TB of total net RAID10 space, and 22
10k spindles of IOPS performance from 44 total drives.  That's $13K for
a 5K random IOPS, 13TB, 44 drive NFS RAID COTS server solution,
$1000/TB, $2.60/IOPS.  Significantly cheaper than an HP, Dell, IBM
solution of similar specs, each of which will set you back at least 20
large.

Note the chassis I've spec'd have single PSUs, not the dual or triple
redundant supplies you'll see on branded hardware.  With a relatively
stable climate controlled environment and a good UPS with filtering,
quality single supplies are fine.  In fact, in the 4U form factor single
supplies are usually more reliable due to superior IC packaging and
airflow through the heatsinks, not to mention much quieter.

-- 
Stan


Re: [Dovecot] v2.1.4 released - broken

2012-04-09 Thread Gerhard Wiesinger

On 10.04.2012 03:28, Marc Perkel wrote:


I'm seeing this immediately after upgrading from 2.1.3

Apr 09 18:22:43 imap(ch...@powerpage.org): Error: user 
ch...@powerpage.org: Initialization failed: Initializing mail storage 
from mail_location setting failed: Home directory not set for user. 
Can't expand ~/ for mail root dir in: 
/vhome/powerpage.org/home/chris:INDEX=/email/imap-cache/powerpage.org-chris


mail_location = maildir:/vhome/%d/home/%n:INDEX=/email/imap-cache/%d-%n



I'm guessing this is caused by the following bugfix:
http://hg.dovecot.org/dovecot-2.1/rev/86e6dc46a80e

Does reverting this patch help?

What is your config (doveconf -n)?

Ciao,
Gerhard