
Re: [Dovecot] Zlib plugin - when does it make sense?

2013-11-25 Thread Jan-Frode Myklebust
On Mon, Nov 25, 2013 at 09:53:14AM +0100, Frerich Raabe wrote:
 
 I run a small IMAP server for a dozen guys in the office, serving
 about 55GB of Maildir. I recently became aware of the Zlib plugin (
 http://wiki2.dovecot.org/Plugins/Zlib ) and wondered
 
 1. given that there is about zero CPU load on my IMAP server, is
 enabling the plugin a no-brainer or are there other things (except
 CPU load) to consider?

Yes, it's a no-brainer. I can't remember what the CPU load was before we
enabled zlib, but our CPUs are running 80% idle (6 servers, a mix of IBM
x3550 and x346, serving 15TB of mdbox; we were still serving maildir with
zlib a year ago).

 
 2. For enabling the plugin, I suppose you compress all the existing
 mail just once and then add 'zlib' to mail_plugins in order to have
 all future incoming mail saved?

You don't strictly need to compress existing mail. It should handle a
mix of compressed and non-compressed messages in the same maildir. 
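For reference, a one-off compression pass over the existing mail can be sketched roughly like this (illustrative only: `compress_maildir` is a made-up helper name, and it should only run while the mailbox is quiescent, e.g. under a maildir lock, since Dovecot uses the file mtime as the delivery date and expects the filename to stay unchanged):

```shell
# Gzip each message in place, keeping the original filename and mtime.
compress_maildir() {
    maildir=$1
    find "$maildir" -type f -path '*/cur/*' | while read -r f; do
        gzip -6 -c "$f" > "$f.tmp" &&
        touch -r "$f" "$f.tmp" &&   # preserve mtime = delivery date
        mv "$f.tmp" "$f"
    done
}

# Usage (path is an example):
# compress_maildir /home/bob/Maildir
```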



  -jf


Re: [Dovecot] Zlib plugin - when does it make sense?

2013-11-25 Thread Jan-Frode Myklebust
On Mon, Nov 25, 2013 at 02:47:33PM +0100, Frerich Raabe wrote:
 
 Interesting! What zlib compression level did you use? I figure even low
 levels would work rather well for plain text.

plugin {
  zlib_save_level = 6  # 1..9
  zlib_save = gz       # or bz2
}

 Now that I think about plain text: I also have the fts_solr plugin
 enabled to speed up the occasional full-text search - does the indexing
 still work as before when the mail is compressed, i.e. is the
 reading of the mail centralized so that the individual plugins don't
 actually know or care? Or would I need to make sure I use
 'zlib-aware' plugins?

I don't have fts (yet).


  -jf


Re: [Dovecot] highly available userdb

2013-11-18 Thread Jan-Frode Myklebust
On Wed, Nov 13, 2013 at 01:52:09PM +1000, Nick Edwards wrote:
 On 11/12/13, Jan-Frode Myklebust janfr...@tanso.net wrote:
  My installation is only serving 1/10 of your size, but long time ago we
  migrated off mysql for userdatabase, and over to LDAP. The MySQL data
  source had issues (not dovecot related), and didn't seem like the right
  tool for the job.
 
 
 A database is a database, a master is a master, and a slave is a slave

And some databases are better suited to some tasks than others. For
example, LDAP gives dovecot failover between servers for free, handled in
the LDAP client libraries. One could argue that you should be complaining
to the MySQL developers about supporting server failover in their client
library, rather than to Dovecot.
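To make that concrete, the failover needs nothing Dovecot-specific: listing several URIs in dovecot-ldap.conf.ext is enough, and the LDAP client library walks them in order (hostnames here are made up):

```
# dovecot-ldap.conf.ext -- space-separated list, tried in order
uris = ldap://ldap1.example.net ldap://ldap2.example.net
```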

 our mysql has never had a problem, not a single one, it's why I'm so annoyed
 dovecot is talking to the master when it doesn't need to



  -jf


Re: [Dovecot] Dovecot MTA

2013-11-11 Thread Jan-Frode Myklebust
On Fri, Nov 08, 2013 at 04:22:13PM +0100, Timo Sirainen wrote:
 
 Ah, I had actually been mostly just thinking about inbound SMTP features.
 It should of course support outbound SMTP as well, but I’m less familiar
 about what functionality would be useful for that.

Outbound is mostly the same as inbound, except for the requirement of
authentication and TLS on the submission port (587/tcp) and the
SSL-wrapped SMTPS port (465/tcp).

Features we'd want:

* authentication
* per-user rate limiting (might be handled by the Dovecot MTA instead of
  an external program?)
* spam filtering through a milter?
* virus filtering through a milter?
* per-user geo-blocking would be great!
* protection from password guessing?

Plus it would be great if it could check whether the authentication is
still valid when re-using a connection; see this missing feature in
postfix:

http://postfix.1071664.n5.nabble.com/Solution-to-SMTPAuth-compromised-accounts-td61415.html



  -jf


Re: [Dovecot] Dovecot MSA - MTA

2013-11-11 Thread Jan-Frode Myklebust
On Sun, Nov 10, 2013 at 12:59:27AM +0100, Reindl Harald wrote:
 
 everywhere else you have sender-dependent relay hosts, RCPT dependent 
 relayhosts
 and all sort of aliases which you *do not* want treated different between
 incoming mail from outside or a internal server and submission mail
 
 the only real difference between submission is that it is authenticated
 and because the authentication a few restrictions are not applied

I don't quite agree. Our smarthosts have very little local knowledge
about where to route messages; they only follow MX records. It's the
incoming SMTP servers that have all the knowledge, and they need to be
robust against millions of broken mailservers and spam-bots on the
internet.

The fact that submission is authenticated is an opportunity to integrate
better with the userdb than postfix and exim do: native quotas,
brute-force protection, per-user geo-blocking, and other features that are
difficult to achieve with the general-purpose SMTP servers.


 
 but in usual there is and must not be any difference in the mail-routing

The incoming SMTP servers route to dovecot-LMTP, internal Exchange, and
more. Outgoing would only need to follow MX records (even for messages
heading back home to us).



  -jf


Re: [Dovecot] server side private/public key

2013-11-11 Thread Jan-Frode Myklebust
A server-side private key probably doesn't protect against much, but a way
for users to upload a public key and have all messages automatically
encrypted on receipt might have value. It limits exposure for messages at
rest.


   -jf

 On 11 Nov 2013, at 15:21, Peter Mogensen a...@one.com wrote:
 
 Christian Felsing wrote:
  Please consider to add server side private/public key encryption for 
  incoming mails.
  If client logs on, the password is used to unlock users server side private 
  key.
  If mail arrives from MTA or any other source, mail is encrypted with users 
  public key.
  Key pair should be located in LDAP or SQL server. PGP and S/MIME should be 
  supported.
 
 This is for the situation where the NSA or other organizations
 insistently ask the admin for users' mail.
 
 So ... exactly which security threat are you thinking about preventing here?
 
 This won't protect against:
 * NSA listening in on the mails when they arrive.
 * NSA taking a backup of your mails and wait for your first attempt to read 
 them - at which time they'll have your private key in plain text.
 
 It seems like a much wider protection to just keep your private key for
 yourself.
 
 /Peter
 


Re: [Dovecot] highly available userdb (Was: Re Dovecot MTA)

2013-11-11 Thread Jan-Frode Myklebust
My installation is only serving 1/10 of your size, but a long time ago we
migrated our user database off MySQL and over to LDAP. The MySQL data
source had issues (not dovecot related), and didn't seem like the right
tool for the job.

Initially we kept MySQL as the authoritative database of our users, and
mirrored the user details over to LDAP/389ds -- which we pointed dovecot
and postfix at. Eventually we migrated completely off MySQL as the user
database. LDAP/389ds gives us easy multimaster replication, easy
integration with dovecot, postfix, etc., client-side support for failover
between servers, and it is very fast. I don't think we've ever had any
issue with the userdb since migrating to LDAP.

Our two 389ds servers are doing about 80 LDAP bind() authentications per
second (plus the dovecot auth cache is masking a lot more) and 300
searches/s, and each is using about 20% of a single CPU core.

So, I would very much recommend looking into whether something similar
can work for you.



  -jf

On Mon, Nov 11, 2013 at 03:24:46PM +1000, Edwardo Garcia wrote:
 My company have 36 dovecots, one biggest ISP in country 3 million user,
 agree with Nick  poster, we had stop use dovecot load balance because too
 bad effect on primary database, now use single localhost, we have script
 run every 30 second to test login, if fail sleep 30 second, try again, fail
 and down ethernet interface so hardware load balancer see server not answer
 and can not use, nagios soon tell us of problem, very very bad and stupid
 way, but only option is safe, we have look at alternative to dovecot for
 this and still look, not happy with unreliable softwares to imitate
 feature.
 
 big network mean big time locate and fix problem when arise so you be good
 to say no extra point of failure. Too many cog in chain eventually lead to
 problem.
 
 Timo please reconsider feature
 
 
 On Sun, Nov 10, 2013 at 4:21 PM, Nick Edwards nick.z.edwa...@gmail.comwrote:
 
  On 11/9/13, Timo Sirainen t...@iki.fi wrote:
   On 9.11.2013, at 5.11, Nick Edwards nick.z.edwa...@gmail.com wrote:
  
   On 11/9/13, Michael Kliewe mkli...@gmx.de wrote:
   Hi Timo,
  
   I would also, like others, see you mainly working on Dovecot as an IMAP
   server. As far as I can see there are many things on the roadmap, and I
   hope many more will be added (for example a built-in health-checker for
   director backends).
  
   Only if you have enough personal resources and Dovecot as an IMAP
  server
   will not lose your attention, I would love to see your expertise in
   making a better MTA.
  
   Yes, some of us have been waiting for some years now, for a
   configurable change to alter the method of dovecots method of
   failover, which is just load balancing between servers rather than
   true failover, like postix, I see now why it gets no importance.
  
   Ah, you’re talking about SQL connections. Had to look up from old mails
  what
   you were talking about. It hasn’t changed, because I think the current
   behavior with load balancing + failover is more useful than
  failover-only.
   And you can already do failover-only with an external load balancer.
  Sure,
   Dovecot could also implement it, but it’s not something I especially
  want to
   spend time on implementing.
  
 
  My employer has 18 pop3 servers, one imap customer access (imap here
  has so little use we cant justify a redundant machine, not for 11,
  yes, eleven only users after 2 years of offering imap , and 2 imap
  (webmail).
 
  So, each server has a replicated mysql database
 
  If I use your better method, I have 18 machines polling themselves
  and the MASTER server, this needlessly slams the daylights out of  the
  master as I'm sure even you can imagine.
 
  We have 4 customer relay smtp servers and 4 inbound smtp servers,
  postfix, using its failover and better method, means they only hit
  the master server when the local mysql unix socket is not listening,
  ie, mysqld  is stopped -  the master server NEVER sees them.
 
  How is your method, better than true failover like method used by
  postfix, your methods is load balancing, it is not failover, and
  causes problems on larger networks
 
  I'm sure in some cases most people using it are happy and won't have
  performance increases noticeable, but if you are going to offer a
  backup for auth, it really should be able to configure, if we want it
  to DoS our master, or only talk to master when it cant talk local, so
  I think it should be matter you need to consider, else you are only
  half arsed doing it, and like implying we should go introduce a
  further point of failure, by using yet more third party softwares
 


Re: [Dovecot] mdbox - healthy rotation size vs default

2013-08-26 Thread Jan-Frode Myklebust
On Mon, Aug 26, 2013 at 03:31:20PM -0400, Charles Marcus wrote:
 On 2013-08-26 2:58 PM, Michael Grimm trash...@odo.in-berlin.de wrote:
 As a very rough estimate I do estimate a 5% waste of space
 regarding deleted messages. But, my handful users are very
 disciplined in purging their deleted messages on a regular basis
 (I told them to do), and thus my regular doveadm purge -A runs
 will reduce that amount of wasted disk space to a minimum.
 
 Are you sure about that? There was a thread a while back (I
 recently posted a response to it) about this, and it sounded like
 the mdbox files would *never* be 'compacted' (reduced in size from
 deleted messages)... my reply was on 8/23, thread titled Dovecot
 never release preallocated space in mdbox'...
 
 Ooops, sorry, it was about *automatically* compacting them... I think...
 

And Timo seemed to reply that hole punching was something doveadm
purge could conceivably do, but doesn't do at the moment. Timo, could
you please clarify a bit here?

Do non-preallocated (mdbox_preallocate_space=no) m-files get hole-punched
(or their space re-used for new messages) after running doveadm purge? Or
can we end up with a huge $mdbox_rotate_size m-file with only a single
small message remaining after all the other messages have been purged?
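One crude way to see whether m-files actually contain holes is to compare their apparent size with the blocks allocated on disk (a sketch assuming GNU stat; `show_alloc` is a made-up helper name):

```shell
# Print apparent size vs. bytes actually allocated; a hole-punched or
# sparse file shows allocated well below apparent.
show_alloc() {
    for f in "$@"; do
        printf '%s apparent=%s allocated=%s\n' \
            "$f" "$(stat -c %s "$f")" "$(( $(stat -c %b "$f") * 512 ))"
    done
}

# Usage (path is an example):
# show_alloc ~/mdbox/storage/m.*
```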


  -jf


Re: [Dovecot] mdbox - healthy rotation size vs default

2013-08-25 Thread Jan-Frode Myklebust
On Sat, Aug 24, 2013 at 10:47:56AM +0200, Michael Grimm wrote:
 
 I am running mdbox_rotate_size = 100m for approx. a year now on
 a small server (a handful of users, only). All mailboxes are around
 1G each with a lot of attachments. I never had an issue so far.

How much space are your mdboxes using, compared to the quota usage?
I.e., how much space is wasted on deleted messages?

(Not sure this will be easy to measure, because of compression..)


  -jf


Re: [Dovecot] Dovecot tuning for GFS2

2013-08-25 Thread Jan-Frode Myklebust
On Thu, Aug 22, 2013 at 08:57:40PM -0500, Stan Hoeppner wrote:
 
 
 130m to 18m is 'only' a 7 fold decrease.  18m inodes is still rather
 large for any filesystem, cluster or local.  A check on an 18m inode XFS
 filesystem, even on fast storage, would take quite some time.  I'm sure
 it would take quite a bit longer to check a GFS2 with 18m inodes.  


We use GPFS, not GFS2. Luckily we've never needed to run fsck on it, but
it has support for online fsck, so hopefully it would be bearable (but
please, let's not talk about such things; knock on wood).

 Any reason you didn't go a little larger with your mdbox rotation
 size?

Just that we didn't see any clear recommendation/documentation for why
one would want to switch from the default 2MB. 2 MB should already be
packing 50-100 messages/file, so why are we only seeing a 7x decrease in
the number of files? Hmm, I see the m-files aren't really utilizing the
full 2 MB. Looking at my own mdbox storage I see 59 m-files using a total
of 34MB (avg. 576KB/file), with sizes ranging from ~100 KB to 2 MB.
Checking our quarantine mailbox I see 3045 files using 2.6GB
(avg. 850KB/file).

Guess I should look into changing to a larger rotation size.

BTW, what happens if I change mdbox_rotate_size from 2MB to 10MB?
Will all the existing 2MB m-files grow to 10MB, or will just the new
m-files use the new size? Can I get dovecot to migrate out of the 2MB
files and reorganize into 10MB files?


   -jf


Re: [Dovecot] Dovecot tuning for GFS2

2013-08-21 Thread Jan-Frode Myklebust
On Wed, Aug 21, 2013 at 02:18:52PM +0200, Andrea gabellini - SC wrote:
 
 So you are using the same config I'm testing. I forgot to write that I
 use maildir.

I would strongly suggest using mdbox instead. AFAIK cluster filesystems
aren't very good at handling many small files. It's a worst-case random
I/O usage pattern, with a high rate of metadata operations on top.

We use IBM GPFS for the cluster filesystem, and have finally completed
the conversion of a 130+ million inode maildir filesystem into an 18
million inode mdbox filesystem. I have no hard performance data showing
the difference between maildir and mdbox, but at a minimum mdbox is much
easier to manage. Backing up 130+ million files is painful.. and it also
feels nice to be able to schedule batches of mailbox purges for
off-hours, instead of doing them at peak hours.

As for your settings, we use:

mmap_disable = yes  # GPFS also supports cluster-wide mmap, but for 
some reason we've disabled it in dovecot..
mail_fsync = optimized
mail_nfs_storage = no
mail_nfs_index = no
lock_method = fcntl

and of course Dovecot Director in front of them..



  -jf


Re: [Dovecot] The docs are a bit weird on Directory hashing

2013-08-11 Thread Jan-Frode Myklebust
On Fri, Aug 09, 2013 at 12:02:34AM +0300, Eliezer Croitoru wrote:
  
  I use:
  
  mail_home = /srv/mailstore/%256LRHu/%Ld/%Ln
 R what for??
 I do understand a Lower case on the names and have seen the effect but
 how would R be helpful??
 

According to http://wiki2.dovecot.org/Variables

  %H hash function is a bit bad if all the strings end with the same
   text, so if you're hashing usernames being in user@domain form, you
   probably want to reverse the username to get better hash value variety,
   e.g. %3RHu. 



  -jf


Re: [Dovecot] The docs are a bit weird on Directory hashing

2013-08-08 Thread Jan-Frode Myklebust
On Thu, Aug 08, 2013 at 01:42:43AM +0300, Eliezer Croitoru wrote:
 
 And means a two layers cache of max 16 directories on the first layer
 and 256 directories on the second layer.
 The above allows millions of files storage and can benefit from all ext4
 lower kernel levels of compatibly rather then do stuff on the user-land..
 Since I am not 100% sure that the scheme I understood is indeed what I
 think I assume the above will need a small correction.

I use:

mail_home = /srv/mailstore/%256LRHu/%Ld/%Ln

which gives me 256 buckets containing domainname/username/, where the
buckets are a hash of the lowercased, reversed usernames. To get the same
layout as squid, I would try:

mail_home = /srv/mailstore/%16LRHu/%256LRHu/%Lu

Ref: http://wiki2.dovecot.org/Variables for variables and modifiers.

BTW: I'm lowercasing everything because I was once bitten by a variable
not being lowercased in one version, and this suddenly changing in
another version. It's probably redundant here -- but it was painful to
fix when it happened..


  -jf


Re: [Dovecot] convert to mdbox

2013-07-29 Thread Jan-Frode Myklebust
On Tue, Jul 23, 2013 at 10:08:57AM +0300, Birta Levente wrote:
 
 How can I convert all virtual mailboxes from maildir to mdbox?
 Manually, one by one, working, but I have a lot ...

I've converted around 400,000-500,000 users from maildir to mdbox by
doing the following on a server configured to use mdbox as the default:

  1 - Search for all users with a mailMessageStore attribute in LDAP.
  2 - Convert the user to mdbox, running this twice and checking the
      return code each time:
      dsync -v -u $username mirror maildir:$maildir
  3 - Delete the mailMessageStore attribute from LDAP and add
      mailLocation: mdbox:~/mdbox.
  4 - pkill -HUP -u dovecot -f dovecot/auth -- to make sure the auth
      cache is updated.
  5 - doveadm kick $username -- on all servers, in case the user was
      logged in.
  6 - Do a final sync: dsync -v -u $username mirror maildir:$maildir
  7 - Delete the maildir.
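The steps above can be sketched as a wrapper (illustrative only: `convert_user` and `DRYRUN` are made-up names, the LDAP step is site-specific and left as a comment, and setting DRYRUN=echo just previews the commands):

```shell
DRYRUN=${DRYRUN:-}   # set DRYRUN=echo for a dry run

convert_user() {
    user=$1 maildir=$2
    # Step 2: two mirror passes, aborting if either fails.
    $DRYRUN dsync -v -u "$user" mirror "maildir:$maildir" || return 1
    $DRYRUN dsync -v -u "$user" mirror "maildir:$maildir" || return 1
    # Step 3 (site-specific): delete mailMessageStore in LDAP,
    # add mailLocation: mdbox:~/mdbox
    $DRYRUN pkill -HUP -u dovecot -f dovecot/auth   # step 4: flush auth cache
    $DRYRUN doveadm kick "$user"                    # step 5: on all servers
    $DRYRUN dsync -v -u "$user" mirror "maildir:$maildir" || return 1  # step 6
    $DRYRUN rm -rf "$maildir"                       # step 7
}
```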


Only 26554 users left to convert..



  -jf


Re: [Dovecot] Dovecot never release preallocated space in mdbox

2013-07-29 Thread Jan-Frode Myklebust
On Mon, Jul 29, 2013 at 11:48:00AM +0200, Stéphane BERTHELOT wrote:
 
 mdbox_rotate_size = 128M
 mdbox_rotate_interval = 1d
 mdbox_preallocate_space = yes


 On mailboxes patterns with low incoming mail ( 100kb / day) this
 would waste much space. Of course I can decrease rotate size a lot
 but it would then produce a lot of files and would certainly become
 similar performance-wise to sdbox/maildir/...

128MB is quite a large rotate size if you care about disk space.. We use
the default 2 MB, which still packs quite a lot of messages per file
compared to maildir. Individual maildir files seem to be around 5-30KB
(compressed), which should amount to 50-400 messages per m-file. I don't
think that should be similar to maildir/sdbox performance-wise.


   -jf


Re: [Dovecot] login_trusted_networks from webmail ?

2013-07-12 Thread Jan-Frode Myklebust
On Thu, Jul 04, 2013 at 08:51:47PM +0200, Benny Pedersen wrote:
 Timo Sirainen skrev den 2013-07-03 22:34:
 
 If backend has login_trusted_networks pointing to directors, then the
 IP gets forwarded to backends as well.
 
 how does imap get ip from http ?

The webmail server puts the client's REMOTE_ADDR into the IMAP ID
command when initiating the IMAP connection:

a ID ("x-originating-ip" "$REMOTE_ADDR")


  -jf


[Dovecot] login_trusted_networks from webmail ?

2013-07-03 Thread Jan-Frode Myklebust
I'd like to get the IP address of the webmail client logged in my
maillog (to comply with upcoming data retention policies). I've
noticed that with login_trusted_networks pointing at my dovecot
directors, we get rip=client-ip logged on the backends. How is the proxy
providing this to the dovecot backends? Does anybody know what magic we
need to implement in our webmail solution to forward the webmail client
IP and have it logged as rip= in dovecot?

I believe it will be enough to have it logged as rip= on the directors;
it's maybe not needed all the way to the backends (but that would be
nice as well).



   -jf


Re: [Dovecot] login_trusted_networks from webmail ?

2013-07-03 Thread Jan-Frode Myklebust
On Wed, Jul 03, 2013 at 11:34:56PM +0300, Timo Sirainen wrote:
 
 a ID ("x-originating-ip" "1.2.3.4")

Perfect, thanks! Feature request for SOGo filed:

http://www.sogo.nu/bugs/view.php?id=2366


  -jf


Re: [Dovecot] Dovecot + SELinux permission problems

2013-06-23 Thread Jan-Frode Myklebust
On Sun, Jun 23, 2013 at 04:21:17PM +0100, Johnny wrote:
 
 I had thought SELinux would log something, but /var/log/audit/audit.log
 is blank...

Are you running auditd? I believe that if you're not running auditd, the
denials are logged to the kernel ring buffer instead. Does dmesg show
any denials?

Likely dovecot doesn't have access to user_home_dir_t/user_home_t. Are
all users' maildirs below /home/user/data1/Maildir/? If so, you can
probably fix this by creating a labeling rule and re-labeling everything
below that directory:

semanage fcontext -a -t mail_spool_t '/home/user/data1/Maildir(/.*)?'
restorecon -R /home/user/data1/Maildir


  -jf


[Dovecot] valid folder name

2013-06-08 Thread Jan-Frode Myklebust
We have a quite large user base with lots of bad folder names, because
the mail folders were earlier accessible outside of dovecot. Now we're
running dsync conversions from maildir to mdbox for all users, but are
struggling a bit with dsync not liking invalid folder names.

Before we convert a user we try to determine whether the folder names
are valid, but we don't have a very good regexp for validating them.
Maybe someone else knows a way to verify (and fix?) folder names that
are invalid.

The rules I know are:

 - the name doesn't start with '.' or '~' (after the initial '.')
 - the name doesn't end with '.'
 - the name doesn't contain '..'
 - the name is valid mUTF-7

So, any regexp gurus that can distill those rules down to something
usable?
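The listed rules can be approximated in shell (a sketch, applied to the folder name with maildir's leading '.' already stripped; `valid_folder` is a made-up name, and true mUTF-7 validity is only approximated by rejecting non-printable characters and bare '&'):

```shell
valid_folder() {
    case $1 in
        ''|.*|~*|*.|*..*) return 1 ;;   # empty, leading . or ~, trailing ., or '..'
    esac
    # Printable ASCII, with '&' allowed only as an mUTF-7 shift '&...-'
    printf '%s\n' "$1" | LC_ALL=C grep -Eq "^([ -%'-~]|&[A-Za-z0-9+,]*-)+\$"
}

# e.g.: valid_folder 'ta vare p&AOU-'   # "ta vare på" in mUTF-7
```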



  -jf

-- 
Some people, when confronted with a problem, think "I know, I'll use
regular expressions." Now they have two problems. -jwz


Re: [Dovecot] Administrative mailbox deletions

2013-06-04 Thread Jan-Frode Myklebust
On Tue, Jun 04, 2013 at 11:58:50AM +0100, Alan Brown wrote:
 
 
 One user has named all his mailboxes with leading hyphens.
 
 ie:
 
 -foo
 -bar
 -bazz

 Does anyone have the magic sauce needed to escape the - character?

No idea what you've tried, but maybe '--' is enough?

doveadm mailbox delete -u $user -- -foo


  -jf


[Dovecot] Bad exit status from dsync

2013-06-03 Thread Jan-Frode Myklebust

I just tried to migrate one of my users from maildir to mdbox using
dsync. My conversion script is checking the dsync exit code to know if
the conversion goes fine or not, and surprisingly dsync returned 0 at
the same time as it gave the error:

Error: Failed to sync mailbox .ta\ vare\ på ... 
(sorry, lost the rest of the error message)

Changing the folder name to mUTF-7 manually made it work, but I didn't
like that dsync returned success when it got this error. That breaks the
failsafe logic in my conversion script.

Dovecot version dovecot-ee-2.1.16.3-1, x86_64, RHEL5.
Dsync command used:

dsync -v -u usern...@example.net mirror 
maildir:/usr/local/atmail/users/u/s/usern...@example.net

With these dovecot.conf settings:

mail_home = /srv/mailstore/%256LRHu/%Ld/%Ln
mail_location = mdbox:~/mdbox


  -jf


Re: [Dovecot] Please clarify one point for me on director userdb (Was: Configuration advice needed.)

2013-06-03 Thread Jan-Frode Myklebust
On Mon, Jun 03, 2013 at 03:47:08PM +0200, Olivier Girard wrote:
 I'm trying to finish my dovecot setup but things are unclear for me.
 
 I want director proxying mapping to same server for LMTP and POP/IMAP
 connections. My authdb is LDAP; LMTP users are queried by mail
 address (ldap mail attribute) while IMAP/POP users are identified
 with uid (ldap uid attribute), which is completely different.
 
 So i end up defining my ldap querys mapping ldap mail attribute to user
 in *_attrs (best choice for future use than uid for our setup) with this
 configuration in dovecot-ldap.conf.ext:
 
 uris = ldap://ldap.uang
 dn = cn=acces-smtp, ou=access, dc=univ-angers, dc=fr
 dnpass = *
 base = ou=people, dc=univ-angers, dc=fr
 user_attrs = mail=user,homeDirectory=home
 user_filter = (&(|(uid=%u)(mail=%u)(auaAliasEmail=%u))(|(auaStatut=etu)(auaStatut=etu-sortant)(auaStatut=perso)(auaStatut=perso-sortant)))
 pass_attrs = mail=user,userPassword=password
 pass_filter = (&(|(uid=%u)(mail=%u)(auaAliasEmail=%u))(|(auaStatut=etu)(auaStatut=etu-sortant)(auaStatut=perso)(auaStatut=perso-sortant)))
 iterate_attrs = mail=user
 iterate_filter = 
 (|(auaStatut=etu)(auaStatut=etu-sortant)(auaStatut=perso)(auaStatut=perso-sortant))
 default_pass_scheme = MD5-CRYPT
 
 Is it the correct method, or do i miss something?
 

It's a bit hard to tell what's unclear to you. This all looks perfectly
fine to me. I run a similar configuration, except:

- I don't have any ldap config on the directors, just a static
  passdb:

passdb {
args = proxy=y nopassword=y
driver = static
}

- I use auth binds, instead of having dovecot do the
  authentication. IMHO that's better, since then there's no
  easy way to extract all the hashes from the dovecot side.

auth_bind = yes
auth_bind_userdn = uid=%n,ou=people,o=%d,o=ISP,o=example,c=NO

- I haven't configured any
  iterate_attrs/iterate_filter/pass_attrs or
  default_pass_scheme. We have too many users to ever want to
  iterate over them all :-)


  -jf


Re: [Dovecot] dovecot stats

2013-05-08 Thread Jan-Frode Myklebust
On Mon, May 6, 2013 at 4:37 PM, Timo Sirainen t...@iki.fi wrote:

 On 29.4.2013, at 15.30, Jan-Frode Myklebust janfr...@tanso.net wrote:


  Is it possible to collect info about POP3 and LMTP commands also ?

 No. I think they would be pretty boring statistics, since with POP3 pretty
 much everything causing disk I/O or CPU usage would be RETRs and with LMTP
 everything would be DATAs.


I think knowing the timings of writing messages to disk / reading from
disk would be very interesting and relevant data, especially for us with
mostly POP3 clients, where LMTP DATAs and POP3 RETRs probably account
for a major part of the server load.


  Also, is doveadm stats dump command telling me the results of all
  commands that has finished the last stats_command_min_time, or will it
  maybe contain much more than 1 minute of activity ?

 It can contain much more. The stats process will keep as much data in
 memory as possible until it reaches the stats_memory_limit. The doveadm
 stats dump lists everything that the stats process knows.



Ok, then I guess we'll need to limit our stats dumps based on last_seen.


  -jf


Re: [Dovecot] dovecot stats

2013-05-08 Thread Jan-Frode Myklebust
Thanks, nice graphs. I've attached a graph of LMTP delays per minute as
seen from the postfix side on one of our servers. This includes delays
caused both by delivery to dovecot LMTP and by LMTP communication
internally on the mailservers between postfix and amavis. Unfortunately
it says nothing about the delivery time to each individual dovecot
backend, since these are hiding behind dovecot director, and therefore
we have no way of knowing which of our backends are slow (if any).



  -jf




attachment: lmtp-delays.png

Re: [Dovecot] Mail deduplication

2013-04-30 Thread Jan-Frode Myklebust
Wasn't there also some issue with cleanup of attachments ? Not being able
to delete the last copy, or something. I did some testing of using SIS on a
backup dsync destination a year (or two) ago, and got quite confused..
Don't quite remember the problems I had, but I did lose confidence in it
and decided having the attachement together with the messages felt safest.

I would also love to hear from admins using it on large scale (100K+ active
users). Maybe we should reconsider using it..



  -jf


On Tue, Apr 30, 2013 at 9:04 AM, Arnaud Abélard 
arnaud.abel...@univ-nantes.fr wrote:

 On 04/30/2013 08:05 AM, Angel L. Mateo wrote:

  On 30/04/13 03:28, Tim Groeneveld wrote:


 Hi Guys,

 I am wondering about mail deduplication. I am looking into the
 possibility of separating out all the message bodies with multiple
 parts inside mail that is received from `dovecot` and hashing them all.

 The idea is that by hashing all of the parts inside the email, I will be
 able to ensure that each part of the email will only be saved once.

 This means that attachments & common parts of the body will only be
 saved once inside the storage.

 How achievable would this be with the current state of dovecot? Would it
 even be worth doing?

   I asked the same question recently. As Timo responded at
 http://kevat.dovecot.org/list/dovecot/2013-March/089072.html it seems
 that this feature is production stable in recent versions of dovecot.

  And I think it is worth it. My estimate (based on just about 10 users
 of my organization, so not accurate) is that you can save more than
 30% of total mail storage.

  To configure it you need to use the options:

 * mail_attachment_dir
 * mail_attachment_min_size
 * mail_attachment_fs
 * mail_attachment_hash

  Hello,

 Is it just working, or is it working in an optimal way? Back in October
 2011 we noticed that the deduplication wasn't working as well as we
 were expecting, as some files weren't properly deduplicated
 (http://markmail.org/message/ymfdwng7un2mj26z).
 Timo, did you ever hit that bug, and did you get it fixed if there was
 anything to fix on your side?

 Since we are very interested in this feature I am very eager to hear
 from admins using it on a similar scale (around 80,000 mailboxes).

 Thanks,

 Arnaud





 --
 Arnaud Abélard (jabber: arnaud.abel...@univ-nantes.fr)
 Administrateur Système - Responsable Services Web
 Direction des Systèmes d'Informations
 Université de Nantes
 -
 ne pas utiliser: trapem...@univ-nantes.fr



[Dovecot] dovecot stats

2013-04-29 Thread Jan-Frode Myklebust
I just upgraded one of our servers to dovecot v2.1.16 (ee), and am
looking into the stats feature. Am I interpreting the wiki correctly in
reading that the doveadm stats dump command only returns statistics
about IMAP commands?

Is it possible to collect info about POP3 and LMTP commands also ?

Also, is doveadm stats dump command telling me the results of all
commands that has finished the last stats_command_min_time, or will it
maybe contain much more than 1 minute of activity ?



  -jf


Re: [Dovecot] Migration from v1 to v2 with hashed directory structure

2013-03-05 Thread Jan-Frode Myklebust
On Thu, Feb 28, 2013 at 02:59:52PM +0100, Pavel Dimow wrote:
 
  We have /var/spool/vmail/mydomain.com/u...@mydomain.com and I want
 the new server with version 2 to have a hashed directory structure like
 /var/spool/vmail/mydomain.com/u/s/user.
 I was wondering if there is some better solution than dir hashing, or
 a way to hash a dir on something other than the first two letters.

We use:

mail_home = /srv/mailstore/%256LRHu/%Ld/%Ln

giving us 256 buckets based on Lowercase, Reversed Hash of username.
Ref: http://wiki2.dovecot.org/Variables. 

 Also any suggestion how to perform this migration from old to new
 server with hashing on the fly?

Symlinks from old to new.. 



   -jf


Re: [Dovecot] lmtp-proxying in 2.1 slower than in 2.0.14 ?

2013-02-05 Thread Jan-Frode Myklebust
I think there must be some bug I'm hitting here. One of my directors
is still running with client_limit = 1, process_limit = 100 for the
lmtp service, and now it's logging:

   master: Warning: service(lmtp): process_limit (100) reached, client
connections are being dropped

Checking sudo netstat -anp|grep :24  I see 287 ports in TIME_WAIT,
one in CLOSE_WAIT and the listening 0.0.0.0:24. No active
connections. There are 100 lmtp-processes running. When trying to
connect to the lmtp-port I immediately get dropped:

$ telnet localhost 24
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
Connection closed by foreign host.

Is there maybe some counter that's getting out of sync, or some
back-off penalty algorithm that kicks in when it first hits the process
limit?


  -jf


Re: [Dovecot] lmtp-proxying in 2.1 slower than in 2.0.14 ?

2013-02-02 Thread Jan-Frode Myklebust
On Fri, Feb 1, 2013 at 11:00 PM, Timo Sirainen t...@iki.fi wrote:
 On 1.2.2013, at 19.00, Jan-Frode Myklebust janfr...@tanso.net wrote:

 Have you checked if there's an increase in disk I/O usage, or system cpu 
 usage?


On the directors, cpu usage, and load averages seems to have gone down
by about 50% since the upgrade. On the backend mailstores running
2.0.14 I see no effect (but these are quite busy, so less LMTP might
just have lead to better response on other services).

 Or actually .. It could simply be that in v2.0.15 service lmtp { client_limit 
 } default was changed to 1 (from default_client_limit=1000). This is 
 important with the backend, because writing to message store can be slow, but 
 proxying should be able to handle more than 1 client per process, even with 
 the new temporary file writing. So you could see if it helps to set lmtp { 
 client_limit = 100 } or something.


My backend lmtp services are configured with client_limit = 1,
process_limit = 25, and there are 6 backends I.e. max 150 backend LMTP
processes if all lmtp is spread evenly between the backends, which it
won't be, since backends are weighted differently (2x50, 2x75 and
2x100).

I assume each director will max proxy process_limit*client_limit to my
backends. Will it be OK to have a much higher
process_limit*client_limit on the directors than on the backends? It
will not be a problem if directors are configured to seemingly handle
a lot more simultaneous connections than the backends?
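The upper bound being asked about is just the product of the two limits; plugging in the values quoted in this thread:

```shell
# Max simultaneous LMTP proxy connections one director can hold:
# process_limit * client_limit (director config from this thread)
director_procs=100; director_clients=1
echo $(( director_procs * director_clients ))   # 100 per director

# Backend capacity: 6 servers, each process_limit=25, client_limit=1
echo $(( 6 * 25 * 1 ))                          # 150 in total
```

So two directors at 100 each can in principle offer more proxied connections (200) than the backends will accept (150), which is why raising the director limits alone doesn't help.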


  -jf


[Dovecot] lmtp-proxying in 2.1 slower than in 2.0.14 ?

2013-02-01 Thread Jan-Frode Myklebust
We upgraded our two dovecot directors from v2.0.14 to dovecot-ee
2.1.10.3 this week, and after that mail seems to be flowing a lot
slower than before. The backend mailstores are untouched, on v2.0.14
still. After the upgrade we've been hitting process_limit for lmtp a
lot, and we're struggeling with large queues in the incoming
mailservers that are using LMTP virtual transport towards our two
directors.

I seem to remember that 2.1 has new lmtp-proxying code. Is there
anything in it that maybe needs to be tuned differently from
v2.0? I'm a bit skeptical of just increasing the process_limit for
LMTP proxying, as I doubt running many hundreds of simultaneous
deliveries would work that much better against the backend storage..


## doveconf -n ##
# 2.1.10.3: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-194.32.1.el5 x86_64 Red Hat Enterprise Linux Server
release 5.5 (Tikanga)
default_client_limit = 4000
director_mail_servers = 192.168.42.7 192.168.42.8 192.168.42.9
192.168.42.10 192.168.42.28 192.168.42.29
director_servers = 192.168.42.15 192.168.42.17
disable_plaintext_auth = no
listen = *
lmtp_proxy = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date ihave
passdb {
  args = proxy=y nopassword=y
  driver = static
}
protocols = imap pop3 lmtp sieve
service anvil {
  client_limit = 6247
}
service auth {
  client_limit = 8292
  unix_listener auth-userdb {
user = dovecot
  }
}
service director {
  fifo_listener login/proxy-notify {
mode = 0666
  }
  inet_listener {
port = 5515
  }
  unix_listener director-userdb {
mode = 0600
  }
  unix_listener login/director {
mode = 0666
  }
}
service imap-login {
  executable = imap-login director
  process_limit = 4096
  process_min_avail = 4
  service_count = 0
  vsz_limit = 256 M
}
service lmtp {
  inet_listener lmtp {
address = *
port = 24
  }
  process_limit = 100
}
service managesieve-login {
  executable = managesieve-login director
  inet_listener sieve {
address = *
port = 4190
  }
  process_limit = 50
}
service pop3-login {
  executable = pop3-login director
  process_limit = 2048
  process_min_avail = 4
  service_count = 0
  vsz_limit = 256 M
}
ssl_cert = /etc/pki/tls/certs/pop.example.com.cert.ca-bundle
ssl_key = /etc/pki/tls/private/pop.example.com.key
protocol lmtp {
  auth_socket_path = director-userdb
}




  -jf


Re: [Dovecot] Mixing v2.1 and 2.0 directors ?

2012-12-28 Thread Jan-Frode Myklebust
On Fri, Dec 28, 2012 at 2:02 AM, Timo Sirainen t...@iki.fi wrote:


 The new v2.1 director code can handle running with old v2.1 directors (there 
 were some protocol changes that improve things). I think v2.0 director is 
 protocol compatible with the old v2.1 directors, so I think in theory it 
 should work.. But it's definitely not ever been tested in practise, and v2.1 
 did fix a ton of director bugs. So if you end up testing it, I think you 
 should be ready to quicky upgrade the other director as well if any errors 
 show up in logs.


Ok, I don't think I want to test this -- realistic testing is too
hard. I'd rather upgrade the old directors (keeping the same
ip-addresses), so that I can quickly roll back in case something
doesn't work as well as expected.

BTW: What's the status of LMTP proxying in v2.1 (or more specifically
dovecot-ee-2.1.10.3-1)? Do you know of many users of it, and has it
proven itself much better than v2.0.14 ?  I intend to upgrade the
directors first, and leave the backend servers running v2.0.14 for a
while.. that should be OK, right ?


   -jf


[Dovecot] Mixing v2.1 and 2.0 directors ?

2012-12-27 Thread Jan-Frode Myklebust
I'm preparing to set up a new set of directors on
dovecot-ee-2.1.10.3-1, but would prefer to do this a bit gradually.
Will it be OK to set up a ring of directors with 2x
dovecot-ee-2.1.10.3-1 and 2x dovecot-2.0.14 ?



  -jf


Re: [Dovecot] Commercial features in Dovecot future: Object storage, archive

2012-11-13 Thread Jan-Frode Myklebust
On Mon, Nov 12, 2012 at 12:33 PM, Timo Sirainen t...@iki.fi wrote:
 Hi all,

 Dovecot Oy’s web pages at www.dovecot.fi have been updated. The products page 
 lists two features that will be available for commercial licensing, extending 
 the functionality of the basic open-source version of Dovecot.

 * Storing emails to (high-latency) object storage, initially supporting 
 Amazon S3, Caringo CAStor and Scality.

 * Email archive storage.

 See http://www.dovecot.fi/products/index.html for details.

404 file not found, but it was not too difficult to guess where you meant.

I'm not too interested in the extended functionality, but the extra
tested, bugfix-only/mainly Enterprise Release sounds very interesting.
That page isn't quite clear on whether the enterprise release is meant to
be free or not (Some features may require license fees). Could you
please clarify? Is it available already?

We're starting to be long overdue for an overhaul of our installation
(currently on v2.0.14 + some fixes), so we need to do something
soon...


  -jf


Re: [Dovecot] LMTP benefit vs LDA

2012-11-03 Thread Jan-Frode Myklebust
On Sat, Nov 3, 2012 at 10:45 AM, Davide davide.mar...@mail.cgilfe.it wrote:
 Hi to all,
 my question is: what is the benefit of implementing the LMTP service
 replacing LDA? I have dovecot 2.1.8 with vpopmail+qmail and about 500
 users. Now I'm using LDA and I'm interested in the LMTP service.
 Thanks in advance


For us it has the benefit that we don't need to run any SMTP servers
on the backend dovecot servers, and we can have our frontend postfix
servers deliver incoming messages through the dovecot director so that
the users are sticky to their servers. For a single server running
everything, I don't know if there's any point.


  -jf


Re: [Dovecot] POLL: v2.2 to allow one mail over quota?

2012-10-29 Thread Jan-Frode Myklebust


+1

Better to be lenient than to confuse users by accepting some messages
but not others.

I believe most larger mail providers have a max message size of around 64MB or
less, so allowing the final message to exceed quota by about that sounds
reasonable to me.

   -jf

Re: [Dovecot] trash plugin not doing its job

2012-10-21 Thread Jan-Frode Myklebust
On Sat, Oct 20, 2012 at 3:51 PM, Daniel Parthey
daniel.part...@informatik.tu-chemnitz.de wrote:
 Jan-Frode Myklebust wrote:
 $ cat /etc/dovecot/dovecot-trash.conf.ext
 # Spam mailbox is emptied before Trash
 1 INBOX.Spam
 # Trash mailbox is emptied before Sent
 2 INBOX.Trash

 Are you sure the Trash Folder of the affected users is located below INBOX?
 doveadm mailbox list -u user@domain | grep -iE trash|spam

$ sudo doveadm mailbox list -u xx...@example.no
INBOX
INBOX.Drafts
INBOX.Sent
INBOX.Spam
INBOX.Trash


 Example at http://wiki2.dovecot.org/Plugins/Trash omits INBOX.
 Have you tried INBOX/Trash as mailbox name?

No, should I, when my prefix is INBOX. and separator is . ?

namespace {
  hidden = no
  inbox = yes
  list = yes
  location =
  prefix = INBOX.
  separator = .
  subscriptions = yes
  type = private
}


BTW: I think it's mostly working, as the number of quota-exceeded
messages has clearly dropped since implementing it, but I do find a
few users that get quota exceeded and have lots of messages in
INBOX.Trash and INBOX.Spam..



  -jf


[Dovecot] trash plugin not doing its job

2012-10-18 Thread Jan-Frode Myklebust
I enabled the trash plugin yesterday, adding trash to mail_plugins,
and configuring the plugin setting trash =
/etc/dovecot/dovecot-trash.conf.ext.


But I still see users with lots of files in INBOX.Trash getting
bounced because of quota exceeded:


postfix/lmtp[26273]::  C89F490061: to=xx...@example.no,
relay=loadbalancers.example.net[192.168.42.15]:24, delay=1.2,
delays=0.61/0.02/0/0.54, dsn=5.2.2, status=bounced (host
loadbalancers.example.net[192.168.42.15] said: 552 5.2.2
xx...@example.no Quota exceeded (mailbox for user is full)
(in reply to end of DATA command))

dovecot::  lmtp(19730, ...@example.no): Error:
BErxFCyrf1ASTQAAWNPRnw: sieve:
msgid=e33d481dc9d9442fa79f55e45a516c82@BizWizard: failed to store
into mailbox 'INBOX': Quota exceeded (mailbox for user is full)


$ sudo doveadm quota get -u x...@example.no
Quota name Type    Value    Limit    %
UserQuota  STORAGE 1048559  1048576  99
UserQuota  MESSAGE 4487     -        0


Postfix if delivering via LMTP trough dovecot director.


Anybody see anything obvious in my config:


# 2.0.14: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-194.26.1.el5 x86_64 Red Hat Enterprise Linux Server
release 5.5 (Tikanga)
auth_cache_size = 100 M
auth_verbose = yes
auth_verbose_passwords = sha1
disable_plaintext_auth = no
login_trusted_networks = 192.168.0.0/16 109.247.114.192/27
mail_gid = 3000
mail_home = /srv/mailstore/%256LRHu/%Ld/%Ln
mail_location = maildir:~/:INDEX=/indexes/%1u/%1.1u/%u
mail_max_userip_connections = 20
mail_plugins = quota zlib trash
mail_uid = 3000
maildir_stat_dirs = yes
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date
mmap_disable = yes
namespace {
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  quota = dict:UserQuota::file:%h/dovecot-quota
  sieve = /sieve/%1Lu/%1.1Lu/%Lu/.dovecot.sieve
  sieve_before = /etc/dovecot/sieve/dovecot.sieve
  sieve_dir = /sieve/%1Lu/%1.1Lu/%Lu
  sieve_max_script_size = 1M
  trash = /etc/dovecot/dovecot-trash.conf.ext
  zlib_save = gz
  zlib_save_level = 6
}
postmaster_address = postmas...@example.net
protocols = imap pop3 lmtp sieve
service auth-worker {
  user = $default_internal_user
}
service auth {
  client_limit = 4521
  unix_listener auth-userdb {
group =
mode = 0600
user = atmail
  }
}
service imap-login {
  inet_listener imap {
address = *
port = 143
  }
  process_min_avail = 4
  service_count = 0
  vsz_limit = 1 G
}
service imap-postlogin {
  executable = script-login /usr/local/sbin/imap-postlogin.sh
}
service imap {
  executable = imap imap-postlogin
  process_limit = 2048
}
service lmtp {
  client_limit = 1
  inet_listener lmtp {
address = *
port = 24
  }
  process_limit = 25
  process_min_avail = 10
}
service managesieve-login {
  inet_listener sieve {
address = *
port = 4190
  }
  service_count = 1
}
service pop3-login {
  inet_listener pop3 {
address = *
port = 110
  }
  process_min_avail = 4
  service_count = 0
  vsz_limit = 1 G
}
service pop3-postlogin {
  executable = script-login /usr/local/sbin/pop3-postlogin.sh
}
service pop3 {
  executable = pop3 pop3-postlogin
  process_limit = 2048
}
ssl = no
userdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
protocol lmtp {
  mail_plugins = quota zlib trash sieve
}
protocol imap {
  imap_client_workarounds = delay-newmail
  mail_plugins = quota zlib trash imap_quota
}
protocol pop3 {
  mail_plugins = quota zlib trash
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_uidl_format = UID%u-%v
}
protocol sieve {
  managesieve_logout_format = bytes=%i/%o
}


and my trash config:

$ cat /etc/dovecot/dovecot-trash.conf.ext
# Spam mailbox is emptied before Trash
1 INBOX.Spam
# Trash mailbox is emptied before Sent
2 INBOX.Trash

Global sieve script:

$ cat /etc/dovecot/sieve/dovecot.sieve

require ["comparator-i;ascii-numeric", "relational", "fileinto", "mailbox"];
if allof (
    not header :matches "x-spam-score" "-*",
    header :value "ge" :comparator "i;ascii-numeric" "x-spam-score" "10" )
{
    discard;
    stop;
}
elsif allof (
    not header :matches "x-spam-score" "-*",
    header :value "ge" :comparator "i;ascii-numeric" "x-spam-score" "6" )
{
    fileinto :create "INBOX.Spam";
}


  -jf


[Dovecot] trash plugin together with sieve_before ?

2012-09-18 Thread Jan-Frode Myklebust
We have a sieve script run via sieve_before to sort spam into
spam folders. Now I'm trying to configure the trash plugin, but it
doesn't seem to work.. I noticed my config file says:

  # Space separated list of plugins to load (none known to be useful
so far). Do NOT
  # try to load IMAP plugins here.
  #mail_plugins =

and that doveconf doesn't list any plugins loaded for protocol
sieve. Should we load quota and trash here ?


  -jf


Re: [Dovecot] Dovecot performance under high load (vs. Courier)

2012-06-26 Thread Jan-Frode Myklebust
On Thu, Jun 21, 2012 at 11:44:33PM +0300, Timo Sirainen wrote:
  
  additionally you should install imapproxy on the webserver
  wehre your webmail is running and configure the webmail for
  using 127.0.0.1 - so only one connection per user is
  persistent instead make a new one for each ajax-request
 
 Someone benchmarked Dovecot a while ago in this list with and without 
 imapproxy and the results showed that imapproxy simply slowed things down by 
 adding extra latency. This probably isn't true for all installations, but I 
 don't think there's much of a difference either way.
 

That was me, there -  http://dovecot.org/list/dovecot/2012-February/063666.html



  -jf


Re: [Dovecot] Problem with lmtp director proxy

2012-06-12 Thread Jan-Frode Myklebust
On Tue, Jun 12, 2012 at 12:23:28PM +0200, Angel L. Mateo wrote:
 I have two director servers directing to 4 backend servers.

Which dovecot version are you running on your directors and backends?

We're running 2.0.14 plus the below linked patches and have not
seen this problem since applying the last one.


http://hg.dovecot.org/dovecot-2.0/raw-rev/8de8752b2e94
http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c


  -jf


Re: [Dovecot] best practises for mail systems

2012-06-06 Thread Jan-Frode Myklebust
On Tue, Jun 05, 2012 at 11:59:47PM +1100, Костырев Александр Алексеевич wrote:
 
 I'm more worried about right design of mailstorage.. should I use some 
 cluster fs with all mail of all users
 or should I split mailstorage across servers and somehow avoid long downtime 
 if one of servers goes down.

A clusterfs gives you active/active high availability and balanced
distribution of users over your servers, at the cost of somewhat
degraded I/O performance all the time. If a single node will be able
to serve your load, I think it's much more sensible to create a
passive/standby availability solution based on a local filesystem (XFS).

If you need to split your mailstorage across servers, you can do
active/standby server pairs -- but then it gets difficult to balance
your users over your servers, and you *might* want to cheat and use a
clusterfs instead..


  -jf


Re: [Dovecot] Configuration advices for a 50000 mailboxes server(s)

2012-04-21 Thread Jan-Frode Myklebust
On Thu, Apr 19, 2012 at 07:31:13PM -0500, Stan Hoeppner wrote:
 
 This issue has come up twice on the Postfix list in less than a month.

Oh, thanks! I'll look into those list posts. I had mostly given up on
solving this with rate limits and decided to throw hardware at the problem
when I saw the log entries for sender *.anpdm.com. It seems to be a newsletter
sender, which showed up as 203 different mailserver ip-addresses in our
incoming mailserver logs, from 53 different B-nets and 8 different A-nets.

Will give smtpd_client_connection_count_limit a try..
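For reference, the relevant Postfix knobs (enforced by its anvil service) look like this in main.cf; the values below are illustrative, not from the thread:

```
# main.cf sketch -- illustrative values
smtpd_client_connection_count_limit = 10  # simultaneous connections per client
smtpd_client_connection_rate_limit = 30   # new connections per client per time unit
anvil_rate_time_unit = 60s
```

Trusted relays can be exempted via mynetworks / smtpd_client_event_limit_exceptions so the limits only throttle bulk senders.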


  -jf


Re: [Dovecot] migrate 15K users to new domain name

2012-04-21 Thread Jan-Frode Myklebust
On Thu, Apr 19, 2012 at 05:59:39PM +0300, Timo Sirainen wrote:
 
 With v2.1.4 you could do something like:
 
 doveadm -c dummy.conf user -m user@domain
 
 where dummy.conf contains the minimum configuration needed:
 
 mail_home = /srv/mailstore/%256LRHu/%Ld/%Ln
 ssl = no
 

Thanks! Works perfect.


  -jf


[Dovecot] migrate 15K users to new domain name

2012-04-19 Thread Jan-Frode Myklebust
I need to migrate 15K users to a new domain name, and plan to use dsync
mirror in the transition phase. Could someone confirm that this should
work:

Before giving users access to new-domain do a first sync to get all the
stale data over:

for user in $old-domain; do
dsync mirror $user@old-domain $user@new-domain
done

Configure sieve vacation filter to forward all messages from
$user@old-domain to $user@new-domain, and notify sender of changed
address.

Give users access to both new-domain and old-domain, and do a final
sync.

for user in $old-domain; do
dsync mirror $user@old-domain $user@new-domain
dsync mirror $user@old-domain $user@new-domain  # twice in case 
the first was slow 
drop all messages for $user@old-domain
Leave notice message for $user@old-domain saying he should use 
new-domain
done

Wait a few weeks/months, and then drop all users@old-domain.


Does this look sensible? 
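The first-pass loop above, written out as a runnable dry-run sketch — `list_users` is a placeholder for however accounts are enumerated, and echo stands in for the real dsync until the plan is verified:

```shell
# Dry-run of the first-pass sync; drop the echo to run for real.
old=old.domain
new=new.domain
list_users() { printf 'alice\nbob\n'; }   # placeholder user enumeration
for user in $(list_users); do
    echo dsync mirror "$user@$old" "$user@$new"
done
```

Each iteration prints the dsync command it would run, which makes it easy to review the full command list before the actual migration.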


  -jf


Re: [Dovecot] migrate 15K users to new domain name

2012-04-19 Thread Jan-Frode Myklebust
On Thu, Apr 19, 2012 at 02:01:44PM +0300, Odhiambo Washington wrote:
 
 What do you mean by a new domain in this context?

The user's email addresses are changing from username@old.domain to
username@new-domain.

 Is the server changing?

No.

 Is the storage changing?

The user's home directory is based on the user's email address, so this
is changing.

 In my thinking, a domain change is as simple as using a rewrite rule in
 your MTA.

Also the user's login-names needs to change from old to new domain, and
all their data needs to move from old to new domain.


  -jf


Re: [Dovecot] migrate 15K users to new domain name

2012-04-19 Thread Jan-Frode Myklebust
On Thu, Apr 19, 2012 at 05:03:01PM +0300, Odhiambo Washington wrote:
 
 
 In my setup, I have virtual users. So the home directory is in the
 /var/spool/virtual/$domain/$user/mdbox
 
 How is yours setup?

mail_home = /srv/mailstore/%256LRHu/%Ld/%Ln

 If the domain name changed, from domain1 to domain2, I
 believe it would be easy to change as follows:
 
 cd /var/spool/virtual/
 mv $domain1 $domain2

If I could figure out what the %256LRHu hash is, mv would probably be a
very good solution..

 
 And the login names are stored in a flatfile or db??

LDAP

 Either way, you can do a rename.

No, we need to keep the old username/password working, so that all users
will get notified of the change -- even if they take off on a 6 month
vacation the day before the switch.

 
 Maybe I still don't understand you:-)

You seem to be understanding perfectly well. I've been staring myself
blind at dsync mirror, when a simple mv will probably work just as
well :-)


   -jf


Re: [Dovecot] Configuration advices for a 50000 mailboxes server(s)

2012-04-17 Thread Jan-Frode Myklebust
On Tue, Apr 17, 2012 at 08:54:15AM -0300, Mauricio López Riffo wrote:
 
 Here we have approx. 200K users with 4000 concurrent connections
 (90% POP3 users)

How do you measure concurrent POP3 users?

 All servers in virtual environment Vmware,
 supermicro servers and Netapp Metrocluster storage solutions (nfs
 storage with 10G ethernet network)  POP3 sessions take betwen 40 and
 300 milisecons at connect, auth and list.  All accounts lives in
 LDAP, CentOS 5 and exim like a mta relay.

Very interesting config. We're close to 1M accounts, GPFS cluster fs, LDAP,
RHEL5/6 and postfix + dovecot director for pop/imap/lmtp,  and moving from
maildir to mdbox. 

What mailbox-format are you using? Do you have a director, or accounts
sticky to a server some other way?

How's the NFS performance? I've always been wary of NFS working
terribly with many small files (i.e. maildir)..

What does the metrocluster give you? Is it for disaster recovery on
second location, or do you have two active locations working against the
same filesystem?


  -jf


Re: [Dovecot] Configuration advices for a 50000 mailboxes server(s)

2012-04-17 Thread Jan-Frode Myklebust
On Tue, Apr 17, 2012 at 10:10:02AM -0300, Mauricio López Riffo wrote:
 
 1M = 1 milion ?

976508 to be exact :-) but it's very much a useless number. Lots and lots
of these are inactive. A better number is probably that we're seeing
about 80 logins/second over the last hour.. (just checked now, not sure
if this is the busiest hour or not).

 How many servers you have?  hardware?

7 backend dovecot servers (two IBM x336, three x346 and two x3550, with
8 GB of memory for the x336/x346 and 16 GB for the x3550's).
2 frontend dovecot directors (IBM x3550).

None of these are really very busy, so we could probably reduce the
number of backends a bit if we wanted. Our struggle is the number of
iops we're able to get from the backend storage (IBM DS4800), mostly
a problem when we have storms of incoming marketing messages in addition
to the pop/imap traffic.


  -jf


Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-14 Thread Jan-Frode Myklebust
On Fri, Apr 13, 2012 at 07:33:19AM -0500, Stan Hoeppner wrote:
  
  What I meant wasn't the drive throwing uncorrectable read errors but
  the drives are returning different data that each think is correct or
  both may have sent the correct data but one of the set got corrupted
  on the fly. After reading the articles posted, maybe the correct term
  would be the controller receiving silently corrupted data, say due to
  bad cable on one.
 
 This simply can't happen.  What articles are you referring to?  If the
 author is stating what you say above, he simply doesn't know what he's
 talking about.

It has happened to me, with RAID5 not RAID1. It was a firmware bug
in the raid controller that caused the RAID array to become silently
corrupted. The HW reported everything green -- but the filesystem was
reporting lots of strange errors..  This LUN was part of a larger
filesystem striped over multiple LUNs, so parts of the fs were OK, while
other parts were corrupt.

It was this bug:

   
http://delivery04.dhe.ibm.com/sar/CMA/SDA/02igj/7/ibm_fw1_ds4kfc_07605200_anyos_anycpu.chg
   - Fix 432525 - CR139339  Data corruption found on drive after
 reconstruct from GHSP (Global Hot Spare)


snip

 In closing, I'll simply say this:  If hardware, whether a mobo-down SATA
 chip, or a $100K SGI SAN RAID controller, allowed silent data corruption
 or transmission to occur, there would be no storage industry, and we'll
 all still be using pen and paper.  The questions you're asking were
 solved by hardware and software engineers decades ago.  You're fretting
 and asking about things that were solved decades ago.

Look at the plans are for your favorite fs:

http://www.youtube.com/watch?v=FegjLbCnoBw

They're planning on doing metadata checksumming to be sure they don't
receive corrupted metadata from the backend storage, and say that data
validation is a storage subsystem *or* application problem. 

Hardly a solved problem..


  -jf


Re: [Dovecot] Bug tracker

2012-04-11 Thread Jan-Frode Myklebust
On Wed, Apr 11, 2012 at 09:26:20AM +0300, Timo Sirainen wrote:
 
 So, any suggestions for what software could do these things? I think Request 
 Tracker has those features, but it's not really the nicest/prettiest thing.
 

I didn't see open source as a requirement, so then I would give a plug
for Jira, which is the nicest/prettiest thing :-) And they provide free
hosted solution:

http://www.atlassian.com/software/jira/pricing

Apache/ASF is a heavy jira user, in case you're not familiar with it:

http://wiki.apache.org/general/ApacheJira
https://issues.apache.org/jira/


  -jf


Re: [Dovecot] Bug tracker

2012-04-11 Thread Jan-Frode Myklebust
On Wed, Apr 11, 2012 at 09:49:18AM +0300, Timo Sirainen wrote:
  
  I didn't see open source as a requirement, so then I would give a plug
  for Jira, which is the nicest/prettiest thing :-)
 
 I don't think it supports one of my requirements:
 
  I would have the option of adding a comment that doesn't go to the mailing 
  list
 
 Unless that's been added in a newer version.
 

There is an option for restricting who can view your comment, plus
Email notifications will only be sent to people who have permission to
view the relevant issue


http://confluence.atlassian.com/display/JIRA/Creating+a+Notification+Scheme

so I would expect it to be possible to define that the mailinglist is
not member of a group-b, while everyone else is, and restrict the comment to 
that group.

But best would probably be to discuss it with atlassion support...


  -jf


[Dovecot] doveadm purge on clusterfs

2012-03-27 Thread Jan-Frode Myklebust
Since doveadm service proxying apparently doesn't work with dovecot
v2.0, we need to find a way to safely run doveadm purge on the host the
user is logged into.

Would it be OK to run purge in the pop/imap postlogin scripts? We
already do a conditional:

if test /var/log/activemailaccounts/imap/$USER -ot \
        /var/log/activemailaccounts/today
then
    touch /var/log/activemailaccounts/imap/$USER
fi

so adding a:

doveadm purge -u $USER

in this section would make it run once a day for each user that logs in.
Does that sound like an OK solution?
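A self-contained sketch of that once-per-day gate, with temp paths standing in for /var/log/activemailaccounts and echo standing in for the real doveadm purge:

```shell
# Self-contained sketch of the daily purge gate from the post.
base=$(mktemp -d)               # stands in for /var/log/activemailaccounts
mkdir -p "$base/imap"
touch "$base/today"             # refreshed once per day by cron
USER=alice
marker="$base/imap/$USER"
touch -t 200001010000 "$marker" # pretend the user last logged in long ago
if test "$marker" -ot "$base/today"; then
    touch "$marker"
    echo doveadm purge -u "$USER"   # the real call, stubbed for dry-run
fi
rm -rf "$base"
# In the real post-login script, hand control back with: exec "$@"
```

Because the marker is touched inside the gate, subsequent logins on the same day skip the purge entirely.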


   -jf


Re: [Dovecot] dsync redesign

2012-03-24 Thread Jan-Frode Myklebust
On Sat, Mar 24, 2012 at 08:19:48AM +0100, Attila Nagy wrote:
 On 03/23/12 22:25, Timo Sirainen wrote:
 
 Well, dsync is a very useful tool, but with continuous replication
 it tries to solve a problem which should be handled -at least
 partially- elsewhere. Storing stuff in plain file systems and
 duplicating them to another one just doesn't scale.

I don't see why this shouldn't scale. Mailboxes are after all changed
relatively infrequently. One idea for making it more scalable might be
to treat indexes/metadata and messages differently. Make index/metadata
updates synchronous over the clusters/locations (with re-sync capability
in case of lost synchronisation), while messages are stored in one
altstorage per cluster/location.

For a two-location solution, message-data should be stored in:

mail_location = mdbox:~/mdbox
ALTcache=mdbox:~/mdbox-remoteip-cache
ALT=dfetch://remoteip/   -- new protocol

If a message is in the index, look for it in that order:

local mdbox
ALTcache
ALT

if it finds the message in ALT, make a copy into ALTcache (or local
mdbox?).

Synchronizing messages could be a very low-frequency job, and could be
handled by a simple rsync of ALT to ALTcache. No need for a specialized
tool for this job. Synchronizing ALTcache to the local mdbox could be done
with a reversed doveadm-altmove, but might not be necessary.

Of course this is probably all very naive.. but you get the idea :-)


   -jf


Re: [Dovecot] Creating an IMAP repo for ~100 users need some advice

2012-03-18 Thread Jan-Frode Myklebust
On Sun, Mar 18, 2012 at 11:36:25AM -0400, Charles Marcus wrote:
 
 Hmmm... wonder if there would be a way to add some kind of 'dummy'
 first message that dovecot would simply ignore (not show to the
 user), that would prevent that behavior?

That's what uw-imap does. It creates a message with the subject
DON'T DELETE THIS MESSAGE -- FOLDER INTERNAL DATA, which is very
annoying if your users have direct access to the mboxes...

http://www.washington.edu/imap/IMAP-FAQs/index.html#6.14


  -jf


Re: [Dovecot] OT: Distrowars - WAS: Re: seeking advice: dovecot versions; mailbox formats.

2012-03-09 Thread Jan-Frode Myklebust
On Thu, Mar 08, 2012 at 11:04:14AM -0500, Charles Marcus wrote:
 
 As for what mailbox format, there is no more 'dbox', it is either
 sdbox (like mbox one file per folder) or mdbox (multiple files per
 folder) -

Sdbox is like maildir, one message per file, while mdbox is more
like mbox:

http://wiki2.dovecot.org/MailboxFormat/dbox


 that said, mdbox seems to be the best general purpose, but
 my understanding is it can complicate things if something goes
 wrong, but it seems to be very solid.

It's a leap of faith to go with dovecot's own format, no longer being
able to use grep and mutt to poke at mail folders directly, but as a
server-side storage format it seems like the right way to go.


  -jf


Re: [Dovecot] dsync replication available for testing

2012-03-05 Thread Jan-Frode Myklebust
On Sun, Mar 04, 2012 at 01:38:14PM +0200, Timo Sirainen wrote:
  
  Great news. I would love to test it, if I will be able to run this on a 
  test 
  account, only. All other users should become synced the old way for the 
  time 
  being. 
  
  Would that be possible with the current implementation?
 
 1) Replicator syncs all users at startup. If you can change your userdb 
 iteration to return only one test user for replicator that avoids it. (You 
 may be able to do protocol replicator { userdb {..} } and protocol 
 !replicator { .. })

IMHO it would be great if it didn't sync all users. We probably have
hundreds of thousands of inactive users that we would like to sync at a
later point. Also, when we provision users, that's just an entry in an
LDAP directory without any files or directories, so dovecot shouldn't
create any directories for these before they've received mail or logged in.

So, ideally (for us), dovecot should keep a log of which accounts are
active (have received or checked mail), and on startup only sync users
that have been active in the last $timeperiode.


  -jf


Re: [Dovecot] dsync replication available for testing

2012-03-05 Thread Jan-Frode Myklebust
On Mon, Mar 05, 2012 at 12:45:26PM +0200, Timo Sirainen wrote:
  
  So, ideally (for us), dovecot should keep a log over which accounts are
  active (has received or checked mail), and only sync users that has been
  active for the last $timeperiode on startup.
 
 Well, all of this could be done already, although not very automatically.. 
 Whenever a new mail is delivered or user is logged in, the user's last-login 
 timestamp in SQL could be updated. And replicator's userdb iterate_query 
 could return only users whose last-login timestamp is new enough. The SQL 
 userdb could be used only by replicator, everything else could keep using 
 LDAP.
 

.. or we could keep touching /activemailaccounts/$address in post-login
scripts, and run doveadm sync for any user updated during the last
$timeperiode, avoiding the need for an SQL user database. But we still
don't have a last-login update on lmtp delivery... or has this changed?
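For reference, the SQL-based approach Timo describes above could look roughly like this; the table and column names are assumptions, not from this thread:

```
# dovecot-sql.conf.ext fragment (sketch), used only by the replicator's
# userdb. Assumes a table kept up to date at login/delivery time:
#   CREATE TABLE last_login (userid VARCHAR(255) PRIMARY KEY,
#                            last_login INT NOT NULL);
# Return only users active within the last 30 days:
iterate_query = SELECT userid AS user FROM last_login \
  WHERE last_login > UNIX_TIMESTAMP() - 30*86400
```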


  -jf


[Dovecot] fts size

2012-02-28 Thread Jan-Frode Myklebust
Does anybody have any numbers for how much storage one will need for
the fts indexing server? I see the wiki says 30% of mailbox size for
Squat (partial=4 full=4). Is it similar for lucene/solr?

Do I understand correctly that 
http://wiki2.dovecot.org/Plugins/FTS/Lucene 
will create an index for each user in his home directory? Will this be
accounted for in the users' quota?


  -jf


Re: [Dovecot] Multiple locations, 2 servers - planning questions...

2012-02-27 Thread Jan-Frode Myklebust
On Mon, Feb 27, 2012 at 02:51:54PM -0600, l...@airstreamcomm.net wrote:

  You could also look at GPFS
 (http://www-03.ibm.com/systems/software/gpfs/), which is not open source
 but it's apparently rock solid and I believe supports multisite clustering.

GPFS supports different modes of clustering. I think the appropriate
solution here would be to deploy a single cluster spanning 3 sites (the 3rd
site is needed for a quorum node; two sites can't work because you can't
protect against split brain). The simplest config would then be 3 nodes (but
you could have any number of nodes at each site):

quorum node1 on site1 with a local disk (or local SAN-disk) as Network 
Shared Disk (NSD)
quorum node2 on site2 with a local disk (or local SAN-disk) as Network 
Shared Disk (NSD)
quorum node3 on site3

The filesystem would be replicated (over IP) between the disk on site1 and
site2.  Should one site go down, the other site would survive as long as
it could still see the quorum node on site3. After a site has been down,
one would need to sync up the NSDs (mmrestripefs) to re-establish the
replication of any blocks that have been changed while it was down.
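A hedged sketch of the GPFS commands involved (the device and stanza-file names are made up, so double-check the flags against your GPFS release's administration guide):

```
# Create the filesystem with two replicas of data and metadata, so each
# of site1/site2 holds a full copy (nsd.stanza describes the two NSDs):
mmcrfs mailfs -F nsd.stanza -m 2 -M 2 -r 2 -R 2

# After a site outage, restore replication of blocks changed while one
# replica was unavailable:
mmrestripefs mailfs -r
```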


   -jf


Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend

2012-02-22 Thread Jan-Frode Myklebust
On Tue, Feb 21, 2012 at 02:33:24PM +, Ed W wrote:
 
 I think the original question was still sensible.  In your case it
 seems like the ping times are identical between:
 webmail - imap-proxy
 webmail - imap server
 
 I think your results show that a proxy has little (or negative)
 benefit in this situation, but it seems feasible that a proxy could
 eliminate several RTT trips in the event that the proxy is closer
 than the imap server?  This might happen if say the imap server is
 in a different datacenter (webmail on an office server machine?)

The webmail/imapproxy were actually running in a different datacenter
from the dovecot director/backend servers, but only about 20 km away.

Ping tests:

webmail -> director:

rtt min/avg/max/mdev = 0.933/1.061/2.034/0.183 ms

director -> backend:

rtt min/avg/max/mdev = 0.104/0.108/0.127/0.005 ms

webmail -> localhost:

rtt min/avg/max/mdev = 0.020/0.062/1.866/0.257 ms


  -jf


Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend

2012-02-22 Thread Jan-Frode Myklebust
On Wed, Feb 22, 2012 at 09:31:55AM +, Ed W wrote:
 
 It seems intuitive that the proxy installed locally could save you
 2x RTT increment, which is about 0.8ms in your case.  So I might
 expect the proxy to reduce rendering times by around 1.6ms simply
 because it reduces the number of round trips to login?  Kind of
 curious why that's not achieved..?

Each http-request can probably trigger several IMAP requests. Maybe
these work better in parallel directly to dovecot than serialized (?)
through the imapproxy? No idea if that's what's happening... or maybe
the imapproxy just adds more overhead than the 2xRTT + imap logins it's
supposed to save us?


  -jf


Re: [Dovecot] Homedir vs locations vs mail_location?

2012-02-20 Thread Jan-Frode Myklebust
On Mon, Feb 20, 2012 at 09:57:15AM +0300, Alexander Chekalin wrote:
 
 1. The homedir value points to the place where everything for the
 user stored at, while mail_location is something (some place) where
 mail stored at. if I deal with pure virtual users (all users are in
 sql tables and no system homes for them at all), should I ever care
 for returning meaningful value for 'homedir' (via password_query's
 userdb_home), or I can simple return empty or constant ('' or '123')
 for it and it won't mess anything?

Dovecot will store non-mailfiles in the homedir. F.ex. quota-files, 
sieve scripts, subscription file, .dovecot-lda.dupes, and probably more.
So do yourself a favour and create a real homedir for each user :-)

http://wiki2.dovecot.org/VirtualUsers/Home

 
 2. If I use single (default) namespace, should I set namespace's
 location (to the same value as global mail_location), and should I
 expect anything strange if I skip it to set? Reversely, is it
 possible not to set global mail_location and set only namespace's
 location (which would be more logical as namespace definition is
 compact and easy to find in config)?
 

We have a single namespace, with blank location:

namespace {
hidden = no
inbox = yes
list = yes
location = 
prefix = INBOX.
separator = .
subscriptions = yes
type = private
}

But I don't really know the purpose of this location field vs.
mail_location.


  -jf


Re: [Dovecot] Any possibility of running query after sucessful login?

2012-02-16 Thread Jan-Frode Myklebust
On Thu, Feb 16, 2012 at 09:41:52AM +0100, Jacek Osiecki wrote:
 
 I was just wondering if there is any possibility of running another
 query after successful login - just to fill some extra field like
 last_login? 

We touch a file in /var/log/activemailaccounts/$username on every
successful login through post-login scripting:

http://wiki2.dovecot.org/PostLoginScripting
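A minimal sketch of such a post-login script; the stamp directory, the fallback user name and the script location are illustrative assumptions (the wiki page above has the authoritative wrapper details):

```shell
#!/bin/sh
# Hypothetical post-login hook: record the login time as the mtime of a
# per-user stamp file. STAMP_DIR defaults to a local dir for this sketch;
# in production it would be something like /var/log/activemailaccounts.
STAMP_DIR="${STAMP_DIR:-./activemailaccounts}"
mkdir -p "$STAMP_DIR"
touch "$STAMP_DIR/${USER:-testuser}"
# Dovecot passes the real mail process as arguments; hand control to it.
exec "$@"
```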


  -jf


Re: [Dovecot] Any possibility of running query after sucessful login?

2012-02-16 Thread Jan-Frode Myklebust
On Thu, Feb 16, 2012 at 12:18:15PM +0100, Jacek Osiecki wrote:
 
 By the way, is such thing possible for other processess? For
 example, I'd like to set in mysql table information that mail has
 been delivered using lmtp. Would something like this work?
 
 protocol lmtp {
   mail_plugins = $mail_plugins sieve
   executable = lmtp postlmtp
 }

I've been asking about the same thing lately, and unfortunately that's not
possible. There's no login involved with lmtp, and each lmtp-session
can have multiple recipients.. Maybe it can be solved through a global
sieve script?


  -jf


Re: [Dovecot] pop3 not autocreating directory structure

2012-02-15 Thread Jan-Frode Myklebust
On Wed, Feb 15, 2012 at 03:49:21AM +0200, Timo Sirainen wrote:
  
  Looking at the timestamps in the filesystem I see that the user's home
  directory wasn't created before switching to imap.
  
  Is this a known problem? 
 
 Probably again a bug in your specific Dovecot version. :) I remember doing 
 fixes related to this (not entirely sure if it was only for v2.1).
 

Is it maybe changeset 11683:148fccbe9f32 you remember:

- - maildir: sometimes rm -rf Maildir;imaptest logout=0 gives
-   Error: Opening INBOX failed: Mailbox doesn't exist: INBOX

This was just an update to the todo-list, but I can't see what the fix
was. Also, if it was only occasionally failing, it might not be that
critical.. So far it's only happened for one user in the last 36
hours, so either it's only occasionally failing, or the other new users
are visiting webmail/imap before pop.


  -jf


[Dovecot] pop3 not autocreating directory structure

2012-02-14 Thread Jan-Frode Myklebust
We use:

mail_home = /srv/mailstore/%256LRHu/%Ld/%Ln
mail_location = mdbox:~/mdbox

and I just noticed one of our newly provisioned users initially failed
to pop her mails. I saw several of these:

dovecot::  pop3(new.u...@example.net): Error: Couldn't open INBOX: 
Mailbox doesn't exist: INBOX
dovecot::  pop3(new.u...@example.net): Couldn't open INBOX top=0/0, 
retr=0/0, del=0/0, size=0

before she switched to imap and then everything looked fine:

dovecot::  imap(new.u...@example.net): Disconnected: Logged out 
bytes=11/338

Looking at the timestamps in the filesystem I see that the user's home
directory wasn't created before switching to imap.

Is this a known problem? 


  -jf


[Dovecot] doveadm director proxy

2012-02-14 Thread Jan-Frode Myklebust
I'm trying to configure a doveadm service that will proxy through our
directors, following the recipie at:

http://wiki2.dovecot.org/Director#Doveadm_server

So on the backends I have:

service doveadm {
inet_listener {
port = 24245
address = *
}
}
doveadm_proxy_port = 24245
local 192.168.42.0/24 {
doveadm_password = suPerSeecret
}

I assume the local line is supposed to point at my local network..?

On the directors I have the same, plus:

protocol doveadm {
auth_socket_path = director-userdb
}

When testing doveadm quota on the directors, it complained about the
quota plugin not being loaded, so I added:

mail_plugins=quota

Then it complained about doveadm_password not set, can't authenticate,
so I added:

doveadm_password = suPerSeecret

in the main section. Now I get through to my backend servers, but they
complain about:

dovecot::  doveadm: Error: doveadm client attempted non-PLAIN 
authentication

Any ideas for what that might be? This is with dovecot v2.0.14.


  -jf


[Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend

2012-02-13 Thread Jan-Frode Myklebust
We've been collecting some stats to see what kind of benefit
UP/SquirrelMail's IMAP Proxy provides for our SOGo webmail users. Dovecot is
running in High-performance mode http://wiki2.dovecot.org/LoginProcess
with authentication caching http://wiki2.dovecot.org/Authentication/Caching

During the weekend two servers (webmail3 and webmail4) have been running
with local imapproxy and two servers without (webmail1 and webmail2). Each
server has served about 1 million http requests over 3 days. 

server  avg. response time  # requests

webmail1.example.net   0.370411    1092386
webmail2.example.net   0.374227    1045141
webmail3.example.net   0.378097    1043919  imapproxy
webmail4.example.net   0.378593    1028653  imapproxy


ONLY requests that took more than 5 seconds to process:

server  avg. response time  # requests

webmail1.example.net   26.048  1125
webmail2.example.net   26.2997 1080
webmail3.example.net   28.5596 808  imapproxy
webmail4.example.net   27.1004 964  imapproxy

ONLY requests that took more than 10 seconds to process:

server  avg. response time  # requests

webmail1.example.net   49.1407 516
webmail2.example.net   53.0139 459
webmail3.example.net   59.7906 333  imapproxy
webmail4.example.net   58.167  384  imapproxy

The response times are not very fast, but they do seem to support
the claim that an imapproxy isn't needed for dovecot.


  -jf


Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend

2012-02-13 Thread Jan-Frode Myklebust
On Mon, Feb 13, 2012 at 04:14:22PM +0200, Timo Sirainen wrote:
  The response times are not very fast, but they do seem to support
  the claim that an imapproxy isn't needed for dovecot.
 
 That's what I always suspected, but good to have someone actually test it. :) 
 This is with Maildir?

Yes, this is maildirs (on GPFS).

 
 Other things that would be interesting to try out (both from latency and disk 
 IO usage point of view):
 
  - maildir_very_dirty_syncs

We already have

$ doveconf maildir_very_dirty_syncs
maildir_very_dirty_syncs = yes

but I don't think this gave the advantage I was expecting.. Was
expecting this to move most iops to the index-luns, but the maildir
luns seem just as busy.

  - mail_prefetch_count (Linux+maildir only, v2.1+)

I'll look into whether this works with GPFS when we upgrade to v2.1. GPFS
has its own page cache, so I have no idea if it will respect
POSIX_FADV_WILLNEED or if one will need to use its own APIs for
hinting:


http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r4.gpfs300.doc%2Fbl1adm_mlacrge.html


  -jf


Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend

2012-02-13 Thread Jan-Frode Myklebust
On Mon, Feb 13, 2012 at 11:08:48AM -0800, Mark Moseley wrote:
 
 Out of curiosity, are you running dovecot locally on those webmail
 servers as well, or is it talking to remote dovecot servers?

The webmail servers are talking with dovecot director servers which in
turn are talking with the backend dovecot servers. Each service running
on different servers.

Webmail servers -> director servers -> backend servers

 I ask because I'm looking at moving our webmail from an on-box setup to a
 remote pool to support director and was going to look into whether
 running imapproxyd would help there. We don't bother with it in the
 local setup, since dovecot is so fast, but remote (but still on a LAN)
 might be different.

Doesn't seem so to us...

 Though imapproxyd seems to make (wait for it...)
 squirrelmail unhappy (complains about IMAP errors, when sniffing shows
 none), though I've not bothered to debug it yet.

:-)


  -jf


Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend

2012-02-13 Thread Jan-Frode Myklebust
On Mon, Feb 13, 2012 at 09:57:31PM +0200, Timo Sirainen wrote:
  
  $ doveconf maildir_very_dirty_syncs
  maildir_very_dirty_syncs = yes
  
  but I don't think this gave the advantage I was expecting.. Was
  expecting this to move most iops to the index-luns, but the maildir
  luns seem just as busy.
 
 This setting should get rid of almost all readdir() calls. If it doesn't, 
 something's not working right.


With maildir_very_dirty_syncs = yes:

ReadMB/s  WriteMB/s F_open  f_close reads   writes  rdir  inode
  1.5 0.0   96  92  514 73  9   7
  1.2 0.0   59  43  367 18  4   76
  1.7 0.0   66  61  477 67  2   6
  1.2 0.0   54  50  348 31  1   145
  3.0 0.0   113 90  860 59  7   8
  2.9 0.0   107 99  840 58  5   11
  4.0 0.0   131 101 1117    77  2   65

With maildir_very_dirty_syncs = no (same node, a few seconds later):

ReadMB/s  WriteMB/s F_open  f_close reads   writes  rdir  inode
  4.6 0.9   125 91  1161    1096    41  6
  2.3 0.7   200 170 697 127 86  16
  1.1 0.6   124 99  406 61  48  109
  2.7 0.1   212 144 755 114 74  15
  2.7 0.0   159 133 818 70  78  194
  0.8 1.2   86  73  225 60  16  9
  1.9 0.0   124 116 573 53  30  6

So it seems to be working, good :-)


  -jf


Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend

2012-02-13 Thread Jan-Frode Myklebust
On Mon, Feb 13, 2012 at 01:24:25PM -0700, Michael M Slusarz wrote:
 
 Except you are most likely NOT leveraging the truly interesting part
 of imapproxy - the ability to restore the IMAP connection state via
 the XPROXYREUSE status response.  This is a significant performance
 improvement since it also reduces processing load on the client side
 (everything before/including authentication needs to be done whether
 using imapproxy or not, so there is no client-side savings for these
 commands).

Thanks for this info, good to know. I'll check with inverse/sogo if this
is something they use/intend to use..

 
 additional advantage. Note that the XPROXYREUSE-enabled MUA must be
 the exclusive user of the imapproxy instance for this feature to
 work correctly.

Not a problem. Assuming it doesn't also need to be the only imap user of
the account/folder.

BTW: do you also have information on the state of select caching in the
up-imapproxy? I got some very negative comments when googling it, and the
changelog didn't suggest there had been any improvements since..


  -jf


Re: [Dovecot] Issues with SIS and Backups - was Re: v2.1.0 status

2012-02-12 Thread Jan-Frode Myklebust
On Sun, Feb 12, 2012 at 05:58:20PM +0200, Timo Sirainen wrote:
 
 doveadm backup -u user@domain backup:
 
 And it would output the user's messages to stdout (or to some file). So it 
 would be similar to e.g. PostgreSQL's pg_dump.

So only full backups, no incremental backups? Then what's the benefit
over just copying the files (of a snapshot)?


  -jf


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-02-09 Thread Jan-Frode Myklebust
On Thu, Feb 09, 2012 at 01:48:09AM +0200, Timo Sirainen wrote:
 On 7.2.2012, at 10.25, Jan-Frode Myklebust wrote:
 
  Feb  6 16:13:10 loadbalancer2 dovecot: lmtp(6601): Panic: file 
  lmtp-proxy.c: line 376 (lmtp_proxy_output_timeout): assertion failed: 
  (proxy->data_input->eof)
 ..
  Should I try increasing LMTP_PROXY_DATA_INPUT_TIMEOUT_MSECS, or do you have 
  any
  other ideas for what might be causing it ?
 
 
 The backend server didn't reply within LMTP_PROXY_DEFAULT_TIMEOUT_MSECS
 (30 secs).

It's actually 60 sec in v2.0


http://hg.dovecot.org/dovecot-2.0/file/750db4b4c7d3/src/lmtp/lmtp-proxy.c#l13

 It still shouldn't have crashed of course, and that crash is already fixed in 
 v2.1
 (in the LMTP simplification change). 

Do you think we should rather run v2.1-rc* on our dovecot directors
(for IMAP, POP3 and LMTP), even if we keep the backend servers on v2.0 ?

 Anyway, you can fix this without recompiling by returning e.g. 
 proxy_timeout=60 passdb extra field for 60 secs timeout.

Thanks, we'll consider that option if it crashes too often... We have only
seen this problem once in the last week.


  -jf


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-02-07 Thread Jan-Frode Myklebust
On Mon, Feb 06, 2012 at 10:01:03PM +0100, Jan-Frode Myklebust wrote:
  Your fsyncs can run over 60 seconds?
 
 Hopefully not.. maybe just me being confused by the error message about
 lmtp_proxy_output_timeout. After adding
 http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c on friday, we haven't
 seen any problems so it looks like this problem is solved.

Crap, I saw 6 "message might be sent more than once" messages from postfix
yesterday, all at the time of this crash on the director postfix/lmtp was
talking with:

Feb  6 16:13:10 loadbalancer2 dovecot: lmtp(6601): Panic: file 
lmtp-proxy.c: line 376 (lmtp_proxy_output_timeout): assertion failed: 
(proxy->data_input->eof)
Feb  6 16:13:10 loadbalancer2 dovecot: lmtp(6601): Error: Raw 
backtrace: /usr/lib64/dovecot/libdovecot.so.0 [0x2ab6f193d680] -> 
/usr/lib64/dovecot/libdovecot.so.0 [0x2ab6f193d6d6] -> 
/usr/lib64/dovecot/libdovecot.so.0 [0x2ab6f193cb93] -> dovecot/lmtp [0x406d75] 
-> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handle_timeouts+0xcd) 
[0x2ab6f194859d] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x68) [0x2ab6f1949558] 
-> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x2d) [0x2ab6f194820d] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x2ab6f1936a83] -> 
dovecot/lmtp(main+0x144) [0x403fa4] -> /lib64/libc.so.6(__libc_start_main+0xf4) 
[0x35f8a1d994] -> dovecot/lmtp [0x403da9]
Feb  6 16:13:10 loadbalancer2 dovecot: master: Error: service(lmtp): 
child 6601 killed with signal 6 (core dumps disabled)


Should I try increasing LMTP_PROXY_DATA_INPUT_TIMEOUT_MSECS, or do you have any
other ideas for what might be causing it ?


  -jf


[Dovecot] doveadm purge on shared storage

2012-02-06 Thread Jan-Frode Myklebust
We've finally (!) started to put some users on mdbox instead of maildir,
and now I'm wondering about the purge step. As we're running GPFS for the
mailboxes (and dovecot director in front of every dovecot service), is
it important to run the doveadm purge -u $user on the same host as 
$user is logged into, to avoid index corruption?

If so, will we need to run the doveadm purge through the dovecot director as
well?


  -jf


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-02-06 Thread Jan-Frode Myklebust
On Mon, Feb 06, 2012 at 10:29:03PM +0200, Timo Sirainen wrote:
 On 3.2.2012, at 14.25, Jan-Frode Myklebust wrote:
 
  I now implemented this patch on our directors, and pointed postfix at them.
  No problem seen so far, but I'm still a bit uncertain about the
  LMTP_PROXY_DATA_INPUT_TIMEOUT_MSECS. I know we're experiencing quite
  large delays when fsync'ing (slow IMAP APPEND). Do you think increasing
  LMTP_PROXY_DATA_INPUT_TIMEOUT_MSECS is a sensible workaround if we start
  seeing lmtp_proxy_output_timeout problems again ?
 
 Your fsyncs can run over 60 seconds?

Hopefully not.. maybe just me being confused by the error message about
lmtp_proxy_output_timeout. After adding
http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c on friday, we haven't
seen any problems so it looks like this problem is solved.

But it doesn't seem unthinkable that ext3 users might see more than 60s
for fsyncs... Some stalls on the order of minutes have been reported
ref: https://lwn.net/Articles/328363/

 I think even if you increase Dovecot's timeout you'll soon reach your MTA's 
 LMTP timeout.
 

My MTA's default is 10 minutes..

http://www.postfix.org/postconf.5.html#lmtp_data_done_timeout



  -jf


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-02-03 Thread Jan-Frode Myklebust
On Wed, Jan 18, 2012 at 09:03:18PM +0200, Timo Sirainen wrote:
  
  I think the way I originally planned LMTP proxying to work is simply too
  complex to work reliably, perhaps even if the code was bug-free. So
  instead of reading+writing DATA at the same time, this patch changes the
  DATA to be first read into memory or temp file, and then from there read
  and sent to the LMTP backends:
  
  http://hg.dovecot.org/dovecot-2.1/raw-rev/51d87deb5c26
  
  
  unfortunately I haven't tested that patch, so I have no idea if it 
  fixed the issues or not...
 
 I'm not sure if that patch is useful or not. The important patch to fix it is 
 http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c

I now implemented this patch on our directors, and pointed postfix at them.
No problem seen so far, but I'm still a bit uncertain about the
LMTP_PROXY_DATA_INPUT_TIMEOUT_MSECS. I know we're experiencing quite
large delays when fsync'ing (slow IMAP APPEND). Do you think increasing
LMTP_PROXY_DATA_INPUT_TIMEOUT_MSECS is a sensible workaround if we start
seeing lmtp_proxy_output_timeout problems again ?


  -jf


Re: [Dovecot] Doubts about dsync, mdbox, SIS

2012-02-03 Thread Jan-Frode Myklebust
On Thu, Feb 02, 2012 at 02:41:11PM +0200, Timo Sirainen wrote:
 
  That is most likely related to your troubles. If the dsync runs crash,
  the result could leave extra files lying around etc..
  
  If dsync backup is supposed to be a viable backup solution, I think it
  should fail much better. If it see errors on the target side it should
  clear the target and do a full sync. Manually cleaning up after it's
  problems is too much work.
 
 Of course. But if no one gives me enough information to reproduce problems, I 
 can't really fix anything. I don't really have time to spend guessing ways to 
 make it break. I've been using dsync to backup my own mails for over a year, 
 with zero problems.

I'm reducing the complexity now, removing SIS and starting the backups
from scratch again. I'll start posting the problems I see over the
weekend..

 
Error: Mailboxes don't have unique GUIDs: 
  08b46439069d3d4db049e671bf84 is shared by INBOX and INBOX
 
 What about:
 
 doveadm mailbox status -u user@domain guid '*'
 
 in source server?

INBOX   guid=08b46439069d3d4db049e671bf84
INBOX.Sent  guid=e8f6e431bf6e014f2d78e671bf84
INBOX.Trash guid=c858f2234a1d5d4e154758d3d19f
INBOX.Draftsguid=e9f6e431bf6e014f2d78e671bf84
INBOX.Spam  guid=eaf6e431bf6e014f2d78e671bf84
INBOX.Sent Messages guid=d837512bed7d674e685c58d3d19f
INBOX.INBOX.Sent Messages guid=ebf6e431bf6e014f2d78e671bf84
INBOX.Notes guid=c0d2250109645e4eed5c58d3d19f

 in dest server? Does one list show two INBOXes or otherwise duplicate GUIDs? 
 Perhaps this was a bug in v2.0.14..

Scratched dest server before I replied.. sorry. 


 
Error: Failed to sync mailbox INBOX.ferie 2006.: Invalid mailbox name
  
  Is this a namespace prefix? It shouldn't be trying to sync a mailbox
  named this (there's an extra . suffix).
  
  I believe it's a folder named INBOX.ferie 2006., with the user using
  the namespace separator in the folder name..  I believe dovecot allows
  this, so it should also handle backing it up.
 
 It has never been possible to create such folder via Dovecot. IMAP protocol 
  itself prevents that. CREATE "foo." will end up creating "foo", not "foo.". If 
 you manually mkdir that, it's not possible to access the mailbox in any way 
 via Dovecot. Everything will simply fail as:

Oh, sorry.. then this is a problem created by @mail, which poked
directly in the filesystem. Guess we'll have to clean these up manually.


  -jf


Re: [Dovecot] Doubts about dsync, mdbox, SIS

2012-02-02 Thread Jan-Frode Myklebust
On Thu, Feb 02, 2012 at 12:23:01PM +0200, Timo Sirainen wrote:
 
 Note that with SIS the attachments aren't compressed.

Yes, I know. 

 
  /srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-057274283bb51f4f917ebf34f6ab
   is
  missing, but there are 205 other copies of this file named
  /srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-*
   with
  identical sha1sum.
 
 All of them have a link count of 2, with the other link being in hashes/
 directory?

No, these have link count=207. I don't know what you mean by the link
being in the hashes/ directory.

# ls -l 
/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-*|head
-rw--- 207 mailbackup mailbackup 149265 Jan  9 23:31 
/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-0069222e0c080f4f754abf34f6ab
-rw--- 207 mailbackup mailbackup 149265 Jan  9 23:31 
/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-00ffb9312a370e4f6b61bf34f6ab
-rw--- 207 mailbackup mailbackup 149265 Jan  9 23:31 
/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-0442c5163ad3114fb478bf34f6ab
-rw--- 207 mailbackup mailbackup 149265 Jan  9 23:31 
/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-04f288390052144f012dbf34f6ab
-rw--- 207 mailbackup mailbackup 149265 Jan  9 23:31 
/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-053b6c0f185a0d4fc421bf34f6ab
-rw--- 207 mailbackup mailbackup 149265 Jan  9 23:31 
/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-06c98213c3b30e4fac3cbf34f6ab
-rw--- 207 mailbackup mailbackup 149265 Jan  9 23:31 
/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-076573234fbd0b4fa862bf34f6ab

This is just one example, I can provide tons of other examples.. Hmm,
I see now that there are 206 files of that first example with the 207
links, and here's another example with numlinks=7:

# ls -l  
/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-*|wc 
-l
206

and numlinks=4:

# ls -al 
/srv/mailbackup/attachments/c3/1b/c31beb42ef78810f7fb81a7086144034fb0fd794*|wc 
-l
3

is dovecot somehow creating numlinks+1 copies of every file it
hardlinks?? That would explain my disk usage :-)



 That is most likely related to your troubles. If the dsync runs crash,
 the result could leave extra files lying around etc..

If dsync backup is supposed to be a viable backup solution, I think it
should fail much better. If it see errors on the target side it should
clear the target and do a full sync. Manually cleaning up after it's
problems is too much work.

 
  Some samples:
  
  Error: Mailboxes don't have unique GUIDs: 
  08b46439069d3d4db049e671bf84 is shared by INBOX and INBOX
 
 This is a little bit strange. What is the doveconf -n output of the
 source server?


# 2.0.14: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-194.26.1.el5 x86_64 Red Hat Enterprise Linux Server
# release 5.5 (Tikanga) 
auth_cache_size = 100 M
auth_verbose = yes
auth_verbose_passwords = sha1
disable_plaintext_auth = no
login_trusted_networks = 192.168.0.0/16
mail_gid = 3000
mail_home = /srv/mailstore/%256RHu/%d/%n
mail_location = maildir:~/:INDEX=/indexes/%1u/%1.1u/%u
mail_max_userip_connections = 20
mail_plugins = quota zlib
mail_uid = 3000
maildir_stat_dirs = yes
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date
mmap_disable = yes
namespace {
  inbox = yes
  location = 
  prefix = INBOX.
  separator = .
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  quota = dict:UserQuota::file:%h/dovecot-quota
  sieve = /sieve/%1u/%1.1u/%u/.dovecot.sieve
  sieve_dir = /sieve/%1u/%1.1u/%u
  sieve_max_script_size = 1M
  zlib_save = gz
  zlib_save_level = 6
}
postmaster_address = postmas...@example.net
protocols = imap pop3 lmtp sieve
service auth-worker {
  user = $default_internal_user
}
service auth {
  client_limit = 4521
  unix_listener auth-userdb {
group = 
mode = 0600
user = atmail
  }
}
service imap-login {
  inet_listener imap {
address = *
port = 143
  }
  process_min_avail = 4
  service_count = 0
  vsz_limit = 1 G
}
service imap-postlogin {
  executable = script-login /usr/local/sbin/imap-postlogin.sh
}
service imap {
  executable = imap imap-postlogin
  process_limit = 2048
}
service lmtp {
  client_limit = 1
  inet_listener lmtp {
address = *
port = 24
  }
  process_limit = 25
}
service managesieve-login {
  inet_listener sieve {
address = *
port = 4190
  }

Re: [Dovecot] Doubts about dsync, mdbox, SIS

2012-02-02 Thread Jan-Frode Myklebust
On Thu, Feb 02, 2012 at 12:31:20PM +0100, Jan-Frode Myklebust wrote:
 
 and numlinks=4:
 
   # ls -al 
 /srv/mailbackup/attachments/c3/1b/c31beb42ef78810f7fb81a7086144034fb0fd794*|wc
  -l
   3
 
 is dovecot somehow creating numlinks+1 copies of every file it
 hardlinks?? Would explain my diskusage :-)
 

Sorry, brainfart.. Yes, these are hardlinks to the same inode..


# ls -i  c31beb42ef78810f7fb81a7086144034fb0fd794* 
../c31beb42ef78810f7fb81a7086144034fb0fd794*
2422693 c31beb42ef78810f7fb81a7086144034fb0fd794
2422693 
../c31beb42ef78810f7fb81a7086144034fb0fd794-13b405342e24284f6153bf34f6ab
2422693 
../c31beb42ef78810f7fb81a7086144034fb0fd794-1cb405342e24284f6153bf34f6ab
2422693 
../c31beb42ef78810f7fb81a7086144034fb0fd794-4eb405342e24284f6153bf34f6ab


  -jf


Re: [Dovecot] Question about quota configuration

2012-02-02 Thread Jan-Frode Myklebust
On Thu, Feb 02, 2012 at 11:58:12PM +0100, przemek.orzechow...@makolab.pl wrote:
 
 What I'm trying to do now is to modify the postfix-procmail-dovecot config 
 in a way that if a user is over quota, mail delivery is delayed instead of
 bounced. 
 (is this possible?)

Check the quota_full_tempfail setting,

http://wiki.dovecot.org/MainConfig

 
 The second thing I would like to achieve is that when authenticated users
 close to their quota/group quota (for example within 10MB of the limit)
 try sending email, their mail is rejected and preferably an email is
 generated telling them to free some space for new mails first.
 (is such a thing possible?)

Check Quota warnings at http://wiki.dovecot.org/Quota/1.1
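For reference, the warning mechanism described on that page looks roughly like this (the script path and thresholds here are illustrative):

```
plugin {
  # Run the quota-warning service when usage crosses the thresholds:
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
}
service quota-warning {
  # quota-warning.sh would send the "free some space" mail to the user.
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
    user = dovecot
  }
}
```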


  -jf


[Dovecot] Doubts about dsync, mdbox, SIS

2012-02-01 Thread Jan-Frode Myklebust
I've been running continuous dsync backups of our Maildirs for a few
weeks now, with the destination dsync server using mdbox and SIS. The
idea was that the destination server would act as a warm copy of 
all our active users' data.

The active servers are using Maildir, and has:

$ df -h /usr/local/atmail/users/
Filesystem            Size  Used Avail Use% Mounted on
/dev/atmailusers   14T   12T  2.2T  85% /usr/local/atmail/users
$ df -hi /usr/local/atmail/users/
Filesystem            Inodes  IUsed  IFree IUse% Mounted on
/dev/atmailusers        145M   113M    33M   78% /usr/local/atmail/users

very little of this is compressed (zlib plugin enabled during christmas).

I'm surprised that the destination server is so large; I was expecting
zlib, mdbox and SIS to compress it down to much less than what we're
seeing (12 TB -> 5 TB):

$ df -h /srv/mailbackup
Filesystem                             Size  Used Avail Use% Mounted on
/dev/mapper/mailbackupvg-mailbackuplv  5.7T  4.8T  882G  85% /srv/mailbackup

Lots and lots of the attachment storage is duplicated into identical
files instead of hardlinked.

When running doveadm purge -u $user, we're seeing lots of 

Error: unlink(/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-057274283bb51f4f917ebf34f6ab) failed: No such file or directory

/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-057274283bb51f4f917ebf34f6ab
is missing, but there are 205 other copies of this file named
/srv/mailbackup/attachments/c3/17/c317b32b97688c16859956f11b803e3bba434349-*
with identical sha1sums.
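For what it's worth, a quick way to spot this kind of failed
de-duplication is to look for hashes that exist as several separate
inodes instead of hardlinks to one file. A sketch (GNU find; assumes
the "sha1-suffix" file naming shown above):

```shell
#!/bin/sh
# Sketch: report attachment hashes that are stored as multiple separate
# inodes (link count 1) instead of as hardlinks to one file.
# Assumes the "<sha1>-<suffix>" file naming shown above; GNU find.
find_dup_attachments() {
    find "$1" -type f -links 1 -printf '%f\n' |
        cut -c1-40 |                  # keep only the 40-char SHA-1 prefix
        sort | uniq -c |
        awk '$1 > 1 {print $2, $1}'   # hashes stored more than once
}

# Demo on a throwaway directory with one duplicated hash:
dir=$(mktemp -d)
h=c317b32b97688c16859956f11b803e3bba434349
touch "$dir/$h-copy1" "$dir/$h-copy2"
touch "$dir/0000000000000000000000000000000000000000-single"
find_dup_attachments "$dir"
rm -rf "$dir"
```

Anything this prints is a candidate for re-running SIS de-duplication
(or at least a sign that the attachment dir needs attention).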

Also, we see corrupted indexes during the purge. This makes me quite
uncertain whether dsync is a workable backup solution, or whether we can
trust mdboxes.

Also on the source side, during dsync, we see too many problems. Some samples:

Error: Mailboxes don't have unique GUIDs: 08b46439069d3d4db049e671bf84 is shared by INBOX and INBOX
Error: command BOX-LIST failed
Error: Worker server's mailbox iteration failed
Error: read() from worker server failed: EOF

Error: Failed to sync mailbox INBOX.ferie 2006.: Invalid mailbox name
Error: read() from proxy client failed: EOF

Error: Unexpected finish reply: 1  596fec275888dbd89f6d1f5356c22db6   37200   \dsync-expunged 0
Error: Unexpected reply from server: 1  12200572a70726fca946da6f9378dc0337210   \dsync-expunged 0

Error: Failed to sync mailbox INBOX.INBOX.Gerda: Mailbox doesn't exist: INBOX/Gerda
Error: command BOX-LIST failed

Error: read() failed: Broken pipe
Panic: file dsync-worker-local.c: line 1678 
(local_worker_save_msg_continue): assertion failed: (ret == -1)
Error: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0 [0x367703c680]
-> /usr/lib64/dovecot/libdovecot.so.0(default_fatal_handler+0x35) [0x367703c765]
-> /usr/lib64/dovecot/libdovecot.so.0 [0x367703bb93]
-> /usr/bin/dsync [0x40f48d] -> /usr/bin/dsync [0x40f589]
-> /usr/bin/dsync(dsync_worker_msg_save+0x8e) [0x40eb3e]
-> /usr/bin/dsync [0x40d71a] -> /usr/bin/dsync [0x40cdbf] -> /usr/bin/dsync [0x40d105]
-> /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x48) [0x3677047278]
-> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0xd5) [0x36770485c5]
-> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x2d) [0x367704720d]
-> /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x3677035a83]
-> /usr/bin/dsync(main+0x71e) [0x406c4e]
-> /lib64/libc.so.6(__libc_start_main+0xf4) [0x3e3941d994] -> /usr/bin/dsync [0x406369]


Do you have any idea what our problems might be? Should we:

avoid SIS ?
avoid doing Maildir on one side and mdbox on the other?
try other dovecot version for dsync?
anything else?


   -jf

- destination server, running dovecot v2.0.14 
mail_attachment_dir = /srv/mailbackup/attachments
mail_location = mdbox:~/mdbox
mail_plugins = zlib
mdbox_rotate_size = 5 M
namespace {
  inbox = yes
  location = 
  prefix = INBOX.
  separator = .
  type = private
}
passdb {
  driver = static
}
plugin {
  zlib_save = gz
  zlib_save_level = 9
}
protocols = 
service auth-worker {
  user = $default_internal_user
}
service auth {
  unix_listener auth-userdb {
mode = 0600
user = mailbackup
  }
}
ssl = no
userdb {
  args = home=/srv/mailbackup/%256Hu/%d/%n
  driver = static
}
-/destination server 


  -jf


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-01-18 Thread Jan-Frode Myklebust
On Wed, Jan 18, 2012 at 07:58:31PM +0200, Timo Sirainen wrote:
 
  --i.e. all the
  suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not
  the case? Is there anything else (beyond moving to a director-based
  architecture) that can mitigate the risk of index corruption? In our
  case, incoming IMAP/POP are 'stuck' to servers based on IP persistence
  for a given amount of time, but incoming LDA is randomly distributed.
 
 What's the problem with director-based architecture?

It hasn't been working reliably for lmtp in v2.0. To quote yourself:

----------------------------------------------------------------------

I think the way I originally planned LMTP proxying to work is simply too
complex to work reliably, perhaps even if the code was bug-free. So
instead of reading+writing DATA at the same time, this patch changes the
DATA to be first read into memory or temp file, and then from there read
and sent to the LMTP backends:

http://hg.dovecot.org/dovecot-2.1/raw-rev/51d87deb5c26

----------------------------------------------------------------------

unfortunately I haven't tested that patch, so I have no idea if it 
fixed the issues or not...


  -jf


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-01-18 Thread Jan-Frode Myklebust
On Wed, Jan 18, 2012 at 09:03:18PM +0200, Timo Sirainen wrote:
 On 18.1.2012, at 20.51, Jan-Frode Myklebust wrote:
 
  What's the problem with director-based architecture?
  
  It hasn't been working reliably for lmtp in v2.0.
 
 Yes, besides that :)

Besides that it's great!


  unfortunately I haven't tested that patch, so I have no idea if it 
  fixed the issues or not...
 
 I'm not sure if that patch is useful or not. The important patch to fix it is 
 http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c

So with that one-liner on our directors, you expect lmtp proxying
through the director to be better than lmtp via rr-dns towards the
backend servers? If so, I guess we should give it another try.


  -jf


[Dovecot] resolve mail_home ?

2012-01-17 Thread Jan-Frode Myklebust

I now have mail_home = /srv/mailstore/%256RHu/%d/%n. Is there any way
of asking dovecot where a user's home directory is?

It's not in doveadm user:

$ doveadm user -f home janfr...@lyse.net
$ doveadm user janfr...@tanso.net
userdb: janfr...@tanso.net
  mail  : mdbox:~/mdbox
  quota_rule: *:storage=1048576

Alternatively, is there an easy way to calculate the %256RHu hash ?


  -jf


[Dovecot] dsync conversion and ldap attributes

2012-01-13 Thread Jan-Frode Myklebust

I have:

mail_home = /srv/mailstore/%256RHu/%d/%n
mail_location = maildir:~/:INDEX=/indexes/%1u/%1.1u/%u

userdb {
args = /etc/dovecot/dovecot-ldap.conf.ext
driver = ldap
}

and the dovecot-ldap.conf.ext specifies:

user_attrs = mailMessageStore=home, mailLocation=mail, mailQuota=quota_rule=*:storage=%$


Now I want to convert individual users to mdbox using dsync, but how do
I tell location2 not to fetch home and mail from LDAP, and to use a
different mail_location (mdbox:~/mdbox)? I.e. I want converted accounts
stored in mail_location mdbox:/srv/mailstore/%256RHu/%d/%n/mdbox.
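(One untested idea, for reference: dsync accepts a destination location
argument, and a '~' in that location should still expand to the
userdb-provided home, so something like the following might do the
per-user conversion without touching the LDAP userdb at all. The
address is a placeholder.)

```
sudo dsync -u someuser@example.net mirror mdbox:~/mdbox
```

After a successful run, the user's mailLocation attribute in LDAP could
then be switched to mdbox:~/mdbox.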



  -jf


[Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)

2012-01-03 Thread Jan-Frode Myklebust
On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote:
 Nice setup.  I've mentioned GPFS for cluster use on this list before,
 but I think you're the only operator to confirm using it.  I'm sure
 others would be interested in hearing of your first hand experience:
 pros, cons, performance, etc.  And a ball park figure on the licensing
 costs, whether one can only use GPFS on IBM storage or if storage from
 others vendors is allowed in the GPFS pool.

I used to work for IBM, so I've been a bit uneasy about pushing GPFS too
hard publicly, for fear of being accused of bias. But I changed jobs in
November, so now I'm only a satisfied customer :-)

Pros:
    Extremely simple to configure and manage. Assuming root on all
    nodes can ssh freely, and port 1191/tcp is open between the
    nodes, these are the commands to create the cluster, create an
    NSD (network shared disk), and create a filesystem:

        # echo "hostname1:manager-quorum" >> NodeFile   # "manager" means this node can be selected as filesystem manager
        # echo "hostname2:manager-quorum" >> NodeFile   # "quorum" means this node has a vote in the quorum selection
        # echo "hostname3:manager-quorum" >> NodeFile   # all my nodes are usually the same, so they all have the same roles
        # mmcrcluster -n NodeFile -p $(hostname) -A

        ### sdb1 is either a local disk on hostname1 (in which case the other nodes will access it over tcp to hostname1),
        ### or a SAN disk that they can access directly over FC/iSCSI.
        # echo "sdb1:hostname1::dataAndMetadata::" >> DescFile   # this disk can be used for both data and metadata
        # mmcrnsd -F DescFile

        # mmstartup -A                  # start GPFS services on all nodes
        # mmcrfs /gpfs1 gpfs1 -F DescFile
        # mount /gpfs1

You can add and remove disks from the filesystem, and change most
settings without downtime. You can scale out your workload by adding
more nodes (SAN attached or not), and scale out your disk performance
by adding more disks on the fly. (IBM uses GPFS to create
scale-out NAS solutions 
http://www-03.ibm.com/systems/storage/network/sonas/ ,
which highlights a few of the features available with GPFS)

There's no problem running GPFS on other vendors' disk systems. I've
used Nexsan SATAboy earlier, for an HPC cluster. One can easily move
from one disk system to another without downtime.

Cons:
It has its own page cache, statically configured, so you don't get the
"all available memory used for page caching" behaviour that you
normally get on Linux.

There is a kernel module that needs to be rebuilt on every upgrade.
It's a simple process, but it needs to be done, and means we can't just
run "yum update ; reboot" to upgrade.

% export SHARKCLONEROOT=/usr/lpp/mmfs/src
% cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr
% vi /usr/lpp/mmfs/src/config/site.mcr   # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION
% cd /usr/lpp/mmfs/src/ ; make clean ; make World
% su - root
# export SHARKCLONEROOT=/usr/lpp/mmfs/src
# cd /usr/lpp/mmfs/src/ ; make InstallImages


 
 To this point IIRC everyone here doing clusters is using NFS, GFS, or
 OCFS.  Each has its downsides, mostly because everyone is using maildir.
  NFS has locking issues with shared dovecot index files.  GFS and OCFS
 have filesystem metadata performance issues.  How does GPFS perform with
 your maildir workload?

Maildir is likely a worst-case workload for filesystems: millions of
tiny files, making all I/O random, and getting minimal use out of the
controller read cache (unless you can cache all active files). So I've
concluded that our performance issues are mostly design errors (and the
fact that there were no better mail storage formats than maildir at the
time these servers were implemented). I expect moving to mdbox will
fix all our performance issues.

I *think* GPFS is as good as it gets for maildir storage on a cluster
filesystem, but I have no numbers to back that up... It would be very
interesting if we could somehow compare numbers for a few cluster
filesystems.

I believe our main limitation in this setup is the iops we can get from
the backend storage system. It's hard to balance the I/O over enough
RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each),
and we're always having hotspots. Right now two arrays are doing 100
iops, while others are doing 4-500 iops. I would very much like to
replace it with something smarter, where we can utilize SSDs for active
data and something slower for stale data. GPFS can manage this by
itself through its ILM interface, but we don't have the very fast
storage to put in as tier-1.
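For reference, that ILM tiering is driven by placement/migration rules
fed to mmapplypolicy. A rough sketch of what such a policy might look
like (the pool names "ssd" and "sata" and the 30-day threshold are
made-up placeholders; check the GPFS policy documentation for the exact
syntax on your version):

```
/* Hypothetical GPFS ILM policy: place new files on the fast pool,
   migrate files not accessed for 30 days to the slow pool. */
RULE 'placement' SET POOL 'ssd'
RULE 'cooldown'  MIGRATE FROM POOL 'ssd' TO POOL 'sata'
    WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
```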


  -jf


Re: [Dovecot] Dsync fails on second sync for folders with dot in the name

2012-01-03 Thread Jan-Frode Myklebust
On Tue, Jan 03, 2012 at 02:00:08PM +0200, Timo Sirainen wrote:
 
 So here on source you have namespace separator '.' and in destination
 you have separator '/'? Maybe that's the problem? Try with both having
 '.' separator.

I added this namespace on the destination:

namespace {
  inbox = yes
  location = 
  prefix = INBOX.
  separator = .
  type = private
}

and am getting the same error:

dsync-remote(janfr...@tanso.net): Error: Can't delete mailbox directory INBOX.a: Mailbox has children, delete them first

This was with a freshly created .a.b folder on source. With no messages
in .a.b and also no plain .a folder on source:

$ find /usr/local/atmail/users/j/a/janfr...@tanso.net/.a*
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b/maildirfolder
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b/cur
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b/new
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b/tmp
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b/dovecot-uidlist


  -jf


Re: [Dovecot] Dsync fails on second sync for folders with dot in the name

2012-01-03 Thread Jan-Frode Myklebust
On Tue, Jan 03, 2012 at 02:34:59PM +0200, Timo Sirainen wrote:
 On Tue, 2012-01-03 at 13:12 +0100, Jan-Frode Myklebust wrote:
  dsync-remote(janfr...@tanso.net): Error: Can't delete mailbox directory 
  INBOX.a: Mailbox has children, delete them first
 
 Oh, this happens only with dsync backup, and only with Maildir++ - FS
 layout change. You can simply ignore this error, or patch with
 http://hg.dovecot.org/dovecot-2.0/rev/69c6d7436f7f that hides it.

Oh, it was so quick to fail that I didn't realize it had successfully
updated the remote mailboxes :-) Thanks!

But isn't it a bug that users are allowed to create folders named .a.b,
or that dovecot creates this as a folder named .a.b instead of .a/.b,
when the separator is '.'?


  -jf


Re: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)

2012-01-03 Thread Jan-Frode Myklebust
On Wed, Jan 04, 2012 at 12:09:39AM -0600, l...@airstreamcomm.net wrote:
  Could you remark on GPFS services hosting mail storage over a WAN between 
 two geographically separated data centers?

I haven't tried that, but I know the theory quite well. There are two
or three options:

1 - Shared SAN between the data centers. Should work the same as
    a single data center, but you'd want to use disk quorum or
    a quorum node on a third site to avoid split brain.

2 - Different SANs on the two sites. Disks on SAN1 would belong
    to failure group 1 and disks on SAN2 would belong to failure
    group 2. GPFS will write every block to disks in different
    failure groups. Nodes on location 1 will use SAN1 directly,
    and write to SAN2 via tcp/ip to nodes on location 2 (and vice
    versa). It's configurable whether you want to return success
    when the first replica is written (asynchronous replication),
    or whether you need both replicas to be written. Ref: mmcrfs -K:

    http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r4.gpfs300.doc%2Fbl1adm_mmcrfs.html

    With asynchronous replication it will try to allocate both
    replicas, but if that fails you can re-establish the
    replication level later using mmrestripefs.

    Reading will happen from a direct-attached disk if possible,
    and over tcp/ip if there is no local replica of the needed
    block.

    Again you'll need a quorum node on a third site to avoid split brain.


3 - GPFS multi-cluster. Separate GPFS clusters on the two
    locations. Let them mount each other's filesystems over IP,
    and access disks over either SAN or IP network. Each cluster
    is managed locally; if one site goes down, the other site also
    loses access to the fs.

    I don't have any experience with this kind of config, but I
    believe it's quite popular for sharing filesystems between
    HPC sites.


http://www.ibm.com/developerworks/systems/library/es-multiclustergpfs/index.html

http://www.cisl.ucar.edu/hss/ssg/presentations/storage/NCAR-GPFS_Elahi.pdf


  -jf


Re: [Dovecot] Dsync fails on second sync for folders with dot in the name

2012-01-02 Thread Jan-Frode Myklebust
On Mon, Jan 02, 2012 at 09:51:00AM -0500, Charles Marcus wrote:
 
 dovecot -n output? What are you using for the namespace hierarchy separator?

I have the folder format's default separator (maildir: '.'), but
dovecot still creates directories named .a.b.

On receiving dsync server:
=
$ dovecot -n
# 2.0.14: /etc/dovecot/dovecot.conf
mail_location = mdbox:~/mdbox
mail_plugins = zlib
mdbox_rotate_size = 5 M
passdb {
  driver = static
}
plugin {
  zlib_save = gz
  zlib_save_level = 9
}
protocols = 
service auth-worker {
  user = $default_internal_user
}
service auth {
  unix_listener auth-userdb {
mode = 0600
user = mailbackup
  }
}
ssl = no
userdb {
  args = home=/srv/mailbackup/%256Hu/%d/%n
  driver = static
}


On POP/IMAP-server:


=
$ doveconf -n
# 2.0.14: /etc/dovecot/dovecot.conf
auth_cache_size = 100 M
auth_verbose = yes
auth_verbose_passwords = sha1
disable_plaintext_auth = no
login_trusted_networks = 192.168.0.0/16
mail_gid = 3000
mail_location = maildir:~/:INDEX=/indexes/%1u/%1.1u/%u
mail_plugins = quota zlib
mail_uid = 3000
maildir_stat_dirs = yes
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date
mmap_disable = yes
namespace {
  inbox = yes
  location = 
  prefix = INBOX.
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  quota = maildir:UserQuota
  sieve = /sieve/%1u/%1.1u/%u/.dovecot.sieve
  sieve_dir = /sieve/%1u/%1.1u/%u
  sieve_max_script_size = 1M
  zlib_save = gz
  zlib_save_level = 6
}
postmaster_address = postmas...@example.net
protocols = imap pop3 lmtp sieve
service auth-worker {
  user = $default_internal_user
}
service auth {
  client_limit = 4521
  unix_listener auth-userdb {
group = 
mode = 0600
user = atmail
  }
}
service imap-login {
  inet_listener imap {
address = *
port = 143
  }
  process_min_avail = 4
  service_count = 0
  vsz_limit = 1 G
}
service imap-postlogin {
  executable = script-login /usr/local/sbin/imap-postlogin.sh
}
service imap {
  executable = imap imap-postlogin
  process_limit = 2048
}
service lmtp {
  client_limit = 1
  inet_listener lmtp {
address = *
port = 24
  }
  process_limit = 25
}
service managesieve-login {
  inet_listener sieve {
address = *
port = 4190
  }
  service_count = 1
}
service pop3-login {
  inet_listener pop3 {
address = *
port = 110
  }
  process_min_avail = 4
  service_count = 0
  vsz_limit = 1 G
}
service pop3-postlogin {
  executable = script-login /usr/local/sbin/pop3-postlogin.sh
}
service pop3 {
  executable = pop3 pop3-postlogin
  process_limit = 2048
}
ssl = no
userdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
protocol lmtp {
  mail_plugins = quota zlib sieve
}
protocol imap {
  imap_client_workarounds = delay-newmail
  mail_plugins = quota zlib imap_quota
}
protocol pop3 {
  mail_plugins = quota zlib
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_uidl_format = UID%u-%v
}
protocol sieve {
  managesieve_logout_format = bytes=%i/%o
}



  -jf


[Dovecot] Dsync fails on second sync for folders with dot in the name

2012-01-01 Thread Jan-Frode Myklebust
I'm in the process of running our first dsync backup of all users
(from maildir to mdbox on a remote server), and one problem I'm hitting
is that dsync will work fine on the first run for some users, and then
reliably fail whenever I try a new run:

$ sudo dsync -u janfr...@example.net backup ssh -q mailbac...@repo1.example.net dsync -u janfr...@example.net
$ sudo dsync -u janfr...@example.net backup ssh -q mailbac...@repo1.example.net dsync -u janfr...@example.net
dsync-remote(janfr...@example.net): Error: Can't delete mailbox directory INBOX/a: Mailbox has children, delete them first

The problem here seems to be that this user has a maildir named
".a.b". On the backup side I see this as "a/b/".

So dsync doesn't quite seem to agree with itself about how to handle
folders with dots in the name.


   -jf


Re: [Dovecot] doveadm + dsync merging

2011-12-30 Thread Jan-Frode Myklebust
On Thu, Dec 29, 2011 at 08:59:35PM +0100, Attila Nagy wrote:
 
 Slightly different, but it would be good to have a persistently
 running daemon which could operate both in server and client mode.
 In server mode it would listen on a TCP socket. In client mode it
 would accept source and target information via a control socket. The
 target IP address and port would be the daemon's listening socket.
 

Great idea! 

 
 The next thing would be to follow dovecot logs and do a sync/async
 replication. :)

It's not too hard to do async already. If you have last-login tracking
in the post-login scripts, you can use it to know which users to
trigger async backups for every X minutes.
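As a sketch of how that could look (the last-login file format here,
"user<TAB>unixtime", is a made-up example of what a post-login script
might write, and the dsync call is only echoed as a dry run):

```shell
#!/bin/sh
# Sketch: pick users whose tracked last login is within INTERVAL
# seconds, and trigger a dsync backup for each of them.
INTERVAL=600   # seconds; run this from cron every 10 minutes

recent_users() {   # usage: recent_users <lastlog-file>
    awk -v c="$(( $(date +%s) - INTERVAL ))" '$2 >= c {print $1}' "$1"
}

# Demo with a throwaway log: one fresh login, one day-old login.
log=$(mktemp)
printf 'fresh@example.net\t%s\n' "$(date +%s)" >> "$log"
printf 'stale@example.net\t%s\n' "$(( $(date +%s) - 86400 ))" >> "$log"
for u in $(recent_users "$log"); do
    # dry run; replace echo with the real dsync backup invocation
    echo dsync -u "$u" backup "ssh -q mailbackup@repo1.example.net dsync -u $u"
done
rm -f "$log"
```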


  -jf


[Dovecot] lmtp-postlogin ?

2011-12-30 Thread Jan-Frode Myklebust
We have last-login tracking for imap and pop, and I intend to use this
for deciding which users to back up daily. But it would also be nice to
back up users who have only received messages, but not logged in lately.
So, is it possible to implement last-login tracking for lmtp?

I naively tried copying the settings from imap, but it didn't work:

service lmtp-postlogin {
  executable = script-login /usr/local/sbin/lmtp-postlogin.sh
  unix_listener lmtp-postlogin {
  }
}

service lmtp {
  executable = lmtp lmtp-postlogin
   snip


  -jf


Re: [Dovecot] lmtp-postlogin ?

2011-12-30 Thread Jan-Frode Myklebust
On Fri, Dec 30, 2011 at 02:03:34PM +0200, Timo Sirainen wrote:
 
 LMTP supports authentication, but Dovecot doesn't support it. And you most 
 likely didn't mean that anyway.

Yes, I know.. 

 So, when would it be executed? When client connects? After each RCPT TO? 
 After DATA?

For my async backup-purposes any time after RCPT TO would be fine. I
just want to know which users has received any message the last X hours.
But i guess the ideal place would be at the time lmtp logs that it's
saved a message to a mailbox. Guess a workaround is to grep for these
in the log.
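As a sketch of that grep workaround (the log line shape below is an
assumption about how lmtp deliveries appear in syslog; check it against
your own mail log before trusting it):

```shell
#!/bin/sh
# Sketch: extract usernames from LMTP delivery lines in the mail log.
# Assumes lines shaped roughly like:
#   ... dovecot: lmtp(1234, user@example.net): ABC: msgid=<...>: saved mail to INBOX
lmtp_recipients() {   # usage: lmtp_recipients <maillog>
    grep 'saved mail to' "$1" |
        sed -n 's/.*lmtp([0-9]*, \([^)]*\)).*/\1/p' |
        sort -u
}

# Demo with a fabricated log line:
log=$(mktemp)
echo 'Dec 30 12:00:01 mx1 dovecot: lmtp(1234, user@example.net): Abc123: msgid=<x@y>: saved mail to INBOX' >> "$log"
lmtp_recipients "$log"
rm -f "$log"
```

Feeding that list into the same per-user dsync loop as the last-login
one would cover users who only receive mail.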

 Maybe create a new plugin for this using notify plugin.

Is there any documentation for this plugin? I've tried searching both
this list and the wikis.


  -jf


Re: [Dovecot] Compressing existing maildirs

2011-12-30 Thread Jan-Frode Myklebust
On Thu, Dec 29, 2011 at 07:00:03AM -0600, Stan Hoeppner wrote:
  We just got rid of the legacy app that worked directly against the
  maildirs, which is the reason we now can turn on compression. I
  intend to switch to mdbox, but first I need to free up some disks by
  compressing the existing maildirs (12 TB maildirs, should probably
  compress down to less than half).
 
 How much additional space do you expect the conversion process to
 compressed mdbox to consume?

Somewhere around 1/3 of the current usage, I expect..

 It shouldn't need much.  Using dsync, the
 conversion will be done one mailbox at a time and the existing emails
 will be compressed when written into the new mdbox mailbox.

Yes, I know, but I intend to do more than just convert to mdbox. I want
to fix the whole folder structure*, in a new filesystem with different
settings (turn on metadata replication, and possibly also data
replication). So I need to free up some disks before this can start.

[*] move away from @Mail's /atmail/a/b/abuser@domain folder structure to
mdbox:/srv/mailbackup/%256Hu/%d/%n, stop having home=inbox,
possibly use many smaller fs's instead of one huge one, and move the
indexes inside home...


  -jf


Re: [Dovecot] Compressing existing maildirs

2011-12-30 Thread Jan-Frode Myklebust
On Fri, Dec 30, 2011 at 06:38:28PM -0600, Stan Hoeppner wrote:
 
 Roger that.  Good strategy.  You using SAN storage or local RAID?  What
 filesystem do you plan to use for the new mailbox location?  What OS is
 the Dovecot host?

IBM DS4800 SAN storage. The filesystem is IBM GPFS, which stripes all
I/O over all the RAID5 LUNs it has assigned. Kind of like RAID5+0. To
guard against disaster if one RAID5 array should fail, we plan on
replicating the filesystem metadata on different sets of LUNs.

OS is RHEL (currently RHEL4 and RHEL5, but new servers are implemented
on RHEL6).

 Lastly, how many users you have?  Sorry for prying,

I'd rather not say, but we're an ISP with about 250,000 residential
customers and multiple mailboxes per customer.

 I'm always really curious about system details when someone states they
 have 12TB of mailbox data. ;)

$ df -h /usr/local/atmail/users
Filesystem        Size  Used Avail Use% Mounted on
/dev/atmailusers   14T   12T  2.1T  85% /usr/local/atmail/users
$ df -hi /usr/local/atmail/users
Filesystem        Inodes IUsed IFree IUse% Mounted on
/dev/atmailusers    145M  109M   37M   75% /usr/local/atmail/users

Looking forward to reducing the number of inodes when we finally move
to mdbox. It should do wonders for the backup process.


  -jf

