Re: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)

2012-01-03 Thread Jan-Frode Myklebust
On Wed, Jan 04, 2012 at 12:09:39AM -0600, l...@airstreamcomm.net wrote:
>  Could you remark on GPFS services hosting mail storage over a WAN between 
> two geographically separated data centers?

I haven't tried that, but I know the theory quite well. There are 2 or 3 options:

1 - shared SAN between the data centers. Should work the same as
 a single data center, but you'd want to use disk quorum or
 a quorum node on a 3rd site to avoid split brain.

2 - different SANs on the two sites. Disks on SAN1 would belong
to failure group 1 and disks on SAN2 would belong to failure
group 2. GPFS will write every block to disks in different
failure groups. Nodes on location 1 will use SAN1 directly,
and write to SAN2 via tcp/ip to nodes on location 2 (and vice
versa). It's configurable whether you want to return success when
the first block is written (asynchronous replication), or whether
you need both replicas to be written. Ref: mmcrfs -K:


http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r4.gpfs300.doc%2Fbl1adm_mmcrfs.html

   With asynchronous replication it will try to allocate both
   replicas, but if that fails you can re-establish the
   replication level later using "mmrestripefs".

   Reading will happen from a direct-attached disk if possible,
   and over tcp/ip if there is no local replica of the needed
   block.

   Again you'll need a quorum node on a 3rd site to avoid split brain.
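
   To make that concrete, here's a minimal sketch of what the
   replication-related bits might look like (disk names, failure
   group numbers and the exact flags are illustrative -- check the
   mmcrfs/mmrestripefs man pages before copying any of this):

       # put each site's disks in a different failure group (5th field),
       # so GPFS places the two replicas on different sites:
       echo "sdb1:hostname1::dataAndMetadata:1:" >  DescFile
       echo "sdc1:hostname4::dataAndMetadata:2:" >> DescFile

       # -m/-r 2 = two replicas of metadata/data, -K whenpossible =
       # don't fail writes when only one failure group is reachable:
       mmcrfs /gpfs1 gpfs1 -F DescFile -m 2 -M 2 -r 2 -R 2 -K whenpossible

       # later, re-replicate any blocks that only got one copy:
       mmrestripefs gpfs1 -R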


3 - GPFS multi-cluster. Separate GPFS clusters on the two
locations. Let them mount each other's filesystems over IP,
and access disks over either SAN or IP network. Each cluster is
managed locally; if one site goes down, the other site also
loses access to the fs.

I don't have any experience with this kind of config, but I believe
it's quite popular for sharing filesystems between HPC sites.


http://www.ibm.com/developerworks/systems/library/es-multiclustergpfs/index.html

http://www.cisl.ucar.edu/hss/ssg/presentations/storage/NCAR-GPFS_Elahi.pdf
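
For the record, the multi-cluster setup is roughly the following (a
sketch from memory of the mm* documentation, not a tested recipe;
cluster names, node names and filesystem names are made up):

    # on both clusters: generate a key pair and enable authentication
    mmauth genkey new
    mmauth update . -l AUTHONLY

    # on the owning cluster: let the remote cluster mount gpfs1
    mmauth add clusterB.example.com -k clusterB_id_rsa.pub
    mmauth grant clusterB.example.com -f gpfs1

    # on the remote cluster: define the owning cluster and its fs
    mmremotecluster add clusterA.example.com -n nodeA1,nodeA2 -k clusterA_id_rsa.pub
    mmremotefs add rgpfs1 -f gpfs1 -C clusterA.example.com -T /rgpfs1
    mount /rgpfs1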


  -jf


Re: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)

2012-01-03 Thread l...@airstreamcomm.net
Great information, thank you.  Could you remark on GPFS services hosting mail 
storage over a WAN between two geographically separated data centers?

- Reply message -
From: "Jan-Frode Myklebust" 
To: "Stan Hoeppner" 
Cc: "Timo Sirainen" , 
Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing 
maildirs)
Date: Tue, Jan 3, 2012 2:14 am


On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote:
> Nice setup.  I've mentioned GPFS for cluster use on this list before,
> but I think you're the only operator to confirm using it.  I'm sure
> others would be interested in hearing of your first hand experience:
> pros, cons, performance, etc.  And a ball park figure on the licensing
> costs, whether one can only use GPFS on IBM storage or if storage from
> others vendors is allowed in the GPFS pool.

I used to work for IBM, so I've been a bit uneasy about pushing GPFS too
hard publicly, at the risk of being accused of bias. But I changed jobs in
November, so now I'm only a satisfied customer :-)

Pros:
Extremely simple to configure and manage. Assuming root on all
nodes can ssh freely, and port 1191/tcp is open between the
nodes, these are the commands to create the cluster, create an
NSD (network shared disk), and create a filesystem:

    # echo hostname1:manager-quorum > NodeFile    # "manager" means this node can be selected as filesystem manager
    # echo hostname2:manager-quorum >> NodeFile   # "quorum" means this node has a vote in the quorum selection
    # echo hostname3:manager-quorum >> NodeFile   # all my nodes are usually the same, so they all have the same roles
    # mmcrcluster -n NodeFile -p $(hostname) -A

    ### sdb1 is either a local disk on hostname1 (in which case the other
    ### nodes will access it over tcp via hostname1), or a SAN disk that
    ### they can access directly over FC/iSCSI.
    # echo sdb1:hostname1::dataAndMetadata:: > DescFile   # this disk can be used for both data and metadata
    # mmcrnsd -F DescFile

    # mmstartup -A   # starts GPFS services on all nodes
    # mmcrfs /gpfs1 gpfs1 -F DescFile
    # mount /gpfs1

You can add and remove disks from the filesystem, and change most
settings without downtime. You can scale out your workload by adding
more nodes (SAN attached or not), and scale out your disk performance
by adding more disks on the fly. (IBM uses GPFS to create scale-out
NAS solutions, http://www-03.ibm.com/systems/storage/network/sonas/ ,
which highlights a few of the features available with GPFS.)
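
A hedged sketch of the on-the-fly disk operations (the NSD name and
descriptor file are made up; see the mmadddisk/mmdeldisk man pages):

    # grow the filesystem with a new NSD and rebalance existing data onto it
    mmadddisk gpfs1 -F NewDescFile -r

    # remove an old disk; GPFS migrates its data away first
    mmdeldisk gpfs1 nsd_old_01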

There's no problem running GPFS on other vendors' disk systems. I've
used Nexsan SATAboy earlier, for a HPC cluster. One can easily move
from one disksystem to another without downtime.

Cons:
It has its own page cache, statically configured. So you don't get the
"all available memory used for page caching" behaviour as you normally
do on linux.
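
The page cache size is the "pagepool" parameter; a sketch of bumping
it, with an arbitrary value (older releases may need the GPFS daemon
restarted for this to take effect):

    # set the GPFS page pool to 4G on all nodes, -i = effective immediately
    mmchconfig pagepool=4G -i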

There is a kernel module that needs to be rebuilt on every
upgrade. It's a simple process, but it needs to be done and means we
can't just run "yum update ; reboot" to upgrade.

% export SHARKCLONEROOT=/usr/lpp/mmfs/src
% cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr
% vi /usr/lpp/mmfs/src/config/site.mcr   # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION
% cd /usr/lpp/mmfs/src/ ; make clean ; make World
% su - root
# export SHARKCLONEROOT=/usr/lpp/mmfs/src
# cd /usr/lpp/mmfs/src/ ; make InstallImages


> 
> To this point IIRC everyone here doing clusters is using NFS, GFS, or
> OCFS.  Each has its downsides, mostly because everyone is using maildir.
>  NFS has locking issues with shared dovecot index files.  GFS and OCFS
> have filesystem metadata performance issues.  How does GPFS perform with
> your maildir workload?

Maildir is likely a worst case type workload for filesystems. Millions
of tiny-tiny files, making all IO random, and getting minimal controller
read cache utilized (unless you can cache all active files). So I've
concluded that our performance issues are mostly design errors (and the
fact that there were no better mail storage formats than maildir at the
time these servers were implemented). I expect moving to mdbox will 
fix all our performance issues.

I *think* GPFS is as good as it gets for maildir storage on clusterfs,
but I have no numbers to back that up ... It would be very interesting if we
could somehow compare numbers for a few clusterfs'.

I believe our main limitation in this setup is the iops we can get from
the backend storage system. It's hard to balance the IO over enough
RAID arrays (the fs is spread over 11 RAID5 arrays o

Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread WJCarpenter

On 1/3/2012 5:25 PM, Charles Marcus wrote:
I think y'all are missing the point... not sure, because I'm still not 
completely sure that this is saying what I think it is saying (that's 
why I asked)...


I'm sure I'm not missing the point.  My comment was that password length 
and complexity are probably more important than bcrypt versus sha1, and 
you've already addressed those.  Given that you use strong 15-character 
passwords, pretty much all hash functions are already out of reach for 
brute force.  bcrypt is probably better in the same sense that it's 
harder to drive a car to Saturn than it is to drive to Mars.




Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread Michael Orlitzky

On 01/03/2012 08:25 PM, Charles Marcus wrote:


What I'm worried about is the worst case scenario of someone getting
ahold of the entire user database of *stored* passwords, where they can
then take their time and brute force them at their leisure, on *their*
*own* systems, without having to hammer my server over smtp/imap and
without the automated limit of *my* fail2ban getting in their way.


To prevent rainbow table attacks, salt your passwords. You can make them 
a little bit more difficult in plenty of ways, but salt is the /solution/.
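
For instance, a quick sketch with Dovecot's own tool (assuming a
salted scheme like SHA512-CRYPT is available in your build):

  # each run embeds a fresh random salt, so equal passwords hash differently
  doveadm pw -s SHA512-CRYPT -p secret
  doveadm pw -s SHA512-CRYPT -p secret   # different output than the first run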




As for people writing their passwords down... our policy is that it is a
potentially *firable* *offense* (never even encountered one case of
anyone posting their password, and I'm on these systems off and on all
the time) if they do post these anywhere that is not under lock and key.
Also, I always set up their email clients for them (on their
workstations and on their phones - and of course tell it to remember the
password, so they basically never have to enter it.)


You realize they're just walking around with a $400 post-it note with 
the password written on it, right?


Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread David Ford
On 01/03/2012 08:25 PM, Charles Marcus wrote:
>
> I think y'all are missing the point... not sure, because I'm still not
> completely sure that this is saying what I think it is saying (that's
> why I asked)...
>
> I'm not worried about *active* brute force attacks against my server
> using the standard smtp or imap protocols - fail2ban takes care of
> those in a hurry.
>
> What I'm worried about is the worst case scenario of someone getting
> ahold of the entire user database of *stored* passwords, where they
> can then take their time and brute force them at their leisure, on
> *their* *own* systems, without having to hammer my server over
> smtp/imap and without the automated limit of *my* fail2ban getting in
> their way.
>
> As for people writing their passwords down... our policy is that it is
> a potentially *firable* *offense* (never even encountered one case of
> anyone posting their password, and I'm on these systems off and on all
> the time) if they do post these anywhere that is not under lock and
> key. Also, I always set up their email clients for them (on their
> workstations and on their phones - and of course tell it to remember
> the password, so they basically never have to enter it.)

perhaps.  part of my point, along with brute-force resistance, is that
when security becomes onerous to the typical user, such as requiring
non-repeating passwords of "10 characters including punctuation and mixed
case", even stalwart policy followers start tending toward avoiding it. 
if anyone has a stressful job, spends a lot of time working, missing
sleep, and is thereby prone to memory lapses, it's almost a sure guarantee
they *will* write it down/store it somewhere -- usually not in a
password safe.  or, they'll export their saved passwords to make a
backup plain-text copy and leave it in their Desktop folder, coyly
named and prefixed with a few random emails to grandma so mr. sysadmin
doesn't notice it.

on a tangent, you should worry about active brute force attacks. 
fail2ban and iptables heuristics become meaningless when the brute
forcing is done by botnets, which are more and more common than
single-host attacks these days.  one IP per attempt in a 10-20 minute
window will probably never trigger any of these methods.



Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread Charles Marcus

On 2012-01-03 6:12 PM, WJCarpenter  wrote:

On 1/3/2012 2:38 PM, Simon Brereton wrote:

http://xkcd.com/936/


As the saying goes, entropy ain't what it used to be.

https://www.grc.com/haystack.htm

However, both links actually illustrate the same point: once you get
past dictionary attacks, the length of the password is the dominant
factor in the strength of the password against brute force attacks.


I think y'all are missing the point... not sure, because I'm still not 
completely sure that this is saying what I think it is saying (that's 
why I asked)...


I'm not worried about *active* brute force attacks against my server 
using the standard smtp or imap protocols - fail2ban takes care of those 
in a hurry.


What I'm worried about is the worst case scenario of someone getting 
ahold of the entire user database of *stored* passwords, where they can 
then take their time and brute force them at their leisure, on *their* 
*own* systems, without having to hammer my server over smtp/imap and 
without the automated limit of *my* fail2ban getting in their way.


As for people writing their passwords down... our policy is that it is a 
potentially *firable* *offense* (never even encountered one case of 
anyone posting their password, and I'm on these systems off and on all 
the time) if they do post these anywhere that is not under lock and key. 
Also, I always set up their email clients for them (on their 
workstations and on their phones - and of course tell it to remember the 
password, so they basically never have to enter it.)


--

Best regards,

Charles


Re: [Dovecot] Problem with huge IMAP Archive after Courier migration

2012-01-03 Thread Gedalya

On 01/03/2012 05:48 PM, Dennis Guhl wrote:

On Tue, Jan 03, 2012 at 11:55:27AM -0600, Stan Hoeppner wrote:

[..]


I would suggest you thoroughly remove the Wheezy 2.0.15 package and

Not to use the Wheezy package might be wise.


install the 1.2.15-7 STABLE package.  Read the documentation for 1.2.x

Alternatively you could use Stephan Bosch's repository:

deb http://xi.rename-it.nl/debian/ stable-auto/dovecot-2.0 main

Despite the warning at
http://wiki2.dovecot.org/PrebuiltBinaries#Automatically_Built_Packages
they work very stably.


and configure it properly.  Then things will likely work as they should.

Dennis

See http://www.prato.linux.it/~mnencia/debian/dovecot-squeeze/
and http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=592959

I have the packages from this repository running in production on a 
squeeze system, working fine.




Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread WJCarpenter

On 1/3/2012 2:38 PM, Simon Brereton wrote:

http://xkcd.com/936/


As the saying goes, entropy ain't what it used to be.

https://www.grc.com/haystack.htm

However, both links actually illustrate the same point: once you get 
past dictionary attacks, the length of the password is the dominant 
factor in the strength of the password against brute force attacks.




Re: [Dovecot] Problem with huge IMAP Archive after Courier migration

2012-01-03 Thread Dennis Guhl
On Tue, Jan 03, 2012 at 11:55:27AM -0600, Stan Hoeppner wrote:

[..]

> I would suggest you thoroughly remove the Wheezy 2.0.15 package and

Not to use the Wheezy package might be wise.

> install the 1.2.15-7 STABLE package.  Read the documentation for 1.2.x

Alternatively you could use Stephan Bosch's repository:

deb http://xi.rename-it.nl/debian/ stable-auto/dovecot-2.0 main

Despite the warning at
http://wiki2.dovecot.org/PrebuiltBinaries#Automatically_Built_Packages
they work very stably.

> and configure it properly.  Then things will likely work as they should.

Dennis


Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread Simon Brereton
On 3 January 2012 17:30, Charles Marcus  wrote:
> On 2012-01-03 5:10 PM, WJCarpenter  wrote:
>>
>> In his description, he uses the example of passwords which are
>> "lowercase, alphanumeric, and 6 characters long" (and in another place
>> the example is "lowercase, alphabetic passwords which are ≤7
>> characters", I guess to illustrate that things have gotten faster).  If
>> you are allowing your users to create such weak passwords, using bcrypt
>> will not save you/them.  Attackers will just be wasting more of your CPU
>> time making attempts.  If they get a copy of your hashed passwords,
>> they'll likely be wasting their own CPU time, but they have plenty of
>> that, too.
>
>
> I require strong passwords of 15 characters in length. What's more, they are
> assigned (by me), and the user cannot change them. But he isn't talking about
> brute force attacks against the server. He is talking about if someone
> gained access to the SQL database where the passwords are stored (as has
> happened countless times in the last few years), and then had the luxury of
> brute forcing an attack locally (on their own systems) against your password
> database.

24+ would be better..

http://xkcd.com/936/

Simon


Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread David Ford

On 01/03/2012 05:30 PM, Charles Marcus wrote:
> On 2012-01-03 5:10 PM, WJCarpenter  wrote:
>> In his description, he uses the example of passwords which are
>> "lowercase, alphanumeric, and 6 characters long" (and in another place
>> the example is "lowercase, alphabetic passwords which are ≤7
>> characters", I guess to illustrate that things have gotten faster).  If
>> you are allowing your users to create such weak passwords, using bcrypt
>> will not save you/them.  Attackers will just be wasting more of your CPU
>> time making attempts.  If they get a copy of your hashed passwords,
>> they'll likely be wasting their own CPU time, but they have plenty of
>> that, too.
>
> I require strong passwords of 15 characters in length. What's more,
> they are assigned (by me), and the user cannot change them. But he
> isn't talking about brute force attacks against the server. He is
> talking about if someone gained access to the SQL database where the
> passwords are stored (as has happened countless times in the last few
> years), and then had the luxury of brute forcing an attack locally (on
> their own systems) against your password database.

when it comes to brute force, passwords like "51k$jh#21hiaj2" are
significantly weaker than "wePut85umbrellasIn2shoes".  they are also
considerably more difficult for humans, which makes them far more likely
to write it on a sticky and keep it handily available.


Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread Charles Marcus

On 2012-01-03 5:10 PM, WJCarpenter  wrote:

In his description, he uses the example of passwords which are
"lowercase, alphanumeric, and 6 characters long" (and in another place
the example is "lowercase, alphabetic passwords which are ≤7
characters", I guess to illustrate that things have gotten faster).  If
you are allowing your users to create such weak passwords, using bcrypt
will not save you/them.  Attackers will just be wasting more of your CPU
time making attempts.  If they get a copy of your hashed passwords,
they'll likely be wasting their own CPU time, but they have plenty of
that, too.


I require strong passwords of 15 characters in length. What's more, they 
are assigned (by me), and the user cannot change them. But he isn't 
talking about brute force attacks against the server. He is talking 
about if someone gained access to the SQL database where the passwords 
are stored (as has happened countless times in the last few years), and 
then had the luxury of brute forcing an attack locally (on their own 
systems) against your password database.


--

Best regards,

Charles


Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread Charles Marcus

On 2012-01-03 4:03 PM, David Ford  wrote:

md5 is deprecated, *nix has used sha1 for a while now


That link lumps sha1 in with MD5 and others:

"Why Not {MD5, SHA1, SHA256, SHA512, SHA-3, etc}?"

--

Best regards,

Charles


Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread WJCarpenter



Was just perusing this article about how trivial it is to decrypt
passwords that are stored using most (standard) encryption methods (like
MD5), and was wondering - is it possible to use bcrypt with
dovecot+postfix+mysql (or postgres)?


Ooop... forgot the link:

http://codahale.com/how-to-safely-store-a-password/


AFAIK, that web page is correct in a relative sense, but getting bcrypt 
support might not be the most urgent priority.


In his description, he uses the example of passwords which are 
"lowercase, alphanumeric, and 6 characters long" (and in another place 
the example is "lowercase, alphabetic passwords which are ≤7 
characters", I guess to illustrate that things have gotten faster).  If 
you are allowing your users to create such weak passwords, using bcrypt 
will not save you/them.  Attackers will just be wasting more of your CPU 
time making attempts.  If they get a copy of your hashed passwords, 
they'll likely be wasting their own CPU time, but they have plenty of 
that, too.


There are plenty of recommendations for what makes a good password / 
passphrase.  If you are not already enforcing such rules (perhaps also 
with a lookaside to one or more of the leaked tables of passwords 
floating around), then IMHO that's much more urgent.  (One of the best 
twists I read somewhere [sorry, I forget where] was to require at least 
one uppercase and one digit, but to not count them as fulfilling the 
requirement if they were used as the first or last character.)


Side note, but for the sake of precision ... attackers are not literally 
decrypting passwords.  They are guessing passwords and then performing a 
one-way hash to see if they guessed correctly.  As a practical matter, 
that means that you have to ask your users to update their passwords any 
time you change the password storage scheme.  (I don't know enough about 
bcrypt to know if that would be required if you wanted to simply 
increase the work factor.)
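
To illustrate the guess-and-hash loop concretely, a sketch (the
stored hash below is a placeholder, not real output):

  # recompute the hash from a guess and compare; doveadm reports whether it matches
  doveadm pw -t '{SHA512-CRYPT}$6$examplesalt$examplehash' -p 'password-guess'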





Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread David Ford
md5 is deprecated, *nix has used sha1 for a while now


Re: [Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread Charles Marcus

On 2012-01-03 3:40 PM, Charles Marcus  wrote:

Hi everyone,

Was just perusing this article about how trivial it is to decrypt
passwords that are stored using most (standard) encryption methods (like
MD5), and was wondering - is it possible to use bcrypt with
dovecot+postfix+mysql (or postgres)?


Ooop... forgot the link:

http://codahale.com/how-to-safely-store-a-password/

But after perusing the wiki:

http://wiki2.dovecot.org/Authentication/PasswordSchemes

it appears not?

Timo - any chance for adding support for it? Or is that web page incorrect?

--

Best regards,

Charles


[Dovecot] Storing passwords encrypted... bcrypt?

2012-01-03 Thread Charles Marcus

Hi everyone,

Was just perusing this article about how trivial it is to decrypt 
passwords that are stored using most (standard) encryption methods (like 
MD5), and was wondering - is it possible to use bcrypt with 
dovecot+postfix+mysql (or postgres)?


--

Best regards,

Charles


Re: [Dovecot] Maildir migration and uids

2012-01-03 Thread David Jonas
On 12/29/11 5:35 AM, Timo Sirainen wrote:
> On 22.12.2011, at 3.52, David Jonas wrote:
> 
>> I'm in the process of migrating a large number of maildirs to a 3rd
>> party dovecot server (from a dovecot server). Tests have shown that
>> using imap to sync the accounts doesn't preserve the uidl for pop3 access.
>>
>> My current attempt is to convert the maildir to mbox and add an X-UIDL
>> header in the process. Run a second dovecot that serves the converted
>> mbox. But dovecot's docs say, "None of these headers are sent to
>> IMAP/POP3 clients when they read the mail".
> 
> That's rather complex.

Thanks, Timo. Unfortunately I don't have shell access at the new dovecot
servers. They have a migration tool that doesn't keep the uids intact
when I sync via imap. Looks like I'm going to have to sync twice, once
with POP3 (which maintains uids) and once with imap skipping the inbox.
Ugh.

>> Is there any way to sync these maildirs to the new server and maintain
>> the uids?
> 
> What Dovecot versions? dsync could do this easily. You could simply install 
> the dsync binary even if you're using Dovecot v1.x.

Good idea with dsync though, I had forgotten about that. Perhaps they'll
do something custom for me.

> You could also log in with POP3 and get the UIDL list and write a script to 
> add them to dovecot-uidlist.
> 


Re: [Dovecot] Deliver all addresses to the same mdbox:?

2012-01-03 Thread Ralf Hildebrandt
* Timo Sirainen :
> On 3.1.2012, at 20.09, Ralf Hildebrandt wrote:
> 
> > For archiving purposes I'm delivering all addresses to the same mdbox:
> > like this:
> > 
> > passdb {
> >  driver = passwd-file
> >  args = username_format=%u /etc/dovecot/passwd
> > }
> > 
> > userdb {
> >  driver = static
> >  args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes
> > }
> 
> allow_all_users=yes is used only when the passdb is incapable of telling if 
> the user exists or not.

Ah, damn :|

> Fails because user doesn't exist in passwd-file, I guess.

Indeed.
 
> Maybe use passdb static? 

Right now I simply solved it by using + addressing like this:

Jan  3 19:42:49 mail postfix/lmtp[2728]: 3THkfd20f1zFvlF: 
to=,
relay=mail.charite.de[private/dovecot-lmtp], delay=0.01, delays=0.01/0/0/0, 
dsn=2.0.0, status=sent (250 2.0.0
 IHdDM9VLA0/aCwAAY73zkw Saved)

Call me lazy :)

> If you also need authentication to work, put passdb static in protocol
> lmtp {} and passdb passwd-file in protocol !lmtp {}

Ah yes, good idea.

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [Dovecot] Deliver all addresses to the same mdbox:?

2012-01-03 Thread Timo Sirainen
On 3.1.2012, at 20.09, Ralf Hildebrandt wrote:

> For archiving purposes I'm delivering all addresses to the same mdbox:
> like this:
> 
> passdb {
>  driver = passwd-file
>  args = username_format=%u /etc/dovecot/passwd
> }
> 
> userdb {
>  driver = static
>  args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes
> }

allow_all_users=yes is used only when the passdb is incapable of telling if the 
user exists or not.

> Yet I'm getting this:
> 
> Jan  3 19:03:27 mail postfix/lmtp[29378]: 3THjg02wfWzFvmL: 
> to=,
> relay=mail.charite.de[private/dovecot-lmtp], conn_use=20, delay=323, 
> delays=323/0/0/0, dsn=4.1.1, status=SOFTBOUNCE (host
> mail.charite.de[private/dovecot-lmtp] said: 550 5.1.1 
> <"firstname.lastn...@charite.de"@backup.invalid> User doesn't exist: 
> "firstname.lastn...@charite.de"@backup.invalid (in reply to RCPT TO
> command))

Fails because user doesn't exist in passwd-file, I guess.

Maybe use passdb static? If you also need authentication to work, put passdb 
static in protocol lmtp {} and passdb passwd-file in protocol !lmtp {}
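
A minimal sketch of that (untested; nopassword=y is one way to let the
static passdb accept LMTP deliveries without a password check):

protocol lmtp {
  passdb {
    driver = static
    args = nopassword=y
  }
}
protocol !lmtp {
  passdb {
    driver = passwd-file
    args = username_format=%u /etc/dovecot/passwd
  }
}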

[Dovecot] Deliver all addresses to the same mdbox:?

2012-01-03 Thread Ralf Hildebrandt
For archiving purposes I'm delivering all addresses to the same mdbox:
like this:

passdb {
  driver = passwd-file
  args = username_format=%u /etc/dovecot/passwd
}

userdb {
  driver = static
  args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes
}

Yet I'm getting this:

Jan  3 19:03:27 mail postfix/lmtp[29378]: 3THjg02wfWzFvmL: 
to=,
relay=mail.charite.de[private/dovecot-lmtp], conn_use=20, delay=323, 
delays=323/0/0/0, dsn=4.1.1, status=SOFTBOUNCE (host
mail.charite.de[private/dovecot-lmtp] said: 550 5.1.1 
<"firstname.lastn...@charite.de"@backup.invalid> User doesn't exist: 
"firstname.lastn...@charite.de"@backup.invalid (in reply to RCPT TO
command))
(using soft_bounce = yes here in Postfix)

In short: backup.invalid is delivered to dovecot by means of LMTP
(local socket). I thought my static mapping in userdb would enable the
lmtp listener to accept ALL recipients and map their $home to
/home/copymail - why is that not working?

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [Dovecot] Problem with huge IMAP Archive after Courier migration

2012-01-03 Thread Stan Hoeppner
On 1/3/2012 10:47 AM, Preacher wrote:
> Actually I took a look inside the folders right after starting up and
> waited for two hours to let Dovecot work.

So two hours after clicking on an IMAP folder the contents of that
folder were still not displayed correctly?

> Saving the whole Maildir into a tar on the same partition also took only
> 2 hours before.

This doesn't have any relevance.

> But nothing did change and when looking at activities with top, the
> server was idle, dovecot not indexing.
> Also I wasn't able to drag new messages to the folder hierachy.

Then something is seriously wrong.  The fact that you "forced" the
Wheezy Dovecot package into a Squeeze system may have something, if not
everything, to do with your problem (somehow I missed this fact in your
previous email).

Debian testing/sid packages are compiled against newer system libraries.
 If you check various logs you'll likely see problems related to this.
This is also why the Backports project exists--TESTING packages are
compiled against the STABLE libraries so newer application revs can be
used on the STABLE distribution.  Currently there is no Dovecot 2.x
backport for Squeeze.

I would suggest you thoroughly remove the Wheezy 2.0.15 package and
install the 1.2.15-7 STABLE package.  Read the documentation for 1.2.x
and configure it properly.  Then things will likely work as they should.

-- 
Stan


Re: [Dovecot] Multiple Maildirs per Virtual User

2012-01-03 Thread Timo Sirainen
On 3.1.2012, at 19.33, Ruslan Nabioullin wrote:

> I changed /etc/dovecot/passwd to:
> my-username_account1:{PLAIN}password:my-username:my-groupuserdb_mail=maildir:/home/my-username/mail/account1:LAYOUT=fs
> 
> Dovecot creates {tmp,new,cur} dirs within account1 (the root),
> apparently not recognizing the maildirs within the root (e.g.,
> /home/my-username/mail/account1/INBOX/{tmp,new,cur}).

Your client probably only shows subscribed folders, and none are subscribed.



Re: [Dovecot] Multiple Maildirs per Virtual User

2012-01-03 Thread Ruslan Nabioullin
On 01/03/2012 06:52 AM, Timo Sirainen wrote:
> On Sun, 2012-01-01 at 20:32 -0500, Ruslan Nabioullin wrote:
>> How would it be possible to configure dovecot (2.0.16) in such a way
>> that it would serve several maildirs (e.g., INBOX, INBOX.Drafts,
>> INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user?
>>
>> I am only able to specify a single maildir, but I want all maildirs in
>> /home/my-username/mail/account1/ to be served.
> 
> Sounds like you want LAYOUT=fs rather than the default LAYOUT=maildir++.
> http://wiki2.dovecot.org/MailboxFormat/Maildir#Directory_Structure
> 
> 

I changed /etc/dovecot/passwd to:
my-username_account1:{PLAIN}password:my-username:my-groupuserdb_mail=maildir:/home/my-username/mail/account1:LAYOUT=fs

Dovecot creates {tmp,new,cur} dirs within account1 (the root),
apparently not recognizing the maildirs within the root (e.g.,
/home/my-username/mail/account1/INBOX/{tmp,new,cur}).

-- 
Ruslan Nabioullin
rnabioul...@gmail.com





Re: [Dovecot] Problem with huge IMAP Archive after Courier migration

2012-01-03 Thread Preacher
Yes I did, I followed the guide you mentioned; it said that it found the 
3 mailboxes I have set up in total, and the conversion took only a few moments.
I guess the mail location was automatically set correctly, as the folder 
hierarchy was displayed correctly.


Timo Sirainen schrieb:

On Mon, 2012-01-02 at 17:17 +0100, Preacher wrote:

So I forced the install of the Debian 7.0 packages with 2.0.15 and finally
got the server running, I also restarted the whole machine to empty caches.
But the problem I got was that in the huge folder hierarchy the
downloaded headers in the individual folders disappeared, some folders
showed a few very old messages, some none. Also some subfolders disappeared.
I checked this with Outlook and Thunderbird. The difference was, that
Thunderbird shows more messages (but not all) than Outlook in some
folders, but also none in some others. Outlook brought up a message in
some cases, that the connection timed out, although I set the timeout to
60s.

Did you run the Courier migration script?
http://wiki2.dovecot.org/Migration/Courier

Also explicitly setting mail_location would be a good idea.





Re: [Dovecot] Problem with huge IMAP Archive after Courier migration

2012-01-03 Thread Preacher
Actually I took a look inside the folders right after starting up and 
waited for two hours to let Dovecot work.
Saving the whole Maildir into a tar on the same partition also took only 
2 hours before.
But nothing did change and when looking at activities with top, the 
server was idle, dovecot not indexing.

Also I wasn't able to drag new messages to the folder hierarchy.

With courier it takes no more than 5 seconds to download the headers in 
a folder containing more than 3,000 messages.


Stan Hoeppner schrieb:

On 1/2/2012 10:17 AM, Preacher wrote:
...

So I forced the install of the Debian 7.0 packages with 2.0.15 and finally
got the server running, I also restarted the whole machine to empty caches.
But the problem I got was that in the huge folder hierarchy the
downloaded headers in the individual folders disappeared, some folders
showed a few very old messages, some none. Also some subfolders
disappeared.
I checked this with Outlook and Thunderbird. The difference was, that
Thunderbird shows more messages (but not all) than Outlook in some
folders, but also none in some others. Outlook brought up a message in
some cases, that the connection timed out, although I set the timeout to
60s.

...

Anyone a clue what's wrong here?

Absolutely.  What's wrong is a lack of planning, self-education, and
patience on the part of the admin.

Dovecot gets its speed from its indexes.  How long do you think it takes
Dovecot to index 37GB of maildir messages, many thousands per directory,
hundreds of directories, millions of files total?  Until those indexes
are built you will not see a complete folder tree and all kinds of stuff
will be missing.

For your education:  Dovecot indexes every message and these indexes are
the key to its speed.  Normally indexing occurs during delivery when
using deliver or lmtp, so the index updates are small and incremental,
keeping performance high.  You tried to do this and expected Dovecot to
instantly process it all:

http://www.youtube.com/watch?v=THVz5aweqYU

If you don't know, that's a coal train car being dumped.  100 tons of
coal in a few seconds.  Visuals are always good teaching tools.  I think
this drives the point home rather well.



Re: [Dovecot] What is normal CPU usage of dovecot imap?

2012-01-03 Thread Timo Sirainen
On 3.1.2012, at 17.38, Mikko Lampikoski wrote:

> On 3.1.2012, at 17.12, Timo Sirainen wrote:
> 
>> On 3.1.2012, at 16.54, Mikko Lampikoski wrote:
>> 
>>> I got a Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailboxes and almost 1 
>>> dovecot login / second (peak time).
>>> Server stats says that load is continually over 2 and cpu usage is 60%. top 
>>> says that imap is making this load.
>> 
>> You mean an actual "imap" process? Or more than one imap processes? Or 
>> something else, e.g. "imap-login" process? If there's one long running IMAP 
>> process eating CPU, it might have simply gone to an infinite loop, and 
>> upgrading could help.
> 
> It is an "imap" process, and the process takes CPU for 10-30 seconds, and then the PID 
> changes to another imap process (the process also takes 10% of memory = 150MB).
> Restarting dovecot does not help.

Is the IMAP process always for the same user (or the same few users)? 
verbose_proctitle=yes shows the username in ps output.

> If someone has lots of mails in a mailbox, can it have an effect like this?

Possibly. maildir_very_dirty_syncs=yes is helpful with huge maildirs (I don't 
remember if it exists in v1.1 yet).
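
A sketch of that check (setting name as above; assuming it behaves the
same way in v1.1):

  # in dovecot.conf:
  verbose_proctitle = yes

  # then the username shows up in the process list:
  ps auxw | grep '[i]map'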



Re: [Dovecot] What is normal CPU usage of dovecot imap?

2012-01-03 Thread Mikko Lampikoski
On 3.1.2012, at 17.12, Timo Sirainen wrote:

> On 3.1.2012, at 16.54, Mikko Lampikoski wrote:
> 
>> I got a Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailboxes and almost 1 
>> dovecot login / second (peak time).
>> Server stats says that load is continually over 2 and cpu usage is 60%. top 
>> says that imap is making this load.
> 
> You mean an actual "imap" process? Or more than one imap processes? Or 
> something else, e.g. "imap-login" process? If there's one long running IMAP 
> process eating CPU, it might have simply gone to an infinite loop, and 
> upgrading could help.

It is an "imap" process, and the process takes CPU for 10-30 seconds, and then the PID 
changes to another imap process (the process also takes 10% of memory = 150MB).
Restarting dovecot does not help.

>> virtual users are in mysql database and mysqld is running on another server 
>> (this server is ok).
>> Do I need better CPU or is there something going on that I do not understand?
> 
> Your CPU usage should probably be closer to 0%.

I think so too, but I ran out of good ideas. If someone has lots of mails in 
a mailbox, can it have an effect like this?

>> login_process_size: 128
>> login_processes_count: 10
>> login_max_processes_count: 2048
> 
> Switching to http://wiki2.dovecot.org/LoginProcess#High-performance_mode may 
> be helpful.

This loses much of the security benefits, no thanks.

>> mail_nfs_storage: yes
> 
> Do you have more than one Dovecot server? This setting doesn't work 
> reliably anyway. If you've only one server accessing mails, you can set this to "no".

Trying this too, but I think it's not going to help...




Re: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)

2012-01-03 Thread Stan Hoeppner
On 1/3/2012 2:14 AM, Jan-Frode Myklebust wrote:
> On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote:
>> Nice setup.  I've mentioned GPFS for cluster use on this list before,
>> but I think you're the only operator to confirm using it.  I'm sure
>> others would be interested in hearing of your first hand experience:
>> pros, cons, performance, etc.  And a ball park figure on the licensing
>> costs, whether one can only use GPFS on IBM storage or if storage from
>> others vendors is allowed in the GPFS pool.
> 
> I used to work for IBM, so I've been a bit uneasy about pushing GPFS too
> hard publicly, at the risk of being accused of bias. But I changed jobs in
> November, so now I'm only a satisfied customer :-)

Fascinating.  And good timing. :)

> Pros:
>   Extremely simple to configure and manage. Assuming root on all
>   nodes can ssh freely, and port 1191/tcp is open between the
>   nodes, these are the commands to create the cluster, create an
>   NSD (network shared disk), and create a filesystem:
> 
>   # echo hostname1:manager-quorum > NodeFile    # "manager" means this node can be selected as filesystem manager
>   # echo hostname2:manager-quorum >> NodeFile   # "quorum" means this node has a vote in the quorum selection
>   # echo hostname3:manager-quorum >> NodeFile   # all my nodes are usually the same, so they all have the same roles
>   # mmcrcluster -n NodeFile -p $(hostname) -A
> 
>   ### sdb1 is either a local disk on hostname1 (in which case the other
>   ### nodes will access it over tcp via hostname1), or a SAN disk that
>   ### they can access directly over FC/iSCSI.
>   # echo sdb1:hostname1::dataAndMetadata:: > DescFile   # this disk can be used for both data and metadata
>   # mmcrnsd -F DescFile
> 
>   # mmstartup -A   # starts GPFS services on all nodes
>   # mmcrfs /gpfs1 gpfs1 -F DescFile
>   # mount /gpfs1
> 
>   You can add and remove disks from the filesystem, and change most
>   settings without downtime. You can scale out your workload by adding
>   more nodes (SAN attached or not), and scale out your disk performance
>   by adding more disks on the fly. (IBM uses GPFS to create scale-out
>   NAS solutions, http://www-03.ibm.com/systems/storage/network/sonas/ ,
>   which highlights a few of the features available with GPFS.)
> 
>   There's no problem running GPFS on other vendors' disk systems. I've
>   used Nexsan SATAboy earlier, for a HPC cluster. One can easily move
>   from one disksystem to another without downtime.

That's good to know.  The only FC SAN arrays I've installed/used are IBM
FAStT 600 and Nexsan SataBlade/Boy.  I much prefer the web management
interface on the Nexsan units, much more intuitive, more flexible.  The
FAStT is obviously much more suitable to random IOPS workloads with its
15k FC disks vs 7.2k SATA disks in the Nexsan units (although Nexsan has
offered 15k SAS disks and SSDs for a while now).

> Cons:
>   It has its own page cache, statically configured. So you don't get the
>   "all available memory used for page caching" behaviour as you normally
>   do on linux.

Yep, that's ugly.

>   There is a kernel module that needs to be rebuilt on every
>   upgrade. It's a simple process, but it needs to be done and means we
>   can't just run "yum update ; reboot" to upgrade.
> 
>   % export SHARKCLONEROOT=/usr/lpp/mmfs/src
>   % cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr
>   % vi /usr/lpp/mmfs/src/config/site.mcr   # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION
>   % cd /usr/lpp/mmfs/src/ ; make clean ; make World
>   % su - root
>   # export SHARKCLONEROOT=/usr/lpp/mmfs/src
>   # cd /usr/lpp/mmfs/src/ ; make InstallImages

So is this, but it's totally expected since this is proprietary code and
not in mainline.

>> To this point IIRC everyone here doing clusters is using NFS, GFS, or
>> OCFS.  Each has its downsides, mostly because everyone is using maildir.
>>  NFS has locking issues with shared dovecot index files.  GFS and OCFS
>> have filesystem metadata performance issues.  How does GPFS perform with
>> your maildir workload?
> 
> Maildir is likely a worst case type workload for filesystems. Millions
> of tiny-tiny files, making all IO random, and getting minimal controller
> read cache utilized (unless you can cache all active files). So I've

Yep.  Which is the reason I've stuck with mbox everywhere I can over the
years, minor warts and all, and will be moving to mdbox at some point.
IMHO maildir solved one set of problems but created a bigger problem.
Many sites hailed maildir as a savior in many ways, then decried it as
their user base and IO demands exceeded the

Re: [Dovecot] What is normal CPU usage of dovecot imap?

2012-01-03 Thread Timo Sirainen
On 3.1.2012, at 16.54, Mikko Lampikoski wrote:

> I got a Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailboxes and almost 1 
> dovecot login / second (peak time).
> Server stats says that load is continually over 2 and cpu usage is 60%. top 
> says that imap is making this load.

You mean an actual "imap" process? Or more than one imap processes? Or 
something else, e.g. "imap-login" process? If there's one long running IMAP 
process eating CPU, it might have simply gone to an infinite loop, and 
upgrading could help.

> virtual users are in mysql database and mysqld is running on another server 
> (this server is ok).
> 
> Do I need better CPU or is there something going on that I do not understand?

Your CPU usage should probably be closer to 0%.

> login_process_size: 128
> login_processes_count: 10
> login_max_processes_count: 2048

Switching to http://wiki2.dovecot.org/LoginProcess#High-performance_mode may be 
helpful.

> mail_nfs_storage: yes

Do you have more than one Dovecot server? This setting doesn't work 
reliably anyway. If you've only one server accessing mails, you can set this to "no".




Re: [Dovecot] Dsync fails on second sync for folders with dot in the name

2012-01-03 Thread Timo Sirainen
On 3.1.2012, at 14.54, Jan-Frode Myklebust wrote:

> But isn't it a bug that users are allowed to create folders named .a.b,

The folder name is "a.b"; it just exists in the filesystem with Maildir++ as ".a.b".

> or that dovecot creates this as a folder named .a.b instead of .a/.b
> when the separator is "." ?

The separator is the IMAP separator, not the filesystem separator.



[Dovecot] What is normal CPU usage of dovecot imap?

2012-01-03 Thread Mikko Lampikoski
I got a Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailboxes and almost 1 dovecot 
login / second (peak time).
Server stats says that load is continually over 2 and cpu usage is 60%. top 
says that imap is making this load.
virtual users are in mysql database and mysqld is running on another server 
(this server is ok).

Do I need better CPU or is there something going on that I do not understand?

# dovecot -n

# 1.1.11: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-4-pve i686 Ubuntu 9.10 nfs
log_timestamp: %Y-%m-%d %H:%M:%S 
protocols: imap imaps pop3 pop3s
listen: *, [::]
ssl_ca_file: /etc/ssl/**.crt
ssl_cert_file: /etc/ssl/**.crt
ssl_key_file: /etc/ssl/**.key
ssl_key_password: **
disable_plaintext_auth: no
verbose_ssl: yes
shutdown_clients: no
login_dir: /var/run/dovecot/login
login_executable(default): /usr/lib/dovecot/imap-login
login_executable(imap): /usr/lib/dovecot/imap-login
login_executable(pop3): /usr/lib/dovecot/pop3-login
login_greeting_capability(default): yes
login_greeting_capability(imap): yes
login_greeting_capability(pop3): no
login_process_size: 128
login_processes_count: 10
login_max_processes_count: 2048
mail_max_userip_connections(default): 10
mail_max_userip_connections(imap): 10
mail_max_userip_connections(pop3): 3
first_valid_uid: 99
last_valid_uid: 100
mail_privileged_group: mail
mail_location: maildir:/var/vmail/%d/%n:INDEX=/var/indexes/%d/%n
fsync_disable: yes
mail_nfs_storage: yes
mbox_write_locks: fcntl
mbox_min_index_size: 4
mail_executable(default): /usr/lib/dovecot/imap
mail_executable(imap): /usr/lib/dovecot/imap
mail_executable(pop3): /usr/lib/dovecot/pop3
mail_process_size: 2048
mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
imap_client_workarounds(default): outlook-idle
imap_client_workarounds(imap): outlook-idle
imap_client_workarounds(pop3): 
pop3_client_workarounds(default): 
pop3_client_workarounds(imap): 
pop3_client_workarounds(pop3): outlook-no-nuls
auth default:
  mechanisms: plain login cram-md5
  cache_size: 1024
  passdb:
driver: sql
args: /etc/dovecot/dovecot-sql.conf
  userdb:
driver: static
args: uid=99 gid=99 home=/var/vmail/%d/%n allow_all_users=yes
  socket:
type: listen
client:
  path: /var/spool/postfix/private/auth-client
  mode: 432
  user: postfix
  group: postfix
master:
  path: /var/run/dovecot/auth-master
  mode: 384
  user: vmail







Re: [Dovecot] Dsync fails on second sync for folders with dot in the name

2012-01-03 Thread Jan-Frode Myklebust
On Tue, Jan 03, 2012 at 02:34:59PM +0200, Timo Sirainen wrote:
> On Tue, 2012-01-03 at 13:12 +0100, Jan-Frode Myklebust wrote:
> > dsync-remote(janfr...@tanso.net): Error: Can't delete mailbox directory 
> > INBOX.a: Mailbox has children, delete them first
> 
> Oh, this happens only with dsync backup, and only with Maildir++ -> FS
> layout change. You can simply ignore this error, or patch with
> http://hg.dovecot.org/dovecot-2.0/rev/69c6d7436f7f that hides it.

Oh, it was so quick to fail that I didn't realize it had successfully
updated the remote mailboxes :-) Thanks!

But isn't it a bug that users are allowed to create folders named .a.b,
or that dovecot creates this as a folder named .a.b instead of .a/.b
when the separator is "." ?


  -jf


Re: [Dovecot] lmtp-postlogin ?

2012-01-03 Thread Timo Sirainen
On Fri, 2011-12-30 at 14:08 +0100, Jan-Frode Myklebust wrote:
> > Maybe create a new plugin for this using notify plugin.
> 
> Is there any documentation for this plugin? I've tried searching both
> this list, and the wiki's.

Nope. You could look at mail-log and
http://dovecot.org/patches/2.0/touch-plugin.c and write something based
on them.




Re: [Dovecot] Dsync fails on second sync for folders with dot in the name

2012-01-03 Thread Timo Sirainen
On Tue, 2012-01-03 at 13:12 +0100, Jan-Frode Myklebust wrote:
>   dsync-remote(janfr...@tanso.net): Error: Can't delete mailbox directory 
> INBOX.a: Mailbox has children, delete them first

Oh, this happens only with dsync backup, and only with Maildir++ -> FS
layout change. You can simply ignore this error, or patch with
http://hg.dovecot.org/dovecot-2.0/rev/69c6d7436f7f that hides it.




Re: [Dovecot] Newbie: LDA Isn't Logging

2012-01-03 Thread Timo Sirainen
On Mon, 2012-01-02 at 22:48 -0800, Michael Papet wrote:
> Hi,
> 
> I'm a newbie having some trouble getting deliver to log anything.  Related to 
> this, there are no return values unless the -d is missing.  I'm using LDAP to 
> store virtual domain and user account information.
> 
> Test #1: /usr/lib/dovecot/deliver -e -f mpa...@yahoo.com -d 
> z...@mailswansong.dom < bad.mail
> Expected result: supposed to fail, there's no zed account via ldap lookup and 
> supposed to get a return code per the wiki at http://wiki2.dovecot.org/LDA.  
> Supposed to log too.
> Actual result: nothing gets delivered, no return code, nothing is logged.

As in return code is 0? Something's definitely wrong there then.

First check that deliver at least reads the config file. Add something
broken in there, such as: "foo=bar" at the beginning of dovecot.conf.
Does deliver fail now?

Also running deliver via strace could show something useful.
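
A sketch of those checks (paths and addresses as used earlier in this
thread; adjust to your install):

  # 1) break the config on purpose -- deliver should now complain at startup
  echo 'foo=bar' >> /etc/dovecot/dovecot.conf

  # 2) check the exit code explicitly
  /usr/lib/dovecot/deliver -e -f mpa...@yahoo.com -d z...@mailswansong.dom < bad.mail
  echo "exit code: $?"

  # 3) trace what deliver actually does (does it open the config? the log?)
  strace -f -o /tmp/deliver.trace /usr/lib/dovecot/deliver -d z...@mailswansong.dom < bad.mail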




Re: [Dovecot] Dsync fails on second sync for folders with dot in the name

2012-01-03 Thread Jan-Frode Myklebust
On Tue, Jan 03, 2012 at 02:00:08PM +0200, Timo Sirainen wrote:
> 
> So here on source you have namespace separator '.' and in destination
> you have separator '/'? Maybe that's the problem? Try with both having
> '.' separator.

I added this namespace on the destination:

namespace {
  inbox = yes
  location = 
  prefix = INBOX.
  separator = .
  type = private
}

and am getting the same error:

dsync-remote(janfr...@tanso.net): Error: Can't delete mailbox directory 
INBOX.a: Mailbox has children, delete them first

This was with a freshly created .a.b folder on source. With no messages
in .a.b and also no plain .a folder on source:

$ find /usr/local/atmail/users/j/a/janfr...@tanso.net/.a*
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b/maildirfolder
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b/cur
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b/new
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b/tmp
/usr/local/atmail/users/j/a/janfr...@tanso.net/.a.b/dovecot-uidlist


  -jf


Re: [Dovecot] Dsync fails on second sync for folders with dot in the name

2012-01-03 Thread Timo Sirainen
On Sun, 2012-01-01 at 20:59 +0100, Jan-Frode Myklebust wrote:
> I'm in the processes of running our first dsync backup of all users
> (from maildir to mdbox on remote server), and one problem I'm hitting 
>  that dsync will work fine on first run for some users, and then
> reliably fail whenever I try a new run:
> 
>   $ sudo dsync -u janfr...@example.net backup ssh -q 
> mailbac...@repo1.example.net dsync -u janfr...@example.net
>   $ sudo dsync -u janfr...@example.net backup ssh -q 
> mailbac...@repo1.example.net dsync -u janfr...@example.net
>   dsync-remote(janfr...@example.net): Error: Can't delete mailbox 
> directory INBOX/a: Mailbox has children, delete them first
> 
> The problem here seems to be that this user has a maildir named
> ".a.b". On the backup side I see this as "a/b/".
> 
> So dsync doesn't quite seem to agree with itself for how to handle
> folders with dot in the name.

So here on source you have namespace separator '.' and in destination
you have separator '/'? Maybe that's the problem? Try with both having
'.' separator.




Re: [Dovecot] dsync / separator / namespace config-problem

2012-01-03 Thread Timo Sirainen
On Thu, 2011-12-29 at 21:03 +0100, Jan-Frode Myklebust wrote:
> On Thu, Dec 29, 2011 at 03:49:57PM +0200, Timo Sirainen wrote:
> > >> 
> > >> With mdbox the internal separator is '/', but it's not valid to have 
> > >> "INBOX." prefix then (it should be "INBOX/").
> > > 
> > > But how should this be handled in the migration phase from maildir to
> > > mdbox then? Can we have different namespaces for users with maildirs vs.
> > > mdboxes? (..or am i misunderstanding something?)
> > 
> > You'll most likely want to keep the '.' separator with mdbox, at
> least initially. Some clients don't like if the separator changes.
> Perhaps in future if you want to allow users to use '.' character in
> mailbox names you could change it, or possibly make it a per-user
> setting.
> > 
> 
> Sorry for being so dense, but I don't quite get it still. Do you suggest
> dropping the trailing dot from prefix=INBOX. ? I.e.
> 
>   namespace {
>   inbox = yes
>   location = 
>   prefix = INBOX
>   type = private
>   separator = .
>   }
> 
> when we do the migration to mdbox? And this should work without issues
> for both current maildir users, and mdbox users ?

With that setup you can't even start up Dovecot. The prefix must end
with the separator. So initially just do it like above, but with
"prefix=INBOX."

> Ideally I don't want to use the . as a separator, since it's causing
> problems for our users who expect to be able to use them in folder
> names. But I don't understand if I can change them without causing
> problems to existing users.. or how these problems will appear to the
> users.

It's going to be problematic to change the separator for existing users.
Clients can become confused.



Re: [Dovecot] Multiple Maildirs per Virtual User

2012-01-03 Thread Timo Sirainen
On Sun, 2012-01-01 at 20:32 -0500, Ruslan Nabioullin wrote:
> How would it be possible to configure dovecot (2.0.16) in such a way
> that it would serve several maildirs (e.g., INBOX, INBOX.Drafts,
> INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user?
> 
> I am only able to specify a single maildir, but I want all maildirs in
> /home/my-username/mail/account1/ to be served.

Sounds like you want LAYOUT=fs rather than the default LAYOUT=maildir++.
http://wiki2.dovecot.org/MailboxFormat/Maildir#Directory_Structure




Re: [Dovecot] Problem with huge IMAP Archive after Courier migration

2012-01-03 Thread Timo Sirainen
On Mon, 2012-01-02 at 17:17 +0100, Preacher wrote:
> So I forced the install of the Debian 7.0 packages with 2.0.15 and finally 
> got the server running, I also restarted the whole machine to empty caches.
> But the problem I got was that in the huge folder hierarchy the 
> downloaded headers in the individual folders disappeared, some folders 
> showed a few very old messages, some none. Also some subfolders disappeared.
> I checked this with Outlook and Thunderbird. The difference was, that 
> Thunderbird shows more messages (but not all) than Outlook in some 
> folders, but also none in some others. Outlook brought up a message in 
> some cases, that the connection timed out, although I set the timeout to 
> 60s.

Did you run the Courier migration script?
http://wiki2.dovecot.org/Migration/Courier

Also explicitly setting mail_location would be a good idea.
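
A sketch of both steps (script name from the wiki page above; the
maildir path is made up, and the script does a dry run unless you pass
--convert):

  # convert Courier's uidlists, subscriptions etc. in place
  perl courier-dovecot-migrate.pl --to-dovecot --convert --recursive /var/vmail

  # and in dovecot.conf, instead of relying on autodetection:
  mail_location = maildir:/var/vmail/%d/%n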




Re: [Dovecot] error bad file number with compressed mbox files

2012-01-03 Thread Timo Sirainen
On Mon, 2012-01-02 at 15:33 +0100, Jürgen Obermann wrote:

> can dsync convert from compressed mbox to compressed mdbox format?
> 
> When I use compressed mbox files, either with gzip or with bzip2, I can 
> read the mails as usual, but I find the following errors in dovecots log 
> file:
> 
> imap(userxy): Error: nfs_flush_fcntl: 
> fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number
> imap(userxy): Error: nfs_flush_fcntl: 
> fcntl(/home/hrz/userxy/Mail/mymbox.bz2, F_RDLCK) failed: Bad file number

This happens because of mail_nfs_* settings. You can either ignore those
errors, or disable the settings. Those settings are useful only if you
attempt to access the same mailbox from multiple servers at the same
time, which is randomly going to fail even with those settings, so they
aren't hugely useful.
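
That is, if only one server accesses the mails, something like:

  mail_nfs_storage = no
  mail_nfs_index = no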




Re: [Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD)

2012-01-03 Thread Timo Sirainen
On Mon, 2012-01-02 at 19:20 +0100, Ludek Finstrle wrote:

> Jan  2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full 
> (no auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured
..
> I fixed this problem by enlarging LOGIN_MAX_INBUF_SIZE. I also read about 
> wrong lower/uppercase,
> but it's definitely not my problem (I tried all possibilities of 
> lower/uppercase in the login).
> 
> I sniffed the plain communication and the "a AUTHENTICATE GSSAPI" line 
> has around 1873 chars.
> When I enlarged LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and 
> I'm now able to log in
> to dovecot using gssapi in the mutt client.

There was already code that allowed 16 kB SASL messages, but that didn't
work for the initial SASL response with the IMAP SASL-IR extension.

> I also use thunderbird (on windows with sspi) and it works ok with 
> LOGIN_MAX_INBUF_SIZE = 1024.

TB probably doesn't support SASL-IR.

> Does anybody have any idea why it's so large, or how to fix it another way? 
> It's terrible to
> patch each version of the dovecot rpm package. Or is there any possibility to 
> change the constant?
> I have no idea how much this would affect memory usage.
> 
> The simple patch I have to use is attached.

I increased it to 4 kB:
http://hg.dovecot.org/dovecot-2.0/rev/d06061408f6d



[Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD)

2012-01-03 Thread Ludek Finstrle
Hello,

  I faced a problem with samba (AD) + mutt (gssapi) + dovecot (imap). From 
the dovecot log:

Jan  2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full (no 
auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured

My situation:
CentOS 6.2
IMAP: dovecot --version: 2.0.9 (CentOS 6.2)
MUA: mutt 1.5.20 (CentOS 6.2)
Kerberos: samba4 4.0.0alpha17 as AD PDC

$ klist -e
Ticket cache: FILE:/tmp/krb5cc_1002_Mmg2Rc
Default principal: luf@TEST

Valid starting     Expires            Service principal
01/02/12 15:56:16  01/03/12 01:56:16  krbtgt/TEST@TEST
	renew until 01/03/12 01:56:16, Etype (skey, tkt): arcfour-hmac, arcfour-hmac
01/02/12 16:33:19  01/03/12 01:56:16  imap/server.test@TEST
	Etype (skey, tkt): arcfour-hmac, arcfour-hmac

I fixed this problem by enlarging LOGIN_MAX_INBUF_SIZE. I also read about 
wrong lower/uppercase, but that's definitely not my problem (I tried all 
possibilities of lower/uppercase in the login).

I sniffed the plain communication and the "a AUTHENTICATE GSSAPI" line has 
around 1873 chars.
When I enlarged LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and 
I'm now able to log in to dovecot using gssapi in the mutt client.

I also use thunderbird (on Windows with SSPI) and it works OK with 
LOGIN_MAX_INBUF_SIZE = 1024.

Does anybody have any idea why it's so large, or how to fix it another way? 
It's terrible to patch each version of the dovecot rpm package. Or is there 
any possibility to change the constant?
I have no idea how much this would affect memory usage.

The simple patch I have to use is attached.

Please cc: to me (luf at pzkagis dot cz) as I'm not a member of this list.

Best regards,

Ludek Finstrle
diff -cr dovecot-2.0.9.orig/src/login-common/client-common.h dovecot-2.0.9/src/login-common/client-common.h
*** dovecot-2.0.9.orig/src/login-common/client-common.h	2012-01-02 18:09:53.371909782 +0100
--- dovecot-2.0.9/src/login-common/client-common.h	2012-01-02 18:30:58.057787619 +0100
***************
*** 10,16 ****
     IMAP: Max. length of a single parameter
     POP3: Max. length of a command line (spec says 512 would be enough)
   */
! #define LOGIN_MAX_INBUF_SIZE 1024
  /* max. size of output buffer. if it gets full, the client is disconnected.
     SASL authentication gives the largest output. */
  #define LOGIN_MAX_OUTBUF_SIZE 4096
--- 10,16 ----
     IMAP: Max. length of a single parameter
     POP3: Max. length of a command line (spec says 512 would be enough)
   */
! #define LOGIN_MAX_INBUF_SIZE 2048
  /* max. size of output buffer. if it gets full, the client is disconnected.
     SASL authentication gives the largest output. */
  #define LOGIN_MAX_OUTBUF_SIZE 4096


Re: [Dovecot] Compressing existing maildirs

2012-01-03 Thread Timo Sirainen
On 31.12.2011, at 9.54, Stan Hoeppner wrote:

> Timo, is there any technical or sanity based upper bound on mdbox size?
> Anything wrong with using 64MB, 128MB, or even larger for
> mdbox_rotate_size?

Should be fine. The only issue is the extra disk I/O required to recreate the 
files during doveadm purge.
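
For example, in dovecot.conf (value illustrative):

	mdbox_rotate_size = 64M

and the purge that rewrites the files is then run per user, e.g. (username
hypothetical):

	# doveadm purge -u someuser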



[Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)

2012-01-03 Thread Jan-Frode Myklebust
On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote:
> Nice setup.  I've mentioned GPFS for cluster use on this list before,
> but I think you're the only operator to confirm using it.  I'm sure
> others would be interested in hearing of your first hand experience:
> pros, cons, performance, etc.  And a ball park figure on the licensing
> costs, whether one can only use GPFS on IBM storage or if storage from
> others vendors is allowed in the GPFS pool.

I used to work for IBM, so I've been a bit uneasy about pushing GPFS too
hard publicly, for fear of being accused of bias. But I changed jobs in
November, so now I'm only a satisfied customer :-)

Pros:
Extremely simple to configure and manage. Assuming root on all
nodes can ssh freely, and port 1191/tcp is open between the
nodes, these are the commands to create the cluster, create an
NSD (network shared disk), and create a filesystem:

# echo hostname1:manager-quorum > NodeFile   # "manager" means this node can be selected as filesystem manager
# echo hostname2:manager-quorum >> NodeFile  # "quorum" means this node has a vote in the quorum selection
# echo hostname3:manager-quorum >> NodeFile  # all my nodes are usually the same, so they all have the same roles
# mmcrcluster -n NodeFile -p $(hostname) -A

### sdb1 is either a local disk on hostname1 (in which case the other
### nodes will access it over tcp via hostname1), or a SAN-disk that
### they can access directly over FC/iSCSI.
# echo sdb1:hostname1::dataAndMetadata:: > DescFile  # this disk can be used for both data and metadata
# mmcrnsd -F DescFile

# mmstartup -A                    # starts GPFS services on all nodes
# mmcrfs /gpfs1 gpfs1 -F DescFile
# mount /gpfs1
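
To sanity-check the result afterwards (standard GPFS status commands; exact
output varies by version):

	# mmgetstate -a    # all nodes should report "active"
	# mmlscluster      # cluster configuration and node roles
	# mmlsnsd          # NSDs and the filesystems they belong to
	# df -h /gpfs1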

You can add and remove disks from the filesystem, and change most
settings without downtime. You can scale out your workload by adding
more nodes (SAN attached or not), and scale out your disk performance
by adding more disks on the fly. (IBM uses GPFS to create
scale-out NAS solutions 
http://www-03.ibm.com/systems/storage/network/sonas/ ,
which highlights a few of the features available with GPFS)

There's no problem running GPFS on other vendors' disk systems. I've
used Nexsan SATAboy earlier, for an HPC cluster, and one can easily
move from one disk system to another without downtime (see the sketch
below).
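
A sketch of such a migration (NSD and filesystem names illustrative): add
the new disks, delete the old ones (mmdeldisk moves their data onto the
remaining disks before removal), then rebalance:

	# mmcrnsd -F NewDescFile
	# mmadddisk gpfs1 -F NewDescFile
	# mmdeldisk gpfs1 "nsd_old1;nsd_old2"
	# mmrestripefs gpfs1 -b    # rebalance blocks over the new disks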

Cons:
It has its own page cache, statically configured. So you don't get the
"all available memory used for page caching" behaviour that you normally
get on linux.
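
The cache size is tuned by hand, e.g. (value illustrative; GPFS needs a
restart on the nodes before it takes effect):

	# mmchconfig pagepool=4G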

There is a kernel module that needs to be rebuilt on every
upgrade. It's a simple process, but it needs to be done and means we
can't just run "yum update ; reboot" to upgrade.

% export SHARKCLONEROOT=/usr/lpp/mmfs/src
% cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr
% vi /usr/lpp/mmfs/src/config/site.mcr   # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION
% cd /usr/lpp/mmfs/src/ ; make clean ; make World
% su - root
# export SHARKCLONEROOT=/usr/lpp/mmfs/src
# cd /usr/lpp/mmfs/src/ ; make InstallImages


> 
> To this point IIRC everyone here doing clusters is using NFS, GFS, or
> OCFS.  Each has its downsides, mostly because everyone is using maildir.
>  NFS has locking issues with shared dovecot index files.  GFS and OCFS
> have filesystem metadata performance issues.  How does GPFS perform with
> your maildir workload?

Maildir is likely a worst-case workload for filesystems: millions of
tiny-tiny files, making all IO random, and getting minimal controller
read cache utilization (unless you can cache all active files). So I've
concluded that our performance issues are mostly design errors (and the
fact that there were no better mail storage formats than maildir at the
time these servers were implemented). I expect moving to mdbox will
fix all our performance issues.

I *think* GPFS is as good as it gets for maildir storage on a cluster
fs, but have no numbers to back that up ... It would be very interesting
if we could somehow compare numbers for a few cluster filesystems.

I believe our main limitation in this setup is the iops we can get from
the backend storage system. It's hard to balance the IO over enough
RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each),
and we're always having hotspots. Right now two arrays are doing <100 iops,
while others are doing 400-500 iops. I'd very much like to replace
it with something smarter where we can utilize SSDs for active data and
something slower for stale data. GPFS can manage this by itself through
its ILM interface, but we don't have the very fast storage to p