Re: [Dovecot] Dovecot 1.1.11 failing to accept SSL/TLS connections

2009-03-19 Thread Mike Brudenell
Hi!
Sorry for the delay in replying: I was waiting for the problem to recur so I
could double-check the logs and the states of the imap-login processes.

2009/3/13 Timo Sirainen t...@iki.fi

 On Mon, 2009-03-09 at 17:41 +, Mike Brudenell wrote:
  We have grown to suspect it is to do with one of the imap-login processes
  having a large number of files open. Killing the process seems to get rid
 of
  the problem.

 You didn't mention if you actually saw "Too many open files" errors in the
 log file. If you didn't, the problem isn't with ulimits.


No, there's no sign of the "Too many open files" error message in the
logfiles.


 Likewise the output of the pfiles command on process 12436 (which is the
 one
  I believe to be problematic) indicates its limit still has some available
 --
  I'm guessing Dovecot has reduced the limit down to 533 from the 10128 set
 in
  the startup script:
 
  Current rlimit: 533 file descriptors

 Yes, v1.1 drops the number of fds to the maximum number that it needs.
 Since you had login_max_connections=256, it doesn't need more than twice
 as many of them. The 12436 process probably was very close to the 256
 connections, and after reaching that it would have stopped accepting
 more.


Ah, I see.

When I upgraded from 1.0.15, 1.1.11 told me off at startup for having the fd
limit set too low at 2048 and told me to raise the limit to at least 10128,
so I did. Hence I was a bit surprised to find the limit lowered to 533 when
it had told me it wanted the higher number.
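(Doing the sums, 533 does fit Timo's explanation; the split below is my own
guess at where the numbers come from, not something stated in his reply:)

echo $(( 2 * 256 ))     # 512 -- two descriptors per allowed connection
echo $(( 533 - 512 ))   # 21  -- presumably the process's own fds (logs, sockets)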



 But there do seem to be bugs related to reaching login_max_connections.
 I'm not entirely sure what bugs exactly though. It's just better not to
 reach it. Perhaps you should change the settings to something like:

 login_processes_count = 2
 login_max_connections = 1024

 login_processes_count should be about the same as the number of
 CPUs/cores on the system (maybe +1).


We're running a pair of servers, each with 8 CPUs. So I'm guessing my

login_processes_count = 10

should be OK?

The servers are handling a LOT of client machines. For example I've just
checked the two machines and as I write there are 1881 imap processes on
one, and 1808 on the other.

I'm guessing that if I increase login_max_connections from its current 256
to 1024 this might delay the problem occurring? And perhaps if I were to
restart Dovecot in the small hours of the night every few days?

Or is an alternative workaround to change login_process_per_connection from
no to yes?

...If I were to do this am I right in thinking that imap-login then plays no
part in SSL-connected IMAP sessions? As it's imap-login that seems to be
having the problem, anything I can do to reduce the number of connections
it's handling would presumably help?

If it's any help in working out what the problem might be, I have the output
from the Solaris pfiles command, which lists all the open files each process
has. The output for a rogue imap-login process shows lots of these as being
S_IFSOCK and connected to clients, as expected. There are also lots which are
AF_UNIX -- I'm guessing these come from the proxying of SSL connections
through imap-login to the imap process? I can send you (Timo) this file
privately if you think it might help any.

Cheers,
Mike B-)


Re: [Dovecot] Dovecot 1.1.11 failing to accept SSL/TLS connections

2009-03-19 Thread Mike Brudenell
Hi again...

2009/3/19 Mike Brudenell p...@azilo.me.uk


 2009/3/13 Timo Sirainen t...@iki.fi

 On Mon, 2009-03-09 at 17:41 +, Mike Brudenell wrote:
  We have grown to suspect it is to do with one of the imap-login
 processes
  having a large number of files open. Killing the process seems to get
 rid of
  the problem.

 You didn't mention if you actually saw "Too many open files" errors in the
 log file. If you didn't, the problem isn't with ulimits.


 No, there's no sign of the "Too many open files" error message in the
 logfiles.


However I have just spotted that this is getting logged each time it
happens:

dovecot: Mar 19 13:29:34 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N

There appears to be one of these for each failed connection attempt to the
IMAP server.

Grepping the code suggests this is presumably coming
from src/imap-login/client.c
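(That is, nothing cleverer than the following; ggrep here being GNU grep,
since Solaris /usr/bin/grep has no -r option:)

ggrep -rn 'Connection queue full' src/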

I'm now peering at the code trying to see if I can spot anything, but have
to confess to not being too wonderful in the area of sockets etc.

Cheers,
Mike B-)


Re: [Dovecot] Dovecot 1.1.11 failing to accept SSL/TLS connections

2009-03-19 Thread Mike Brudenell
Hi (again!) Timo,
2009/3/19 Mike Brudenell p...@azilo.me.uk


 Grepping the code suggests this is presumably coming
 from src/imap-login/client.c

 I'm now peering at the code trying to see if I can spot anything, but have
 to confess to not being too wonderful in the area of sockets etc.

 Cheers,
 Mike B-)


I've been peering at the code in src/imap-login/client.c and in particular
at client_destroy_oldest() trying to see if there's a way it could end up
killing itself, thinking it was the oldest.

So far I've not got anywhere, other than giving myself a headache, and it's
time for me to go home now.

Just one slight oddity I can't quite fathom...

If I understand client_destroy_oldest() correctly it should destroy a number
of clients when it gets called. In our case login_max_connections is 256, so
I'm thinking that

destroy_count = max_connections > CLIENT_DESTROY_OLDEST_COUNT*2 ?
    CLIENT_DESTROY_OLDEST_COUNT : I_MIN(max_connections/2, 1);

should set destroy_count to CLIENT_DESTROY_OLDEST_COUNT, which is 16. So I
was expecting to see 16 messages in the log saying "Disconnected: Connection
queue full": one as each client was disconnected, and that these would
therefore appear in very quick succession.
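(Working that expression through with our numbers, just to check my reading
of it -- a throwaway shell sketch, not Dovecot code:)

max_connections=256
if [ $max_connections -gt 32 ]; then    # 32 = CLIENT_DESTROY_OLDEST_COUNT * 2
    destroy_count=16                    # CLIENT_DESTROY_OLDEST_COUNT
else
    destroy_count=1                     # I_MIN(max_connections/2, 1) for any count >= 2
fi
echo "destroy_count = $destroy_count"   # prints 16 with our settings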

Yet in reality, although the message is logged frequently, I wouldn't say it
comes in quick bursts of 16. I'm including an extract of these grepped from
the log (with IP addresses obfuscated for privacy) so you can see what I mean
from their timestamps:

% fgrep 'Connection queue full' dovecot | tail -32
dovecot: Mar 19 13:34:55 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:34:55 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:34:57 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N, TLS
dovecot: Mar 19 13:34:57 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:34:57 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N, TLS handshaking
dovecot: Mar 19 13:35:04 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=82.132.N.N, lip=144.32.N.N, TLS handshaking
dovecot: Mar 19 13:35:09 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=82.132.N.N, lip=144.32.N.N, TLS
dovecot: Mar 19 13:35:09 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:36:06 Info: imap-login: Disconnected: Connection queue
full (auth failed, 1 attempts): method=PLAIN, rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:36:08 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:36:11 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:36:15 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:36:15 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:36:31 Info: imap-login: Disconnected: Connection queue
full (auth failed, 1 attempts): user=USERNAME, method=PLAIN,
rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:36:31 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N, TLS
dovecot: Mar 19 13:36:54 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:36:54 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:37:02 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:37:26 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N
dovecot: Mar 19 13:37:39 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=82.132.N.N, lip=144.32.N.N, TLS
dovecot: Mar 19 13:37:50 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N, TLS
dovecot: Mar 19 13:38:08 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N, TLS
dovecot: Mar 19 13:38:08 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=86.129.N.N, lip=144.32.N.N, TLS handshaking
dovecot: Mar 19 13:38:09 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N, TLS
dovecot: Mar 19 13:38:09 Info: imap-login: Disconnected: Connection queue
full (no auth attempts): rip=144.32.N.N, lip=144.32.N.N, TLS handshaking
dovecot: Mar 19 13:38:18 Info: imap-login

[Dovecot] Dovecot 1.1.11 failing to accept SSL/TLS connections

2009-03-09 Thread Mike Brudenell
Greetings -
We are running Dovecot 1.1.11 on our servers and have been gradually
migrating people off our old (UW-based) systems onto the new (Dovecot-based)
ones. As a result the new systems are seeing more connections from more
people.

We have started seeing problems reported by users of our webmail service
that they are getting an error indicating the webmail software (Prayer) has
failed to establish an IMAP connection using TLS to Dovecot. Investigations
show it is not just the webmail service that is affected but all mail
clients: it's just that other clients retry the connection, whereas Prayer
fails the login and shows the error message:

TLS/SSL failure for username.imap.york.ac.uk: SSL negotiation failed


It seems to be related to one of Dovecot's imap-login processes accumulating
a lot of open file descriptors. We initially spawn off 10 imap-login
processes and have each handle up to 256 connections. The full dovecot -n
output is at the end of this message, but the relevant settings here are:

login_process_size = 64
login_process_per_connection = no
login_processes_count = 10
login_max_processes_count = 128
login_max_connections = 256


We have grown to suspect it is to do with one of the imap-login processes
having a large number of files open. Killing the process seems to get rid of
the problem.

For example currently we have 11 imap-login processes running, one of which
has 518 open files -- process 12436 in the list below. I suspect that, as in
previous times we've encountered the problem, killing this process will
alleviate the problem. (I'll be doing this later on this evening.)

It is slightly odd that the imap-login processes have a very skewed
distribution of open files, almost as if the algorithm for allocating
connections to a process favours some over others. For example the current
counts of open files are:

Pid   = Open files count

12430 = 42
26818 = 237
12431 = 90
12433 = 12
12438 = 304
12437 = 106
12435 = 190
12432 = 14
12436 = 518
12434 = 32
12429 = 12


Process 12436 was one of the 10 imap-login processes initially created back
on March 3rd. (Process 26818 was the additional imap-login process spawned a
little later on March 4th.)
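(In case anyone wants to reproduce that table: the counts were gathered with
pfiles, the same tool used below. A rough Solaris sketch, assuming pgrep and
pfiles are available, would be:)

for pid in `pgrep -x imap-login`; do
    # every open descriptor shows up as an "N: S_IF..." line in pfiles output
    printf '%s = %s\n' $pid "`pfiles $pid | grep -c 'S_IF'`"
done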

I don't believe the problem lies with the available file descriptors on the
system: in the script which starts Dovecot I do

date > /var/run/dovecot-limits
ulimit -Sa >> /var/run/dovecot-limits
echo "--" >> /var/run/dovecot-limits
ulimit -Ha >> /var/run/dovecot-limits
ulimit -Sn 10128
echo "==" >> /var/run/dovecot-limits
ulimit -Sa >> /var/run/dovecot-limits
echo "--" >> /var/run/dovecot-limits
ulimit -Ha >> /var/run/dovecot-limits
$DOVECOT >> /var/run/dovecot-limits 2>&1


(The magic 10128 number came from Dovecot 1.1.11 itself, complaining that
the number I had originally was too low.)

Likewise the output of the pfiles command on process 12436 (which is the one
I believe to be problematic) indicates its limit still has some available --
I'm guessing Dovecot has reduced the limit down to 533 from the 10128 set in
the startup script:

Current rlimit: 533 file descriptors


We originally saw this problem with Dovecot 1.0.3 which we were running up
until a couple of weeks ago. From there I upgraded first to 1.0.15 and then
to 1.1.11. I'd hoped that upgrading would fix the problem: I see it has been
mentioned before on the list, but not for a year or two.

Can anyone give any help, please?

Cheers,
Mike Brudenell

Configuration info

Platform: Solaris 10

dovecot -n output is:
# 1.1.11: /usr/local/dovecot-1.1.11/etc/dovecot.conf
# OS: SunOS 5.10 i86pc  ufs
base_dir: /var/run/dovecot/
log_path: /logfiles/mail/live/dovecot
listen: *:143
ssl_listen: *:993
ssl_cert_file: /usr/local/dovecot/ssl/certs/imapservice-bundle-2007.pem
ssl_key_file: /usr/local/dovecot/ssl/private/imapservice-key-2007.pem
disable_plaintext_auth: no
shutdown_clients: no
login_dir: /var/run/dovecot/login
login_executable: /usr/local/dovecot-1.1.11/libexec/dovecot/imap-login
login_log_format_elements: user=%Lu method=%m rip=%r lip=%l %c
login_process_per_connection: no
login_greeting_capability: yes
login_processes_count: 10
max_mail_processes: 1
mail_max_userip_connections: 20
mail_location:
maildir:/mailstore/messages/%1Ln/%Ln/Maildir:INDEX=/mailstore/index/%1Ln/%Ln:CONTROL=/mailstore/control/%1Ln/%Ln
mail_plugins: quota imap_quota fts fts_squat
mail_log_prefix: [%p]%Us(%Lu):
imap_client_workarounds: delay-newmail
namespace:
  type: private
  separator: /
  inbox: yes
  list: yes
  subscriptions: yes
auth default:
  mechanisms: plain login
  cache_size: 1024
  cache_ttl: 600
  username_chars:
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890
  username_format: %Ln
  passdb:
driver: shadow
  userdb:
driver: passwd
plugin:
  fts: squat
  fts_squat: partial=4 full=4
  quota: fs:user


[Dovecot] OT: Skins for Squirrelmail - looking for a reminder

2008-02-08 Thread Mike Brudenell

Greetings -

On one of the mailing lists I'm on there was a recent-ish discussion  
about webmail clients, and someone mentioned a site selling sets of  
skins for SquirrelMail... the entire set was about $300.


I'm trying to track down the site but seem to have deleted the message  
I'd carefully been keeping.


I recall an off-topic discussion along these lines here recently, but  
can't locate anything about skins in the archives.  Can anyone recall  
this?  Or am I mis-remembering it as being here on this list?


Confused, but then it is Friday... :-)

Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *



Re: [Dovecot] [OT] Webmail Recommendation

2008-01-11 Thread Mike Brudenell

Greetings -

On 10 Jan 2008, at 21:49, Chris Wakelin wrote:

With Dovecot's caching and indexing, things are much better, but  
there is still a significant overhead on opening lots of  
connections, I fear, especially for mboxes (moving to maildir would  
help of course). I would consider using imapproxy (designed to  
assist with this problem by caching the IMAP connections) but I'm  
not sure whether it would help significantly.


Whatever you do, DON'T move to Maildir if you are using the Prayer  
webmail software!


We have used Prayer here for many years with the UW IMAP server  
backend and first Berkeley, then later MBX, format mail folders.


When we migrated new users to Dovecot with Maildir folders we  
discovered that Prayer does NOT like Maildir folders.  The reason is  
that Maildir folders are dual-purpose: each can contain any mix of  
messages and sub-folders.  However Prayer is intrinsically designed to  
ONLY work with folders that can contain messages or subfolders, but  
NOT both.  The result is that Prayer can show you the list of folders  
to navigate around, but will not list any messages within any folder.


I checked with Cambridge and this is a known and documented  
restriction with Prayer.  Their solution has been to hack Cyrus to  
prevent dual-use folders.  (Timo kindly supplied us with a patch for  
Dovecot 1.0.x to do likewise.)


We are thinking about moving to a different webmail platform soon, so  
I am following this discussion with interest.


I can confirm that webmail software that uses persistent IMAP  
connections is a big win: it not only lightens load on the webmail  
server machine but also, more importantly, on the IMAP servers.


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *



Re: [Dovecot] dovecot-auth consumes 100% CPU time on Solaris 10

2007-11-29 Thread Mike Brudenell

Greetings -

On 29 Nov 2007, at 09:24, Mark Heitmann wrote:

In my $LD_LIBRARY_PATH /usr/lib is behind /usr/local/lib (for openldap),
although dovecot-auth was linked with the Solaris lib. The way that works
for me is the following LDFLAGS directive to the configure command, because
the --with-ldap flag has no directory option:

LDFLAGS="-L/usr/local/BerkeleyDB/lib -L/usr/local/lib /usr/local/lib/libldap-2.4.so.2"


Is there a smarter way to link with the right lib and ignore the  
solaris one?


We used to have terrible problems similar to yours when trying to use  
LD_LIBRARY_PATH.  We now tend to use the -R option as well when  
compiling to specify unusual/specific library directories...


I think I have the following right:

 * -l libraryname searches in an ordered list of locations for a library
   named libraryname.

 * -L dirname augments the above ordered list of locations with the
   directory dirname.

If the library is a non-shared one then the above should suffice: the  
library routines needed by your program are hauled into the resulting  
executable and stored there.


However if, as is often the case, the libraries are instead shared  
(ie, have a .so suffix) then their code is NOT hauled into the  
executable, but is instead pulled in when the executable is actually  
run.  The run-time link-loader does this job.


The run-time link-loader also searches an ordered list of directories,  
this time looking for the shared libraries.  However this list is NOT  
affected by the -L option you used when compiling.


Instead the LD_LIBRARY_PATH (and, I think, the LD_RUN_PATH)  
environment variable influences this list.  However it is easy to end  
up with an inappropriate ordering, and so use the wrong shared library  
when running your program.


Using the -R dirname option at compile time hardcodes the named  
directory into your executable.  When it is run this directory is also  
searched for shared libraries, without the need to fiddle with setting  
up environment variables.


Typically you would list the same directories for both -L and -R  
options when you are using unusual places.  Eg,


  cc -o executable prog.c -lsomelib -L /usr/local/BerkeleyDB/lib -R /usr/local/BerkeleyDB/lib


(All on one line, of course; the mailer will probably wrap the above.)
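To see what actually got recorded and resolved you can, I believe, check with
ldd and dump (or elfdump); a quick sketch, using the same executable name as
above:

  ldd executable                      # which .so files the run-time linker picks
  dump -Lv executable | grep RPATH    # the directory recorded by -R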

It works for us...  :-)

Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *



Re: [Dovecot] Limiting the size of index files

2007-11-21 Thread Mike Brudenell

Greetings -

On 15 Nov 2007, at 07:36, Timo Sirainen wrote:

That was the plan, Maildir keywords are the only exception. I was  
also thinking about changing that some day so that it's not possible  
to set more than 26 keywords with maildir. I think this is currently  
pretty much a nonissue because keyword support is pretty bad with  
most IMAP clients and no-one has more than maybe 10 different  
keywords.


Anyway, by just delete them I meant dovecot.index.cache files, not  
the others. :)


Eeep!  Can I just check that I'm not doing something silly? ...

Our mail layout is as follows (using user abc1 as an example):

  /mailstore/messages/a/abc1   ... Mail files in Maildir format
  /mailstore/control/a/abc1... Control files
  /mailstore/index/a/abc1  ... Index cache files

The first two are NFS-mounted; the last is on local disk.

Our load balancers direct people to their preferred host, failing  
over to another server if need be.  This means that when someone's  
preferred server is down for maintenance they can still access their  
mail, but end up getting index/cache files created on a non-preferred
server.


Obviously these need cleaning up from time to time...

So we are currently using a housekeeping job (tmpreaper) to delete  
anything in or below a user's index directory that hasn't been  
accessed for 90 days.


I had thought I had read this was safe to do: that the index and  
cache files would be (re)built from the Maildir files if need be.   
But from what I read above, is it actually only safe to delete the  
dovecot.index.cache files?  That the other index files have to be left  
in place to avoid data loss?


Or am I worrying unnecessarily: that it would only affect people using  
more than 26 keywords?  (Rare)
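If it does turn out that only the cache files are fair game, I guess the
housekeeping job could be narrowed to something like this rather than
reaping whole directories (an untested sketch):

  find /mailstore/index -name dovecot.index.cache -atime +90 -exec rm -f {} \;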


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *



Re: [Dovecot] How to upgrade a running Dovecot?

2007-10-05 Thread Mike Brudenell

Hi, Jerry/et al -


On 4 Oct 2007, at 20:47, Jerry Yeager wrote:

Have you considered sending out a message to each user to the  
effect that on some day, darned-early a.m. the system will be  
offline for 30 minutes for maintenance (no incoming email will be  
lost, etc., etc.).


We have around 20,000 users at our site and need to keep downtime of  
the e-mail service to an absolute minimum.  The quietest time is at  
around 4:00am ... when I am sound asleep in bed, and plan to stay  
that way!  :-)


Seriously... I'm not new to timing and managing software upgrades:  
I've been doing it for around 19 years here now.


But what I _am_ new to is Dovecot.  Not knowing the software well  
yet, my questions are in an attempt to find the best way to flip it  
to a new configuration or version with minimal/no disruption to  
connected users.




Scenario 1:  Change to dovecot.conf
===
If I make a change to dovecot.conf am I right in thinking I can
simply send a HUP signal to the main dovecot process to get it to re-
read the configuration file and act on its revised content?



Yes, this is correct.


Good...
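(For the record, that reload is nothing more than a signal to the master
process; assuming the usual master.pid in base_dir, something like:)

  kill -HUP `cat /var/run/dovecot/master.pid`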



Scenario 2:  Altered SSL Certificates
=
I need to replace our current certificates and have prepared new
files containing the replacement certificate and private key.  Am I
right in thinking that I can simply modify dovecot.conf to point at
the new files and send a HUP signal to dovecot?  Specifically, will
new connections use the revised certificates, and existing
connections continue to work OK without interruption?


Ehh not really, the auth child processes can be killed and new ones  
started. See your next scenario question.


...So here you're saying that although the dovecot master process  
re-reads the configuration file, doing so has no effect on the  
existing authenticator child processes?  And is it these processes  
that are dealing with the SSL connection? ... I'd have thought it was  
either the imap-login or imap processes?




Scenario 3:  Software Upgrade
=
I build a particular version of Dovecot into the tree
/usr/local/dovecot-A.B.C and then have a symlink called dovecot pointing at
this directory.  To upgrade I can then build the new version into
/usr/local/dovecot-X.Y.Z and test.

To actually switch over the live service to the new X.Y.Z version do
I need to:

   a) Totally shut down the old A.B.C version of Dovecot, thereby breaking
      all open connections for users?  or

   b) Assuming I am using shutdown_clients = no, can I just kill the master
      dovecot process and then start up the new version?



See the preface, do the update when you typically have few folks  
using the system -- which gives you fewer complaints from users  
should things break on their end.


Yes... However the dovecot.conf configuration file includes a comment  
which says this:


# Should all IMAP and POP3 processes be killed when Dovecot master process
# shuts down. Setting this to no means that Dovecot can be upgraded without
# forcing existing client connections to close (although that could also be
# a problem if the upgrade is eg. because of a security fix). This however
# means that after master process has died, the client processes can't write
# to log files anymore.
#shutdown_clients = yes

This implies it *is* possible to upgrade the software without  
breaking existing live connections.  I'm trying to get confirmation  
of this along with any side-effects -- for example the comment seems  
to warn that pre-existing connections will no longer be able to write  
to the logfiles after the changeover?



Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




[Dovecot] Corrupted index cache file error (Dovecot 1.0.3)

2007-10-05 Thread Mike Brudenell

Greetings -

Now that users are beginning to pile up more on our new Dovecot-based  
IMAP service I'm seeing a small number of entries like this in the  
logfiles:


Corrupted index cache file
/mailstore/index/o/ozw100/.INBOX/dovecot.index.cache: invalid record size

We are using Dovecot 1.0.3 with Maildir folders served over NFS from  
NetApp filers, but the index files are stored on local disk.


By "a small number" I mean 4 or 5 on each of our two IMAP servers  
since midnight.


  * Should I be worried?
  * Is there anything I need to do to fix the files?  (I'm guessing not, as
    no problem reports have come in, and the logs show the people affected
    continuing to work throughout the day.)
  * Would upgrading to 1.0.5 help?  (I couldn't see anything in the Release
    Notes for either 1.0.4 or 1.0.5 about this.)

Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




[Dovecot] How to upgrade a running Dovecot?

2007-10-04 Thread Mike Brudenell

Greetings -

Could someone confirm how to perform various upgrades on a live  
system running Dovecot please?



Scenario 1:  Change to dovecot.conf
===
If I make a change to dovecot.conf am I right in thinking I can  
simply send a HUP signal to the main dovecot process to get it to re- 
read the configuration file and act on its revised content?



Scenario 2:  Altered SSL Certificates
=
I need to replace our current certificates and have prepared new  
files containing the replacement certificate and private key.  Am I  
right in thinking that I can simply modify dovecot.conf to point at  
the new files and send a HUP signal to dovecot?  Specifically, will  
new connections use the revised certificates, and existing  
connections continue to work OK without interruption?



Scenario 3:  Software Upgrade
=
I build a particular version of Dovecot into the tree
/usr/local/dovecot-A.B.C and then have a symlink called dovecot pointing at
this directory.  To upgrade I can then build the new version into
/usr/local/dovecot-X.Y.Z and test.


To actually switch over the live service to the new X.Y.Z version do  
I need to:


  a) Totally shut down the old A.B.C version of Dovecot, thereby breaking
     all open connections for users?  or

  b) Assuming I am using shutdown_clients = no, can I just kill the master
     dovecot process and then start up the new version?

Ideally I want existing connections to remain running, but new  
connections to start up using the new X.Y.Z version of Dovecot.


The comment for shutdown_clients implies this, but also says:

This however means that after master process has died, the client
processes can't write to log files anymore.

So if I understand this correctly, then with shutdown_clients = no  
in force the sequence and behaviour is this? ...


1.  Old version A.B.C of Dovecot running; clients can log through the
    master dovecot process to the logfiles.

2.  Kill the old master dovecot process, start the new X.Y.Z version up.

3.  New connections get served by version X.Y.Z.
    Old connections DON'T get killed and can continue, BUT can no longer
    write anything to the logfiles?


With many thanks,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] NFS rquota support

2007-08-07 Thread Mike Brudenell

Greetings -

On 7 Aug 2007, at 13:54, Stewart Dean wrote:

Sorry to be so clueless, but all the activity about rquotad drives  
me to admit my puzzlement (or ignorance)...
I run rquotad on my mail server that also runs DC; rquotad is  
used by the other 3 hosts (a login/FTP server, a mailing list  
server and a user mgmnt server) that NFS mount the folder and inbox  
filesystem... which are under filesystem quota on the mail server  
where they are physically resident.  AFAIK it is not queried on the  
mail server... after all, filesystem quota is running there.  How/why  
does DC need/use rquotad?


I think you have answered your own questions actually! ...

rquotad is used to allow other machines that NFS-mount a filestore to  
query its quotas.  The rquotad daemon runs on the machine serving up  
the filestore to the other clients.


You say that your mail filestore physically resides on your mail  
server.  That means it is a locally attached disk (not mounted using  
NFS from some other server), and so Dovecot can, and does, obtain  
quotas directly from it: it does not need to ask an rquotad daemon.


In contrast here we have the mailstore on a NetApp filer, and mount  
it over NFS on our machines running Dovecot.  In this case Dovecot  
cannot query the quota directly because the filestore isn't on  
locally attached disk.  Instead it must use an RPC (Remote Procedure  
Call) to ask the rquotad daemon running on the file server (in this  
case the NetApp filer) what the quota usage and limits are.
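A quick sanity check that the file server is actually answering such
requests is to ask its portmapper whether rquotad is registered (the host
name here is just a placeholder):

  rpcinfo -p filer.example.com | grep rquota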


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] NFS rquota support

2007-08-07 Thread Mike Brudenell

Greetings -

On 7 Aug 2007, at 14:36, Nicolas STRANSKY wrote:


In fact the patch applies well, dovecot compiles well, but rquota is
still not functional. I have this in config.log:
HAVE_RQUOTA_FALSE='#'
HAVE_RQUOTA_TRUE=''
#define HAVE_RQUOTA

But there is no RPC string in quota-fs.o. Where am I wrong ?


Try a different check: search through your config.h for RQUOTA.  If  
all is well you should have

#define HAVE_RQUOTA

in there.  If it's not then the rquota code isn't going to get  
included.  (Well, that's based on my empirical observations here.)   
If it's not there then try the sequence below...


This is what I did to get the build to include the rquota code:

0.  Apply the patches.
1.  cd to the top level of the distribution directory tree (above src)
2.  Run: autoconf
3.  Run: autoheader
4.  Run: automake
5.  Run: configure
6.  Compile

I think the following is right (forgive me if there's anything wrong)...

autoconf does various tests to see if it is possible to use the  
rquota code (eg, required configuration files and the rpcgen command  
are all available); it builds the configure script from configure.in


autoheader uses configure.in to build config.h.in

automake builds the various Makefile.in files from the corresponding  
Makefile.am files.


configure then builds config.h from config.h.in, and the various  
Makefile files from the Makefile.in files


Then you are ready to compile.

When I first tried using the patches I simply did autoconf then  
automake and found, like you, that the rquota code wasn't included in  
the compilation.  However adding the autoheader step fixed this.


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] Userdb and home directories

2007-08-06 Thread Mike Brudenell

Greetings -

A lot of reading and testing has led me part-way to an answer.  If  
anyone can help me get all the way there I'll be really grateful: I  
only have 48 hours now before the system has to go live!


The problem...

We are using userdb passwd to get a user's details from our main  
NIS map.  This returns uid, gid and normal home directory for each user.


However for mail I don't make any use of the home directory, wanting  
a sealed black box environment that isn't dependent on our other  
file server with home directories on it: just the file server with  
the mail directories.


I understand now that I can change a user's home directory between  
the imap-login and imap processes by altering the configuration file  
to point at a script

mail_executable = /usr/local/dovecot/libexec/postlogin-script
and have the script alter the HOME environment variable:
HOME=`printf '/mailstore/control/%1.1s/%s/home' $USER $USER`
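Pulled together, the sort of post-login wrapper I have in mind is roughly
this (the exec path is only an example; it should be whatever your real imap
binary is):

#!/bin/sh
# Rewrite HOME to live under the control tree, then run the real imap process.
HOME=`printf '/mailstore/control/%1.1s/%s/home' "$USER" "$USER"`
export HOME
exec /usr/local/dovecot/libexec/dovecot/imap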

[Quick side-question: should I be using $USER or $RESTRICTED_USER  
here?  I can't work out what the difference between them is.  Both  
are set within Dovecot's standard environment.]


HOWEVER...

The problem I'm trying to avoid is having Dovecot refuse to log  
someone in if their home directory gives a Permission denied error  
(eg, when their home directory filer is in a funny state).  This test  
appears to be done very early on, in the imap-login process (I  
think): definitely before the post-login script runs.


Is there some way of overriding the home directory used in the very  
early (imap-login?) process?  At present I can only think of either:


a) Edit /etc/passwd with a dummy home directory for all users to appease
   the very early check, then use the postlogin script to set the real
   home directory up for the main imap process, or

b) Edit the source code to do likewise.

I keep hoping I've missed something and there is a cleaner way to  
override the value for home returned by the passwd userdb before its  
initial use in src/master/mail-process.c create_mail_process()


Any help gratefully received!

Cheers,
Mike B-)


On 3 Aug 2007, at 11:33, Mike Brudenell wrote:

We use shadow for the passdb and passwd for the userdb (see dovecot  
-n output below).  I'm trying to work out how to override the home  
directory returned from NIS.  Ultimately I'd like to use this  
template:


/mailstore/control/%1Ln/%Ln/home

but for the time being while I'm trying to work out how to do it  
have my own area hard-coded in (as it's only me logging in to the  
test system):


/mailstore/control/p/pmb1/home

I'm specifying this with the args directive in the userdb section  
as follows:


   args = home=/mailstore/control/p/pmb1/home

but it isn't being picked up.  What am I doing wrong, please?
(We want to continue using uids and gids etc from NIS so I don't  
think using the static userdb is the right thing to do?)


--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] NFS rquota support

2007-08-03 Thread Mike Brudenell

Greetings -

On 3 Aug 2007, at 01:00, Nicolas STRANSKY wrote:


I think the configure failed to set HAVE_RQUOTA to config.h.


But this is what I had from ./configure:
checking rpcsvc/rquota.h usability... yes
checking rpcsvc/rquota.h presence... yes
checking for rpcsvc/rquota.h... yes


Yup, I got that initially too but didn't get the rquota code included  
in the build: I checked by searching the quota-fs.o file for RPC...


strings quota-fs.o | grep RPC

The string should be present if the rquota code got included.

I'm not familiar with the intricacies of GNU Autoconf etc but managed  
to work out that I need to do the following in the top level of the  
Dovecot source tree (above the src directory):


autoconf
autoheader
automake

If you omit the autoheader step you have configure detect that rquota  
is available, but HAVE_RQUOTA still doesn't get defined: grep the  
config.h file to check whether this is the case.
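(That check is just something like:)

  grep RQUOTA config.h    # should show the "#define HAVE_RQUOTA" line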


The autoheader step rebuilds config.h.in, from which configure then  
produces a config.h that includes the HAVE_RQUOTA definition.   
Compiling your source then works.


Aside:  Can someone confirm whether I'm running these three utilities in  
the correct order?

Once you get the code included you may find that, like I did, the  
code compiles but refuses to run.  I'll try and work out a patch to  
the Makefile to build the xdr file that needs including (but, as I  
said before, I've never used the GNU autoconf stuff before so forgive  
me if I get it wrong!).


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] Userdb and home directories (clarification)

2007-08-03 Thread Mike Brudenell

Greetings -

I'm just feeling I need to clarify my previous message a bit to  
explain the problem better...


On 3 Aug 2007, at 11:33, Mike Brudenell wrote:

We have two NetApp filers: one serves people's home directories;  
the other their message store and control files filestores.


At the moment the first of the two filers is not accessible to my  
Dovecot system and I assumed all would be well because, as far as I  
knew, I wasn't using it at all.


We have two NetApp filers.  One serves people's real home  
directories, and the other is serving the mailstore.  The mailstore  
comprises two separate areas: one with quotas to store the messages  
in Maildir format; the second without quotas to store the control  
files for each user.  The general format of these is:


Message store:  /mailstore/messages/letter/username
Control files:  /mailstore/control/letter/username

where letter is the first character of the username

We want the mail service to operate as a black box, with all  
necessary files stored on its filer.


In particular we do not want anything stored within someone's home  
directory, and want the mail service to continue if the filer serving  
home directories is unavailable.


I am using passdb shadow and userdb passwd to authenticate and  
get users' details.  These are being read from NIS, with each user  
having their own uid and gid.


Because it is the general NIS map, its home directory field specifies  
the user's real home directory on the other filer.


Because I don't use %h anywhere in Dovecot's configuration I had  
assumed it did not use the home directory at all, and hence is  
independent of the other filer.  However this morning's issue has  
shown this is not the case...


As things stand Dovecot is using chdir() to move to the user's home  
directory, apparently in the early setup after logging in.  As the  
home directories are currently unavailable to my test Dovecot  
service, giving a Permission denied error, Dovecot is aborting the  
session and so I'm not able to read mail.


We can't have this for our production service so I'm trying to find  
out how to make things truly independent of the other (home  
directories) filer.  I've read in the Wiki that it's best to set up a  
home directory for users and will be happy to have this as a  
subdirectory below the control files' directory. For example


/mailstore/control/p/pmb1/home/...

However I can't find a way of telling Dovecot NOT to use the home  
directory returned from the userdb passwd lookup, and instead use  
the above.


I know setting the home directory is possible from userdb static,  
but we don't want everyone to use a single uid/gid: we want them each  
to use their own uids and gids so the filestore-based quotas work.


Can someone guide me in this please?
Either how to override the home directory setting, or an alternative  
way of configuring things to give the black box environment we are  
after?


With many thanks,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




[Dovecot] Userdb and home directories

2007-08-03 Thread Mike Brudenell

Greetings -

I've just discovered an oddity I didn't know I had...

We have two NetApp filers: one serves people's home directories; the  
other their message store and control files filestores.


At the moment the first of the two filers is not accessible to my  
Dovecot system and I assumed all would be well because, as far as I  
knew, I wasn't using it at all.


However in practice Dovecot DOES appear to be using it: it is trying  
to chdir() to my home directory when I login, finds it can't at the  
moment (because of a problem giving Permission denied), and closes  
the connection.


I've read the pages on home directories and the userdb on the Wiki  
and it advises that having a home directory is beneficial.  I'm happy  
to create a subdirectory for this within a user's control files  
directory, but do NOT want it on our normal filestore: we can't have  
mail inaccessible because a user's home directory is inaccessible  
because the other filer is out of action.


We use shadow for the passdb and passwd for the userdb (see dovecot -n  
output below).  I'm trying to work out how to override the home  
directory returned from NIS.  Ultimately I'd like to use this template:


/mailstore/control/%1Ln/%Ln/home

but for the time being while I'm trying to work out how to do it have  
my own area hard-coded in (as it's only me logging in to the test  
system):


/mailstore/control/p/pmb1/home

I'm specifying this with the args directive in the userdb section  
as follows:


   args = home=/mailstore/control/p/pmb1/home

but it isn't being picked up.  What am I doing wrong, please?
(We want to continue using uids and gids etc from NIS so I don't  
think using the static userdb is the right thing to do?)


Cheers,
Mike B-)

Output of dovecot -n

# 1.0.3: /usr/local/dovecot-1.0.3/etc/dovecot.conf
log_path: /logfiles/mail/live/dovecot
info_log_path: /logfiles/mail/live/dovecot-info
disable_plaintext_auth: no
login_dir: /var/run/dovecot/login
login_executable: /usr/local/dovecot-1.0.3/libexec/dovecot/imap-login
login_log_format_elements: user=%Lu method=%m rip=%r lip=%l %c
login_process_per_connection: no
login_greeting_capability: yes
login_process_size: 64
login_processes_count: 10
max_mail_processes: 1
mail_location: maildir:/mailstore/messages/%1Ln/%Ln/Maildir:INDEX=/mailstore/index/%1Ln/%Ln:CONTROL=/mailstore/control/%1Ln/%Ln

maildir_copy_with_hardlinks: yes
mail_plugins: quota imap_quota
mail_log_prefix: [%p]%Us(%Lu):
imap_client_workarounds: delay-newmail outlook-idle
namespace:
  type: private
  separator: /
  inbox: yes
auth default:
  mechanisms: plain login
  cache_size: 1024
  cache_ttl: 600
  username_chars:  
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890

  username_format: %Ln
  passdb:
driver: shadow
  userdb:
driver: passwd
args: home=/mailstore/control/p/pmb1/home
plugin:
  quota: fs


--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] NFS rquota support

2007-08-02 Thread Mike Brudenell

AAARRGGH!!

On 2 Aug 2007, at 16:19, Nicolas STRANSKY wrote:


This applied and compiled well with v1.0.3.


The patches applied (with fuzz and offsets) to 1.0.3 and compiled OK  
under Solaris 10 with Sun's C compiler.


However when I try and start Dovecot I get:

Error: dlopen(/usr/local/dovecot-1.0.3/lib/dovecot/imap/lib10_quota_plugin.so)
failed: ld.so.1: imap: fatal: relocation error: file
/usr/local/dovecot-1.0.3/lib/dovecot/imap/lib10_quota_plugin.so:
symbol xdr_getquota_args: referenced symbol not found

Error: imap dump-capability process returned 89
Fatal: Invalid configuration in /usr/local/dovecot-1.0.3/etc/dovecot.conf


What am I missing?

(The especially annoying thing is that xdr_getquota_args is used in  
the test program included in the source code I sent out a few days  
ago and that worked... but I can't spot what is different in the  
linking/runtime.)


Cheers,
Mike B-(

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] NFS rquota support

2007-08-02 Thread Mike Brudenell

On 2 Aug 2007, at 16:56, Mike Brudenell wrote:


What am I missing?

(The especially annoying thing is that xdr_getquota_args is used in  
the test program included in the source code I sent out a few days  
ago and that worked... but I can't spot what is different in the  
linking/runtime.)


Ah!  The sample program I was using #include's the file rquota_xdr.c  
(generated by rpcgen from the rquota.x) which defines the function  
xdr_getquota_args()


However there doesn't seem to be anything in the existing 1.0.3  
source or the three patches that provides this function.


Is there something missing from the patches at
http://hg.dovecot.org/dovecot/diff/078d9dde99c8/src/plugins/quota/quota-fs.c
?

Nicolas Stransky: Have you actually got it running yet, or just compiled?
                  If running, did you have to do anything to overcome this
                  missing symbol?

Cheers,
Mike B-}

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] NFS rquota support

2007-08-02 Thread Mike Brudenell

Sorry for the flurry of messages! ...

On 2 Aug 2007, at 17:06, Mike Brudenell wrote:

However there doesn't seem to be anything in the existing 1.0.3  
source or the three patches that provides this function.


Is there something missing from the patches at
http://hg.dovecot.org/dovecot/diff/078d9dde99c8/src/plugins/quota/quota-fs.c
?


I've just tried #including the rquota_xdr.c file produced by running  
rpcgen on the rquota.x file (as in the code I sent out a few days ago).


That compiled and runs OK, and successfully returns storage figures when  
queried with the GETQUOTAROOT command:


a02 getquotaroot inbox
* QUOTAROOT inbox pmb1
* QUOTA pmb1 (STORAGE 72970240 104857600)
a02 OK Getquotaroot completed.

*BUT* the units returned are not correct: they appear to be coming  
back as bytes when the RFC says that they should be in units of 1024  
octets:


http://rfc.net/rfc2087.html#s3.
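(Dividing the figures above by 1024 gives exactly the numbers I'd expect a
correct reply to contain, which is why I think it's a straight bytes versus
1024-octet-units mix-up:)

  echo $(( 72970240 / 1024 ))     # 71260  -- usage, in units of 1024 octets
  echo $(( 104857600 / 1024 ))    # 102400 -- limit, i.e. a 100 MB quota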

The fix is, I think, trivial: just update a comment and divide by  
1024 in a couple of places.  See attached patch, to be used after the  
other three.


The mystery of the missing xdr_getquota_args() still needs sorting  
properly though.


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *



quota-fs.c.patch2
Description: Binary data




[Dovecot] Mount options and NFS: just checking...

2007-08-01 Thread Mike Brudenell

Greetings -

I'm now in the last couple of weeks before going live with Dovecot  
(v1.0.3) on our revamped IMAP service.  I'd like to double-check  
about the best mount options to use; could someone advise, please?


I have three separate directory trees for the message store, the  
control files and the index files.  These are arranged as follows:


Message Store
Mounted over NFS from a NetApp filer; filestore quotas are ENABLED.

Control Files
Mounted over NFS from the NetApp filer; filestore quotas are  
DISABLED.


Index Files
Mounted on local disk; filestore quotas are DISABLED.

We will have a pair of Solaris 10 servers on which Dovecot 1.0.3 is  
running.  Users are normally directed to their preferred server  
but, if it is unavailable, will go via the other server.


Q1.  Am I right in thinking that for the Message Store and Control Files
     I should NFS-mount both of them with actimeo=0?

     (Or would noac be better, which also turns off client write-caching
     as well as the attribute cache?)

Q2.  Should I NFS-mount either or both of the Message Store and Control
     Files with the noxattr option to turn off extended attributes?

Q3.  Which of the filestores should I mount with the noatime option?
     (I understand that for the filer-based NFS mounts this can be done on
     the filer, as the option isn't available with mount for NFS-mounted
     filestores.)

Q4.  Any other options to use/miss out?


Currently my understanding from the list and the NFS page at the Wiki  
is:


A1.  Use actimeo=0 for both Message Store and Control Files.
     No idea about the need for/impact of using noac.

A2.  No idea.

A3.  Safe to use noatime for all three filestores.
     (I understand Dovecot will use utime() when needed on such filestores,
     but am not sure if it will work on the NFS-mounted filestores from the
     filer if access times are turned off at the filer end.)
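For concreteness, the kind of mount A1 implies on Solaris would, I think,
look something like this (host and volume names are just placeholders):

  mount -F nfs -o proto=tcp,actimeo=0 filer:/vol/mailstore/control /mailstore/control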

Any thoughts, please?

Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] Mount options and NFS: just checking...

2007-08-01 Thread Mike Brudenell

Hi, Timo -

On 1 Aug 2007, at 13:14, Timo Sirainen wrote:
Q3.  Which of the filestores should I mount with the noatime option?

A3.  Safe to use noatime for all three filestores.
     (I understand Dovecot will use utime() when needed on such filestores,
     but am not sure if it will work on the NFS-mounted filestores from the
     filer if access times are turned off at the filer end.)


If your kernel caches atime updates locally, then maybe you shouldn't
use noatime for message store until v1.1. It doesn't cause real
problems, but Dovecot wastes time looking for old files in tmp/
directories all the time.



Many thanks!  I think I'm nearly there now: could you just clarify the  
items flagged with (??) below?


In case it makes a difference, messages will generally be delivered  
by a user's preferred IMAP server, but by Exim rather than Dovecot's  
deliver.


actimeo=0 option
    NFS-mounted Message Store: NOT needed, but if used can help spot new
                               messages more quickly
    NFS-mounted Control Files: REQUIRED for now
    Local Index Files:         Not applicable

noatime option
    NFS-mounted Message Store: Probably OK to use, but safer not to until v1.1
    NFS-mounted Control Files: Can be used safely(??)  (Or as above??)
    Local Index Files:         Can be used safely(??)

With many thanks,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




[Dovecot] Dovecot 1.0.x and NFS-mounted mailstore quotas

2007-07-31 Thread Mike Brudenell

Greetings -

On 30 Jul 2007, at 10:05, Joseba Torre wrote:

Could you share it please? As previously reported, the official  
patch is not

working right now.

Aaagur.


I'm attaching the patch files and a tar archive that comprise the  
changes I use to support reading quotas from NFS-mounted filestore.   
The patches are against the Dovecot 1.0.2 source code.


The files within the rpc-quota.tar.gz archive are from an old posting to  
a NetApp filers list; they form a sample means of querying the rquotad  
on the filer.


It assumes you have the command rpcgen on your system to process  
the supplied rquota.x file and produce the associated include files.


So it's not quite just applying the patches, but is very simple; just  
save the attached files somewhere and follow the steps below to apply  
them...



mountpoint.c
  This is a patch Timo put out on the list a week or so back following my
  problems skipping over filestores with the ignore options set.  I've used
  this on our non-production server and have found it fine.  You may need it
  if you are using the automounter, otherwise the NFS quota code may end up
  operating on the automount template point rather than the real mounted
  directory.

cd src/lib
gpatch < mountpoint.c


The remaining three files are all for the quota plugin directory, so  
start off by going there:


cd src/plugins/quota


rpc-quota.tar.gz
  This is a GZipped tar archive containing a directory to be unpacked here.
  The directory contains the rquota.x and Makefile from the sample code I
  found.  It also includes my own rpc-quota.h and rpc-quota-code.c files:
  these get included in quota-fs.c by one of the patches below.

gzcat rpc-quota.tar.gz | tar xvf -

  Now move down into the rpc-quota directory you've just unpacked and
  generate the other include files you need using rpcgen:

cd rpc-quota
gmake
cd ..


quota-fs.c
  This is a patch that first includes the rpc-quota/rpc-quota.h file, which
  includes some header files and defines the fetchnfsquotas() function we'll
  be using.  It also changes the code that attempts to get the quota from a
  locally mounted filesystem.  If this wasn't possible (root->mount->fd == -1),
  instead of returning, call the code included from rpc-quota/rpc-quota-code.c
  to check for a ":" in the path and, if found, use an RPC to get the quota
  from the NFS server.

gpatch < quota-fs.c


quota.c
  Strictly speaking you don't need this patch, although I would suggest it.
  When preparing the quotaroot name this change checks if the name would be
  the empty string and, if it is, replaces it with the username obtained by
  calling getpwuid(getuid()).  Without this patch Apple's Mail program
  recognises the server supports quotas (from the capability string), but
  does not show their values or draw the bar-chart.  Basically it will only
  do so if the quotaroot name is non-empty.  [The IMAP specification allows
  for empty -- "" -- quotaroot names, so Apple's Mail client is technically
  in violation.]

gpatch < quota.c

  You'll get output looking something like this:

% gmake
rpcgen -o rquota.h -h rquota.x
cc  -O  -g  -DSUNOS5  -c -o quotatest.o quotatest.c
rpcgen -o rquota_xdr.c -c rquota.x
cc  -O  -g  -DSUNOS5  -c -o rquota_xdr.o rquota_xdr.c
rpcgen -o rquota_clnt.c -l rquota.x
cc  -O  -g  -DSUNOS5  -c -o rquota_clnt.o rquota_clnt.c
cc  -o quotatest quotatest.o rquota_xdr.o rquota_clnt.o  -lsocket -lnsl


  Don't worry unduly about the -DSUNOS5 macro being defined: that is for
  the quotatest test program, and doesn't seem to be for the files generated
  by rpcgen (I think!).



That should be everything: just recompile, install and hope!

You can test it by connecting to your IMAP server and giving a  
GETQUOTAROOT command:


telnet imap.yourdomain.tld 143
a01 login yourusername yourpassword
a02 getquotaroot inbox
...output appears here...
a03 logout

You should get output that looks something like this:

a02 getquotaroot inbox
* QUOTAROOT inbox pmb1
* QUOTA pmb1 (STORAGE 71260 102400)
a02 OK Getquotaroot completed.

Then give it a go for real.

If you find it doesn't work don't give up hope yet.  Instead rummage  
through your /usr/include directory tree looking for your own  
system's rquota.x file; then try it instead of the one I've included.


Finally, remember that the files in the rpc-quota directory aren't  
included in Dovecot's own Makefiles.  So if you change any of them  
you'll need to update the timestamp of the quota-fs.c plugin before  
make will notice the change and recompile it:


touch src/plugins/quota/quota-fs.c

Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *



mountpoint.c
Description: Binary 

Re: [Dovecot] RFE: please include quota warning patch

2007-07-30 Thread Mike Brudenell

Greetings -

On 25 Jul 2007, at 15:18, Timo Sirainen wrote:


On 23.7.2007, at 18.41, Farkas Levente wrote:


it'd be very useful to include the quota warning patch in the official
release. Without it the quota support is not really useful, since mail is
simply dropped when the quota is exceeded and most end users never know what
happened; they just notice that mail has stopped arriving :-(


It's already in v1.1 tree, and v1.0 won't be having any new  
features. If it's not featureful enough, use v1.1. :)


Will v1.1 include support for reporting quotas from NFS-mounted  
mailstores please?


We are using filestore mounted over NFS from a NetApp filer with  
Maildir and filestore-based quotas.  It would be nice if Dovecot was  
able to report current quota usage and the maximum allowed.


(I've made a patch which I'm using for this on v1.0.2 at present, but  
am hoping it'll be officially supported in v1.1)


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] Small problem with src/lib/mountpoint.c [now with patch attached!]

2007-07-19 Thread Mike Brudenell

Greetings -

On 18 Jul 2007, at 22:31, Timo Sirainen wrote:


Could you try that both of these patches work:

http://hg.dovecot.org/dovecot-1.0/rev/89184bad9d10
http://hg.dovecot.org/dovecot/rev/cc8b6e73e830

I'm not sure from what version Solaris has getextmntent(), but I think
from at least 8 so it should be safe to use. Anyway I didn't want to do
that for Dovecot v1.0 just in case.


Looks like they work fine!  Wonderful!

Most of our machines are running Solaris 10 but there are still a  
couple using Solaris 8 and these appear to have getextmntent()  
available.


Thanks ever so,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] Small problem with src/lib/mountpoint.c [now with patch attached!]

2007-07-18 Thread Mike Brudenell

Hi Timo, et al -

On 17 Jul 2007, at 18:12, Timo Sirainen wrote:


On 17.7.2007, at 13.55, Mike Brudenell wrote:

auto_direct  /mailstore/messages/p  autofs  direct,ignore,dev=4740014  1184648400
crypt2.york.ac.uk:/vol/vol9/p  /mailstore/messages/p  nfs  proto=tcp,xattr,dev=4700386  1184668792


Although there are two entries they have different device  
numbers.  The mountpoint_get() function attempts to guard against  
this by checking the device number obtained matches that of the  
path being located.


What do you mean different device numbers? Both have the same mount  
path, so how can they be different?


I'm afraid I don't understand the innards of the automounter.  All I  
can say is what I see...


We use the automounter extensively as it gives central management for  
shares and saves having to edit /etc/vfstab files on umpteen machines  
all over the place.


A host making use of the automounter has entries in the /etc/mnttab  
for all the filestores available to it TO BE mounted, even when they  
AREN'T ACTUALLY mounted at the moment.


When something accesses a file or directory in the filestore the
automounter leaps in and furtively mounts the filestore before granting
access.  This results in a SECOND entry for that filestore in the
/etc/mnttab file, which has the same mount point as the first entry.


The two entries have different dev=N values in their options  
fields to distinguish them.  The thing I find confusing/surprising is  
that:


1.  When I visually inspect the contents of /etc/mnttab the two entries for
    /mailstore/messages/p have DIFFERENT dev=N entries, but

2.  When I put debug logging into Dovecot's loop that iterates through the
    /etc/mnttab entries using getmntent() both entries come through with the
    SAME dev=N value ... the first (autofs) entry is returned with the
    device number from the second (real) mount's entry in the file.

    If the different (unique) device numbers were being yielded by getmntent()
    then there wouldn't be a problem: Dovecot is checking these and would
    skip the first (autofs) entry as the number was wrong.

I'll do some testing using a minimal test program to check this  
happens then.


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] Small problem with src/lib/mountpoint.c [now with patch attached!]

2007-07-18 Thread Mike Brudenell

Ah!  After a bit of test programming I understand what's going on now.

If I'm right then there are a few incorrect assumptions in Dovecot's  
logic.  (Sorry! :-)



On 17 Jul 2007, at 18:12, Timo Sirainen wrote:

auto_direct  /mailstore/messages/p  autofs  direct,ignore,dev=4740014  1184648400
crypt2.york.ac.uk:/vol/vol9/p  /mailstore/messages/p  nfs  proto=tcp,xattr,dev=4700386  1184668792


Although there are two entries they have different device  
numbers.  The mountpoint_get() function attempts to guard against  
this by checking the device number obtained matches that of the  
path being located.


What do you mean different device numbers? Both have the same mount  
path, so how can they be different?


Looking again at the above two example lines from our /etc/mnttab I see
that the first entry includes ignore as an option.  I recollected that
Dovecot's code in mountpoint_get() checks for filesystems marked ignore
and should, umm, ignore them(!) when searching for the correct entry in
the mount table ... yet it wasn't working.


So I've just written a minimal program to display all the entries of
/etc/mnttab and their fields using getmntent(), and now think Dovecot's
checks are incorrect, at least on Solaris...



PROBLEM 1 :: Skipping swap and ignored filestores is incorrect
--------------------------------------------------------------
The mountpoint_get() function in lib/mountpoint.c uses getmntent() to  
fetch each entry from /etc/mnttab in turn.  It then checks the  
ent.mnt_fstype field to see if it is MNTTYPE_SWAP (swap) or  
MNTTYPE_IGNORE (ignore) and, if it is, the entry is skipped.


However, this appears to be incorrect for Solaris 10 (and probably
earlier versions too).  For example:

  * swap does NOT return swap as the value of ent.mnt_fstype
  * ignored filestores do NOT return ignore as the value of ent.mnt_fstype


Instead you get:

  * swap returns swap as the value of ent.mnt_special and tmpfs for
    ent.mnt_fstype

  * ignored filesystems return ignore as one of the options within the
    ent.mnt_mntopts field.  (NB: this might be one of several comma-separated
    options and could appear anywhere in the list.)

So I now believe that (see the minimal test sketch below):

1.  My fix of also skipping filestores whose ent.mnt_fstype is autofs is not
    fully correct -- the problem should really be dealt with by properly
    checking for the ignore option instead;

2.  The ignore option should be checked for within ent.mnt_mntopts instead
    of ent.mnt_fstype; this would deal with the problem of the template
    autofs entry being picked up and used, as it has the ignore option.

3.  Swap should be detected by checking either ent.mnt_special being swap or
    ent.mnt_fstype being tmpfs -- I'm not sure which is correct, or if both
    tests are needed.
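
Here is the kind of minimal test program I used, cleaned up for the list.
It is a sketch along the lines above rather than my exact code, but it shows
the checks I now think are needed on Solaris: skip swap via ent.mnt_special,
skip ignored entries by looking for the ignore option with hasmntopt(), and
print each mount point's st_dev for comparison:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/mnttab.h>

int main(void)
{
    FILE *fp = fopen(MNTTAB, "r");   /* MNTTAB is "/etc/mnttab" on Solaris */
    struct mnttab ent;
    struct stat st;

    if (fp == NULL)
        return 1;

    while (getmntent(fp, &ent) == 0) {
        if (strcmp(ent.mnt_special, "swap") == 0)
            continue;                        /* swap shows up in mnt_special */
        if (hasmntopt(&ent, "ignore") != NULL)
            continue;                        /* e.g. automounter template    */

        if (stat(ent.mnt_mountp, &st) == 0) {
            printf("%s on %s type %s opts %s st_dev %lx\n",
                   ent.mnt_special, ent.mnt_mountp, ent.mnt_fstype,
                   ent.mnt_mntopts, (unsigned long)st.st_dev);
        }
    }
    fclose(fp);
    return 0;
}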


PROBLEM 2 :: Matching an entry in /etc/mnttab using getmntent() is incorrect
----------------------------------------------------------------------------

As for the oddity with device numbers I mentioned previously, I now  
understand this and think it shows up another incorrect assumption...


When using the automounter you get a permanent entry in /etc/mnttab  
that I will refer to as a 'template' entry.  In its options field it  
includes a device number of the form dev=X where X is a  
hexadecimal number (at least, it is hex on Solaris 10!).


When the automounter REALLY mounts the filestore as it is needed it  
ADDS a second entry for the mounted filesystem.  This has its own  
(different) dev=X number.  Don't ask me why this is: presumably  
there are Good Reasons.


When locating the entry for a path in /etc/mnttab Dovecot tries to do  
this by comparing the device number for the entry with the device  
number of the path being located.  It does this by using stat() on  
the mount point given in the entry.


Whilst this works for manually-listed mounts it does not work when  
the auto-mounter is involved.  This is how Dovecot is being fooled...


1.  We start off with just the automounter's 'template' entry in /etc/mnttab:

auto_direct  /mailstore/messages/p  autofs  direct,ignore,dev=4740014  1184648400

2.  Dovecot starts by performing a stat on the path to be located.  This
    causes the automounter to secretly mount the real filestore -- adding the
    second entry to /etc/mnttab with its different device number:

crypt2.york.ac.uk:/vol/vol9/p  /mailstore/messages/p  nfs  proto=tcp,xattr,dev=4700386  1184668792

    The number stat() returns is THIS device number -- 4700386 -- and NOT
    that from the 'template' (4740014).  This gets stored away in the st
    structure ready for later comparisons.

3.  Dovecot now uses getmntent() to fetch each entry from /etc/mnttab in turn.
    It calls stat() on the mount point in the entry to get its device number
    into the st2 structure.

In the case of using the automounter the first entry 

[Dovecot] Small problem with src/lib/mountpoint.c

2007-07-17 Thread Mike Brudenell

Greetings -

Whilst playing with getting quotas from NFS-mounted filestores I've  
just discovered a slight problem with src/lib/mountpoint.c


I had things working on a Solaris 10 test machine which had the  
mailstore mounted 'normally' using an entry in /etc/vfstab.


However when I changed to use the automounter for the mailstore  
obtaining the quota broke.


The problem lies with mountpoint_get().  On a 'normally' mounted  
filestore this obtains the device_path of the remote filestore in the  
form


remoteservername.some.domain:/remote/path

However when the mailstore was automounted I was instead getting back

auto_direct

which is the name of the automounter map we're using.

The /etc/mnttab file has TWO entries for the auto-mounted filestore,  
like this:


auto_direct  /mailstore/messages/p  autofs  direct,ignore,dev=4740014  1184648400
crypt2.york.ac.uk:/vol/vol9/p  /mailstore/messages/p  nfs  proto=tcp,xattr,dev=4700386  1184668792


Although there are two entries they have different device numbers.   
The mountpoint_get() function attempts to guard against this by  
checking the device number obtained matches that of the path being  
located.


However for some reason I don't understand when getmntent() returns  
the FIRST of the above two entries it supplies the device number of  
the SECOND.  Thus the device numbers match in the mountpoint_get()  
test and it thinks it has found the correct entry.  This leads to the  
device_path being set to the map name -- auto_direct -- instead of  
the host:/path


My fix is to add an additional fstype for the test to ignore:  
autofs.  The loop skips over any such mounts (along with the  
existing tests for swap and ignore).
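
To give a feel for the change, the test ends up along these lines (a sketch
of the idea only, not the patch itself):

#include <string.h>
#include <sys/mnttab.h>

/* Dovecot's loop uses the MNTTYPE_SWAP / MNTTYPE_IGNORE macros here; the
 * string literals below just keep this fragment self-contained. */
static int skip_mnttab_entry(const struct mnttab *ent)
{
    return strcmp(ent->mnt_fstype, "swap") == 0 ||
           strcmp(ent->mnt_fstype, "ignore") == 0 ||
           strcmp(ent->mnt_fstype, "autofs") == 0;   /* the addition */
}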


I'm attaching my minimal patch in case it's of help to anyone else.

(Timo: would it be possible to get this added into the distribution  
code if you agree it's right?)


Cheers,
Mike Brudenell

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




[Dovecot] Small problem with src/lib/mountpoint.c [now with patch attached!]

2007-07-17 Thread Mike Brudenell

Greetings -

[SIGH: I hit the Send button instead of Attach.  Here's Take 2...]

Whilst playing with getting quotas from NFS-mounted filestores I've  
just discovered a slight problem with src/lib/mountpoint.c


I had things working on a Solaris 10 test machine which had the  
mailstore mounted 'normally' using an entry in /etc/vfstab.


However when I changed to use the automounter for the mailstore  
obtaining the quota broke.


The problem lies with mountpoint_get().  On a 'normally' mounted  
filestore this obtains the device_path of the remote filestore in the  
form


remoteservername.some.domain:/remote/path

However when the mailstore was automounted I was instead getting back

auto_direct

which is the name of the automounter map we're using.

The /etc/mnttab file has TWO entries for the auto-mounted filestore,  
like this:


auto_direct  /mailstore/messages/p  autofs  direct,ignore,dev=4740014  1184648400
crypt2.york.ac.uk:/vol/vol9/p  /mailstore/messages/p  nfs  proto=tcp,xattr,dev=4700386  1184668792


Although there are two entries they have different device numbers.   
The mountpoint_get() function attempts to guard against this by  
checking the device number obtained matches that of the path being  
located.


However for some reason I don't understand when getmntent() returns  
the FIRST of the above two entries it supplies the device number of  
the SECOND.  Thus the device numbers match in the mountpoint_get()  
test and it thinks it has found the correct entry.  This leads to the  
device_path being set to the map name -- auto_direct -- instead of  
the host:/path


My fix is to add an additional fstype for the test to ignore:  
autofs.  The loop skips over any such mounts (along with the  
existing tests for swap and ignore).


I'm attaching my minimal patch in case it's of help to anyone else.

(Timo: would it be possible to get this added into the distribution  
code if you agree it's right?)


Cheers,
Mike Brudenell

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *



mountpoint.c.patch
Description: Binary data




[Dovecot] Getting quotas from an NFS-mounted filestore

2007-07-16 Thread Mike Brudenell

Greetings -

I was just wondering what the state of play was with regard to  
reading filestore-based quotas when the mailstore is NFS-mounted?


Timo mentioned a little while ago that he'd be including it:
http://www.dovecot.org/list/dovecot/2007-May/022594.html

However it isn't in 1.0.1 and isn't mentioned in the release notes  
for 1.0.2.  Is it going to be a 1.1 feature or will it appear before  
then?


I'm asking because although I can't get the rquota-rquotad patch to  
build on Solaris I have found some code on the net which does do RPC  
based quota lookups.  I'm toying with the idea of trying to modify  
rquota-rquotad to use this other code.  However this is out of my  
area of knowledge so it will be an uphill battle ... so if reading  
quotas from NFS-mounted filestores will be coming Real Soon Now I'll  
probably just wait instead.


So any ideas as to timescales, please?

Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] Maildir configuration vs ZFS and snapshots...

2007-07-04 Thread Mike Brudenell

Greetings -

I'm intrigued by your suggestion of providing read-only access to  
mail folders in the snapshots...


On 3 Jul 2007, at 15:20, Peter Eriksson wrote:


Now,I can access the normal Maildir INBOX and folders nicely via
Dovecot/IMAP. The thing is I'd like to be able to access the
snapshot too via Dovecot/IMAP somehow. My first idea was to use
Dovecots namespace feature like this in the config file:

namespace private {
  separator = /
  prefix = snapshot/
  location = maildir:~/.zfs/snapshot/2007-06-18/Maildir:CONTROL=~/.dovecot/control:INDEX=~/.dovecot/index
}

This works great for the mail folders inside the Maildir - but not  
for the INBOX which isn't displayed at all... (Ie, under the  
snapshot prefix I only see Trash etc when looking at the folder  
from an IMAP client).


We are switching from the UW IMAP server with local disks to Dovecot  
using a NetApp filer.  The filer provides snapshots too, although  
with a slightly different layout as the .snapshot directory appears  
in each and every directory with its files and folders.  For example:


~/Maildir/.snapshot/nightly.0/...
~/Maildir/.snapshot/nightly.1/...
~/Maildir/.snapshot/nightly.2/...
etc

where the .0 suffixed directory is the most recent etc.

In your example your snapshots seem to be in directories named after  
a specific date (when the snapshot was taken?).  I was wondering if  
you had some clever way of providing access to these; in your example  
above you hard-code the names into the Dovecot configuration file,  
which doesn't seem practical for a live system?


Basically I'm trying to mimic your setup and was wondering if you had  
already solved the problem of getting down from your snapshot  
directory through the intermediate level to the Maildir itself.


In passing, would I need to use a separate CONTROL and INDEX storage  
area for these, or will the files for the snapshot folders live  
happily alongside those for the live area?  (I'm not too sure how the  
organisation of these storage areas works when namespaces and  
prefixes are concerned!)


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] Maildir configuration vs ZFS and snapshots...

2007-07-04 Thread Mike Brudenell

Greetings -

On 4 Jul 2007, at 11:21, Peter Eriksson wrote:

Uh.. In your case I think it would be quite easy. I think this should work:


  namespace private {
    separator = /
    prefix = snapshot-0/
    location = maildir:~/Maildir/.snapshot/nightly.0:CONTROL=~/.dovecot/control:INDEX=~/.dovecot/index
  }
  namespace private {
    separator = /
    prefix = snapshot-1/
    location = maildir:~/Maildir/.snapshot/nightly.1:CONTROL=~/.dovecot/control:INDEX=~/.dovecot/index
  }


I've just been trying something like this and it does indeed work.   
(What I was after was a way of not having to list each nightly.0,  
nightly.1, etc separately, and instead be able to navigate down to  
them: akin to your comments about the design of Maildir not being  
convenient in this respect.)




etc... You won't see the INBOX either (like me) unless you create the
symlink suggested (.INBOX -> .) before taking the snapshots.


This is indeed the case.  One downside is that our mail clients then see
THREE inboxes :-( ... one is the account's real inbox, one the symlink, and
the third is the one in Outlook's Outlook Today section of the list.  (I
hate the way this is always shown expanded but the IMAP section shown
collapsed: we get so many people clicking on the Outlook Today inbox
thinking it's the one on the IMAP servers. :-( )



I'd really like to be able to present all the available snapshots over
IMAP if possible. However, I'm a bit worried that the mail client will
walk through all the mailboxes in all the snapshots when accessing the
server...


My testing has shown up a couple of problems...

1)  My mail client (Apple's Mail) does indeed walk through the snapshot
    hierarchy which, unfortunately, leads to:
  * Mail duplicating its local cache of all the messages on my Mac's hard
    disk, and
  * Dovecot creating index files for all the mailboxes in the snapshot.

The former takes time and disk space; the latter effectively doubles the
space used by someone's index files when you add a snapshot.  And of course
if you have more than one snapshot namespace each has its own set of index
files.


2)  Dovecot logs lots of error messages as my mail client traverses  
the folders in the snapshot.  This is because it is trying to use  
utime() to update the timestamp of the directory in a read-only  
filestore.



In passing, would I need to use a separate CONTROL and INDEX  
storage area for these, or will the files for the snapshot folders  
live happily alongside those for the live area?  (I'm not too sure  
how the organisation of these storage areas works when namespaces  
and prefixes are concerned!)


You need to point the CONTROL and INDEX storage things outside of  
the readonly snapshots or Dovecot will complain when it wants to  
update the indexes...


It looks like the CONTROL and INDEX settings for the snapshot  
shouldn't use the same setting as is used by the live mailboxes.  I  
took the precaution of pointing them at a directory called SNAPSHOT  
hanging off the normal INDEX storage location and found that  
immediately below it I got a set of Maildirs for the folders in the  
snapshot.


Thus if I had pointed the INDEX setting for the snapshot at the same  
storage location as for the live folders I think I'd have had two  
different sets of folders -- the live and its equivalent in the  
snapshot -- sharing the same index file.  If so I suspect the results  
would not have been good!  (The same goes for CONTROL files, of course.)
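
To make that concrete, the snapshot namespace I tested ended up looking
roughly like this -- the ~/.dovecot paths are from my own setup and purely
illustrative:

namespace private {
  separator = /
  prefix = snapshot-0/
  location = maildir:~/Maildir/.snapshot/nightly.0:CONTROL=~/.dovecot/control/SNAPSHOT:INDEX=~/.dovecot/index/SNAPSHOT
}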



Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] Deferring, instead of rejecting, messages when at quota

2007-04-26 Thread Mike Brudenell

Greetings -

On 26 Apr 2007, at 10:36, Magnus Holmgren wrote:

In Exim, you can set the temp_errors option of the transport. I  
don't know
what code is returned in this case, but I have temp_errors =  
73:75:77:78:89.
If Exim takes care of all the hard errors (like nonexisting  
user...) you can

almost set temp_errors = *.
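
(For reference, I read Magnus's suggestion as an Exim pipe transport along
these lines; the transport name and the path to deliver are illustrative,
not taken from our actual configuration:

dovecot_deliver:
  driver = pipe
  command = /usr/local/libexec/dovecot/deliver -d $local_part@$domain
  temp_errors = 73 : 75

where 75 is the usual EX_TEMPFAIL exit code.)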


Hmmm... I'm continuing to peer at the source code of Dovecot's deliver and
related libraries and have a feeling it's not going to be as
straightforward as the above.


It looks as though deliver calls routines in lib-storage to try and write
out the message.  If these detect a problem they flag the error, and the
routine they use to do so indicates it is not a temporary one.  Deliver
sees that the write failed and, since temporary_error is FALSE, itself
generates and sends the error response.


I'm currently trying to work out how this behaviour can be changed  
without affecting other things.


For example it probably isn't safe to change the function used by the  
storage routines to flag that any and every error logged through it  
is a temporary_error: there may well be other places it gets called  
where the error is NOT temporary.


Likewise I'm not sure if I can simply change the source of deliver to  
always return a temporary failure code because again there may be  
other problems that occur which should be permanently failed.


I think I need help from Timo on this one, unless someone has already  
worked this one out previously?


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] imaptest10 and stalled messages

2007-03-30 Thread Mike Brudenell

Hi Timo et al,

On 29 Mar 2007, at 17:52, Mike Brudenell wrote:

But then the ODDNESS starts.  I'm still a little hazy how to  
interpret the output of imaptest, but every now and then one or two  
processes stall for several seconds.  When this happens activity  
seems to grow quieter in the imaptest output: number of operations  
per second decreases and the N/N count drops.  Eventually it clears  
somehow and things spring back into life...


Further to my message to the list yesterday I'm still baffled and  
concerned as to why imaptest10 shows stalls in SELECT occasionally  
and, when it does so, it looks like all other clients are blocked or  
something.


When the Maildir mailstore is mounted over NFS from our NetApp filer to the
Solaris 10 box that Dovecot and imaptest10 are running on, the problem
shows.


Switch to using local disk for the mailstore and run imaptest10 with  
the same number of clients and there are no stalls.  But increase the  
number of simulated clients (from 50 to 100) and they come back, but  
not too badly at that setting.


So it looks like something to do with when the system gets really  
loaded...


I think the things I'd like to know are:

1.  Are other people on the List running Dovecot with a Maildir mailstore
    NFS-mounted from NetApp filers and having it work OK?
    (If you are using Solaris 10, what mount options are you using?)

2.  How much real-life user load does running imaptest10 with 50 simulated
    clients equate to?  I assume each simulated user is hammering away at
    its IMAP connection, so should equate to several (how many?) normal
    users in real-life operation?

3.  I'm concerned by the N/M number at the end of the imaptest10 output
    lines plummeting whenever one process goes into this stalled state:
    it almost suggests that the only thing the other processes can do is
    logout?  Are other sessions really being blocked, or is it just
    imaptest10 behaving like this?

    As far as I can tell I *think* it's only imaptest10 getting blocked:
    when it is happening for an extended period I can quickly manually
    Telnet in to port 143, login as one of the test usernames and SELECT
    INBOX just fine.  So it's probably NOT all of the Dovecot processes
    getting blocked, but rather imaptest10, which drives them.  Does that
    sound plausible?

Help!  (Concerned, and hoping we're not going down the wrong road...  
can anyone reassure me about the Solaris 10/NFS/NetApp filer setup?)


With thanks,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *


EXAMPLE OUTPUT FROM IMAPTEST10
==

  24   13   20   28   25   36   10   14   23   24   48  50/ 50
  32   15   15   28   28   46   18   16   35   32   64  50/ 50
  21   13   14   27   30   3298   21   22   42  50/ 50
- 45. stalled for 16 secs in SELECT
   05267   27   10   18   26   28   58  21/ 21  ===
  46   22   25   40   38   388   12   21   17   34  50/ 50
  28   11   13   24   22   3296   24   28   56  50/ 50
  20   10   15   24   27   38   13   10   24   20   40  50/ 50
  299   11   25   21   33   13   14   28   29   58  50/ 50
  28   17   12   32   37   43   17   15   27   28   56  50/ 50

and

  34   15   13   28   27   47   16   20   34   34   68  50/ 50
  18   13   16   27   25   239   13   17   18   36  50/ 50
  2198   22   25   36   17   13   23   22   42  50/ 50
- 30. stalled for 16 secs in SELECT
- 37. stalled for 16 secs in SELECT
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
  30%  5%
   05277   28   11   12   28   29   60  20/ 20  ===
  41   16   18   31   26   2464   12   11   22  50/ 50
  299   15   28   29   42   10   13   27   29   58  50/ 50




[Dovecot] imaptest10 and stalled messages

2007-03-28 Thread Mike Brudenell

Greetings -

I've now got as far as playing with the imaptest10 test utility to  
see if I can stress-test our development server.  imaptest10 is built  
against dovecot-1.0rc28


It may just be that I'm excessively heavy handed, but when I let  
imaptest10 rip with the command...


./imaptest10 user=test%03d host=testserver.imap.york.ac.uk clients=50  
mbox=./dovecot.mbox msgs=1000 secs=30 logout=10


...after 15 seconds or so I started getting output along the lines of:

Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100%  10%
  30%  5%
   20111000000  50/ 50
   20011000000  50/ 50
   00220111110  50/ 50
   20220111112  50/ 50
   00110000000  50/ 50
- 16. stalled for 16 secs in LOGIN
- 17. stalled for 16 secs in LOGIN
- 18. stalled for 16 secs in LOGIN
   21121000000  50/ 50
- 17. stalled for 17 secs in LOGIN
- 18. stalled for 17 secs in LOGIN
- 22. stalled for 16 secs in LOGIN
- 23. stalled for 16 secs in LOGIN
- 24. stalled for 16 secs in LOGIN
   10110000000  50/ 50
- 17. stalled for 18 secs in LOGIN
- 18. stalled for 18 secs in LOGIN
- 22. stalled for 17 secs in LOGIN
- 23. stalled for 17 secs in LOGIN
- 24. stalled for 17 secs in LOGIN
- 28. stalled for 16 secs in LOGIN
- 29. stalled for 16 secs in LOGIN
- 30. stalled for 16 secs in LOGIN
- 31. stalled for 16 secs in LOGIN
- 32. stalled for 16 secs in LOGIN

The stalled log lines continue until the test completes.  I'm  
assuming it's just that I'm pushing our hardware beyond its  
reasonable limits: it is only an oldish development box...


Sun v120 with 1 650MHz SPARC processor and 1Gbyte RAM

Dovecot is using shadow password file authentication. The mail  
folders are in Maildirs, NFS-mounted from a NetApp filer, and each  
account's INBOX has around 1000 messages in it.  Control and Index  
files are on local disk.


By the end of the 30-second test run the load average has climbed to
around 11.


I don't know how active the imaptest10 utility is when it's  
running, but do you think I should be concerned by the stalled  
messages?  That is, do you think I have something configured amiss?   
Or is it likely just to be I'm expecting too much of our hardware?   
(The real system will have beefier Suns with Opteron-based CPUs, more  
memory, etc)


With thanks,
Mike B-)

PS.  I forgot to say: despite all this activity there's not a single  
error logged by Dovecot in the (admittedly short at this stage) test  
runs.


--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] imaptest10 and stalled messages

2007-03-28 Thread Mike Brudenell

On 28 Mar 2007, at 18:15, Timo Sirainen wrote:

- 16. stalled for 16 secs in LOGIN


Try changing http://wiki.dovecot.org/LoginProcess settings, especially
login_process_per_connection=no and see how much it helps.



Sorry I forgot to give my current login and auth settings (which, I  
have to admit, I thought I'd tuned to be reasonable/generous for the  
eventual environment)...


login_process_size = 64
login_process_per_connection = no
login_processes_count = 10
#login_max_processes_count = 128
#login_max_connections = 256

#auth_process_size = 256
auth_cache_size = 1024
auth_cache_ttl = 600
#auth_worker_max_count = 30

auth default {
  passdb shadow {
# [blocking=yes] - See userdb passwd for explanation
#args =
  }

  userdb passwd {
    # [blocking=yes] - By default the lookups are done in the main
    # dovecot-auth process. This setting causes the lookups to be done in
    # auth worker processes. Useful with remote NSS lookups that may block.
    # NOTE: Be sure to use this setting with nss_ldap or users might get
    # logged in as each others!
    #args =
  }
}



Also you could use logout=0 parameter to imaptest to avoid the
login/logout overhead.


Re-running the test with logout=0 didn't help :-( ...

% ./imaptest10 user=test%03d host=testserver.imap.york.ac.uk clients=50 mbox=./dovecot.mbox msgs=1000 secs=30 logout=0

Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe
100%  50%  50% 100% 100% 100%  50% 100% 100% 100%
  30%  5%
   5115100000  25/ 25 [50%]
   3202000000  34/ 34 [68%]
   3123000000  43/ 43 [86%]
   1112000000  46/ 46 [92%]
   2113011111  50/ 50
   1000000000  50/ 50
   2213000000  50/ 50
   2111100000  50/ 50
   0101000000  50/ 50
   1111010111  50/ 50
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe
100%  50%  50% 100% 100% 100%  50% 100% 100% 100%
  30%  5%
   2013100000  50/ 50
   1001000000  50/ 50
   1111000000  50/ 50
   1000000000  50/ 50
   1122010111  50/ 50
- 12. stalled for 16 secs in LOGIN
- 13. stalled for 16 secs in LOGIN
- 15. stalled for 16 secs in LOGIN
   1012000000  50/ 50
- 12. stalled for 17 secs in LOGIN
- 13. stalled for 17 secs in LOGIN
- 15. stalled for 17 secs in LOGIN
- 17. stalled for 16 secs in LOGIN
- 19. stalled for 16 secs in LOGIN
- 20. stalled for 16 secs in LOGIN
- 21. stalled for 16 secs in LOGIN
- 24. stalled for 16 secs in LOGIN
...

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




[Dovecot] Preparing for sharing with ACLs

2007-03-22 Thread Mike Brudenell

Greetings -

I'm finalising the layout of our new mailstore ready for a trial  
service using Dovecot (switching from the UW IMAP server).  This is  
using Maildir mailboxes, changing from our current mix of MBX and  
traditional Berkeley.


One of the things we are often asked for is how someone can grant another
person access to their mailbox: eg, a Head of Department wants the
Departmental Secretary to review and reply to e-mails, but without giving
her the password.


I understand that Dovecot doesn't provide a user-interface for  
setting up or manipulating these, nor the IMAP ACL extension at this  
time, so...


Q1.  Are there plans to add support for ACLs in the future, along  
with an
 end-user accessible means of setting these up and manipulating  
them?


I also understand that it is currently possible for the Mail Admin to  
set up ACLs (globally and/or per-mailbox) and shared folders (I admit  
I'm having trouble getting my head around the latter in the Wiki a bit).


I'm hoping to avoid using the current "has to be done by the Administrator"
setup, and instead want to plan for any future end-user interface.


We are using filestore quotas for the Maildirs, so at present a  
user's Maildir directories and files are owned by their username  
(UNIX uid) and group (UNIX gid).


  * Naturally for filestore quotas to continue to work items need to
continue to be owned by the person's username (UNIX uid).

When end-user support for shared mailboxes and ACLs comes along one day
(hopefully!) I assume two levels of access control are needed:

1.  At the filestore level the other authorised users will need read
    and/or write access to the directories and files comprising the
    Maildir, and

2.  Suitable ACLs will be needed to grant access via Dovecot to authorised
    persons, but not to other random people.

So looking to the future, I'm therefore thinking that instead of having
each user's Maildir directories and files owned by their UNIX uid and gid I
should instead have them owned by their UNIX uid and a common-to-everyone
UNIX gid.  Eg,

drwxrwx---   user1:mail   directoryname
-rw-rw----   user1:mail   filename

I realise there is an element of risk here, as we would be relying on  
Dovecot's security to limit access so that only authorised users can  
access a given person's mailbox.


Is this the right approach to adopt?
Or is there a better way of (one day) enabling Person A to share  
their mailbox to Person B but not Person C?


(We need a solution that is general and based on ACLs, not one that  
relies on our creating custom UNIX groups and assigning people's  
usernames to these.)


I've read the Wiki pages on ACLs and Shared Folders, but am having  
trouble putting the information together in my mind to (one day)  
solve this particular requirement.  Can anyone shed any light, please?
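
(For anyone pondering the same thing: my current -- quite possibly wrong --
reading of the Wiki is that the ACL plugin's vfile backend wants a
per-mailbox dovecot-acl file containing entries along the lines of

user=secretary lrws

with the rights letters and username here being illustrative guesses I have
not yet tested, sitting alongside filestore permissions like those above.)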


Cheers,
Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *




Re: [Dovecot] Alerts and pre-authenticated connections

2007-03-15 Thread Mike Brudenell

Many thanks, Timo...

On 15 Mar 2007, at 13:59, Timo Sirainen wrote:


On Thu, 2007-03-15 at 13:47 +, Mike Brudenell wrote:

Can someone explain what I'm doing wrong, please, and how to use the
 dovecot --exec-mail imap
trick to do pre-authenticated connections whilst getting logging out
in the right place?  (Ideally as specified in the configuration file,
and not having to fiddle on manually setting the LOGFILE environment
variable.)


http://dovecot.org/list/dovecot-cvs/2007-March/008169.html


...that does indeed get the logging going to the right place.

Unfortunately starting a pre-authenticated session doesn't actually  
create a log entry to say someone has connected or who it was, or the  
IP address they came from (eg, in the REMOTEHOST shell environment  
variable for an rsh session).  It would be helpful to get something  
logged as for a normal connection, say:


dovecot: Mar 15 14:55:08 Info: imap: Login: user=pmb1,
method=PREAUTH, rip=144.32.226.226, lip=144.32.128.132
   ^^^  ^^--- from REMOTEHOST?

In contrast, closing a pre-authenticated session does log something,  
but only

  imap(pmb1): Info: Disconnected: Logged out

It seems to not be using the mail_log_prefix template which normally  
would log the above with a timestamp and (in my customised  
configuration file) pid:
  dovecot: Mar 15 14:34:39 Info: IMAP(pmb1)[19021]: Disconnected:  
Logged out


This lack of logging is a bit of a pain as we use the IMAP logfiles  
to track down people's reading sessions if we ever need to  
investigate a problem.


Hopeful smile...

Mike B-)

--
The Computing Service, University of York, Heslington, York Yo10 5DD, UK
Tel:+44-1904-433811  FAX:+44-1904-433740

* Unsolicited commercial e-mail is NOT welcome at this e-mail address. *