Re: [Dovecot] Maildir over NFS

2010-08-07 Thread Stan Hoeppner
Noel Butler put forth on 8/6/2010 4:29 PM:

 Actually you will not notice any difference. How do you think all the
 big boys do it now :)  Granted some opted for the SAN approach over NAS,
 but for mail, NAS is better way to go IMHO and plenty of large services,
 ISP, corporations, and universities etc, all use NAS.

The protocol overhead of the NFS stack is such that one way latency is in the
1-50 millisecond range, depending on specific implementations and server load.
 The one way latency of a fibre channel packet is in the sub 100 microsecond
range and is fairly immune to system load.  The performance of fibre channel
is equal to local disk plus approximately one millisecond of additional
effective head seek time due to switch latency, SAN array controller latency,
and latency due to cable length.  A filesystem block served out of SAN array
controller cache returns to the kernel quicker than a block read from local
disk that is not in cache because the former suffers no mechanical latency.
Due to the complexity of the stack, NFS is far slower than either.
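
To put those numbers in rough perspective (an illustration only, using the
figures above and assuming, purely for the sake of example, ten synchronous
metadata round trips per message delivery): at 1 ms per NFS round trip that is
~10 ms of protocol latency per message, and at 50 ms it is half a second; over
fibre channel at ~0.1 ms per hop the same ten operations add about 1 ms.  The
exact operation count varies by mailbox format and client, but the ratio is
what matters.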

Those who would recommend NFS/NAS over fibre channel SAN have no experience
with fibre channel SANs.  I'm no fan of iSCSI SANs due to the reliance on
TCP/IP for transport, and the low performance due to stack processing.
However, using the same ethernet switches for both, iSCSI SAN arrays will also
outperform NFS/NAS boxen by a decent margin.

Regarding the OP's case, given the low cost of new hardware, specifically
locally attached RAID and the massive size and low cost of modern disks, I'd
recommend storing user mail on the new mail host.  It's faster and more
cost-effective than either NFS or a SAN.  Unless his current backup solution requires
user mail dirs to be on that NFS server for nightly backup, local disk is
definitely the way to go.  Four 300GB 15k SAS drives on a good PCIe RAID card
w/256-512MB cache in a RAID 10 configuration would yield ~350-400MB/s of real
filesystem bandwidth, seek throughput equivalent to a 2 disk stripe--about 600
random seeks/s, 600GB of usable space, ability to sustain two simultaneous
disk failures (assuming 1 failure per mirror pair), and cost effectiveness.
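
The arithmetic behind those figures, roughly: usable space = 4 x 300 GB / 2
(every block is mirrored) = 600 GB; random seek throughput scales with the
number of independent mirror pairs, so 2 pairs x ~300 seeks/s per 15k spindle
is about 600 seeks/s; and because each pair holds two copies, the array
survives two simultaneous failures as long as they hit different mirror pairs.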

-- 
Stan


Re: [Dovecot] dovecot 2.0 rc4, doveadm: referenced symbol not found

2010-08-07 Thread Schmidt

Am 06.08.2010 18:09, schrieb Timo Sirainen:

On Fri, 2010-08-06 at 16:52 +0200, Burckhard Schmidt wrote:


/usr/dovecot-2/bin/doveadm -Dv expunge -u userx mailbox AutoCleanSpam
savedbefore 30d


You enabled debug output with -D.


doveadm(root): Debug: Loading modules from directory:
/usr/dovecot-2/lib/dovecot/doveadm
doveadm(root): Error:
dlopen(/usr/dovecot-2/lib/dovecot/doveadm/lib10_doveadm_expire_plugin.so) 
failed:
ld.so.1: doveadm: fatal: relocation error: file
/usr/dovecot-2/lib/dovecot/doveadm/lib10_doveadm_expire_plugin.so:
symbol expire_set_init: referenced symbol not found


This is debug output saying that you don't have expire plugin enabled.


Isn't this sufficient:
dovecot -n

# 2.0.rc4: /usr/dovecot-2/etc/dovecot/dovecot.conf
# OS: SunOS 5.10 sun4v
...
dict {
  expire = sqlite:/usr/dovecot-2/etc/dovecot/dovecot-dict-sql.conf.ext
}
...
plugin {
...
  expire = AutoCleanSpam
  expire2 = test
  expire_dict = proxy::expire
...
}
...
protocol lda {
  mail_plugins = autocreate sieve expire
}
protocol imap {
  mail_plugins = autocreate acl imap_acl expire
}



Of course, it's shown with Error: prefix.. I'm getting tired of people
reporting this,

sorry

 so I suppose I'll have to change the message..





--
regards --- Burckhard Schmidt


Re: [Dovecot] dovecot 2.0 rc4, doveadm: referenced symbol not found

2010-08-07 Thread Pascal Volk
On 08/07/2010 01:06 PM Schmidt wrote:
 Am 06.08.2010 18:09, schrieb Timo Sirainen:
 This is debug output saying that you don't have expire plugin enabled.
 
 Isn't this sufficient:
 dovecot -n
 
 # 2.0.rc4: /usr/dovecot-2/etc/dovecot/dovecot.conf
 # OS: SunOS 5.10 sun4v
 ...
 dict {
expire = sqlite:/usr/dovecot-2/etc/dovecot/dovecot-dict-sql.conf.ext
 }
 ...
 plugin {
 ...
expire = AutoCleanSpam
expire2 = test
expire_dict = proxy::expire
 ...
 }
 ...
 protocol lda {
mail_plugins = autocreate sieve expire
 }
 protocol imap {
mail_plugins = autocreate acl imap_acl expire
 }

No, it's not sufficient. You are using doveadm, not lda/imap.
Doveadm can only use plugins from the global mail_plugins setting.
You could set something like:

mail_plugins = autocreate expire
protocol lda {
  mail_plugins = $mail_plugins sieve
}
protocol imap {
  mail_plugins = $mail_plugins acl imap_acl
}
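
With expire added to the global mail_plugins like this, the command from the
start of the thread (/usr/dovecot-2/bin/doveadm -Dv expunge -u userx mailbox
AutoCleanSpam savedbefore 30d) should find the expire_set_init symbol and no
longer hit the relocation error, because doveadm then loads the expire plugin
itself and not just its doveadm hook.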


Regards,
Pascal
-- 
The trapper recommends today:
http://kopfkrebs.de/mitarbeiter/mitarbeiter_der_woche.html


Re: [Dovecot] dovecot 2.0 rc4, doveadm: referenced symbol not found

2010-08-07 Thread Schmidt

Am 07.08.2010 13:13, schrieb Pascal Volk:

On 08/07/2010 01:06 PM Schmidt wrote:

Am 06.08.2010 18:09, schrieb Timo Sirainen:

This is debug output saying that you don't have expire plugin enabled.



No, it's not sufficient. You are using doveadm, not lda/imap.
Doveadm can only use plugins from the global mail_plugins setting.
You could set something like:

mail_plugins = autocreate expire


ok, thanks!


protocol lda {
   mail_plugins = $mail_plugins sieve
}
protocol imap {
   mail_plugins = $mail_plugins acl imap_acl
}


Regards,
Pascal



--
regards --- Burckhard Schmidt



Re: [Dovecot] Maildir over NFS

2010-08-07 Thread CJ Keist

All,
Thanks for all the information.  I think I'm leaning towards a 
locally attached fiber disk array.  A couple of advantages I see: one, it 
will be faster than NFS; second, it will allow us to separate user home 
directory disk quotas from email disk quotas, something we have been 
wanting to do for a while.


Again thanks for all the view points and experiences with Maildir over NFS.

On 8/7/10 4:06 AM, Stan Hoeppner wrote:

Noel Butler put forth on 8/6/2010 4:29 PM:

   

Actually you will not notice any difference. How do you think all the
big boys do it now :)  Granted some opted for the SAN approach over NAS,
but for mail, NAS is better way to go IMHO and plenty of large services,
ISP, corporations, and universities etc, all use NAS.
 

The protocol overhead of the NFS stack is such that one way latency is in the
1-50 millisecond range, depending on specific implementations and server load.
  The one way latency of a fibre channel packet is in the sub 100 microsecond
range and is fairly immune to system load.  The performance of fibre channel
is equal to local disk plus approximately one millisecond of additional
effective head seek time due to switch latency, SAN array controller latency,
and latency due to cable length.  A filesystem block served out of SAN array
controller cache returns to the kernel quicker than a block read from local
disk that is not in cache because the former suffers no mechanical latency.
Due to the complexity of the stack, NFS is far slower than either.

Those who would recommend NFS/NAS over fibre channel SAN have no experience
with fibre channel SANs.  I'm no fan of iSCSI SANs due to the reliance on
TCP/IP for transport, and the low performance due to stack processing.
However, using the same ethernet switches for both, iSCSI SAN arrays will also
outperform NFS/NAS boxen by a decent margin.

Regarding the OP's case, given the low cost of new hardware, specifically
locally attached RAID and the massive size and low cost of modern disks, I'd
recommend storing user mail on the new mail host.  It's faster and more cost
effective than both NFS/SAN.  Unless his current backup solution requires
user mail dirs to be on that NFS server for nightly backup, local disk is
definitely the way to go.  Four 300GB 15k SAS drives on a good PCIe RAID card
w/256-512MB cache in a RAID 10 configuration would yield ~350-400MB/s of real
filesystem bandwidth, seek throughput equivalent to a 2 disk stripe--about 600
random seeks/s, 600GB of usable space, ability to sustain two simultaneous
disk failures (assuming 1 failure per mirror pair), and cost effectiveness.

   


--
C. J. Keist                     Email: cj.ke...@colostate.edu
UNIX/Network Manager            Phone: 970-491-0630
Engineering Network Services    Fax:   970-491-5569
College of Engineering, CSU
Ft. Collins, CO 80523-1301

All I want is a chance to prove 'Money can't buy happiness'



Re: [Dovecot] dot-containing foldernames \HasNoChildren bug ?

2010-08-07 Thread Samuel Kvasnica
 Hi Timo,

in addition to my last message regarding public folders I did some more
testing on the shared folder issue.

It seems like there are 2 separate issues, both related to misbehaving
LSUB command but different.

For a shared folder a command like e.g.:

lsub "" "Shared/dot.user/dot_share"

will return nothing, even if this folder is subscribed, because
list_escape (for some strange reason) gets an empty prefix and
wrongly escapes the user name dot.user to dot\2euser.

For the LIST command on other hand it works, because it first fails with
zero prefix but then all namespaces are probed which
will succeed for prefix Shared/dot.user. The difference seems to be
the condition in cmd-list.c, list_namespace_init()
testing for some subscription flags.
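
An illustrative exchange (untested; mailbox names taken from this thread, the
response details are only a sketch of the behaviour described above):

  a1 LSUB "" "Shared/dot.user/dot_share"
  a1 OK Lsub completed.
  a2 LIST "" "Shared/dot.user/dot_share"
  * LIST (\HasNoChildren) "/" "Shared/dot.user/dot_share"
  a2 OK List completed.

i.e. LSUB returns no entry for the subscribed folder while LIST still finds it.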

Could you give some hint as to whether a quick workaround is possible?  I
have a feeling the listescape plugin is implemented at the wrong place;
wouldn't it be cleaner somewhere directly above the storage layer, or as a
virtual storage interface?

regards,

Sam

On 08/06/2010 10:40 PM, SK wrote:
 On 08/06/2010 09:04 PM, Timo Sirainen wrote:
 On Fri, 2010-08-06 at 20:41 +0200, SK wrote:
   
 Ok, now it does not crash anymore,  But...: I cannot see shared folders
 for users containing a dot (e.g. the shared.user above)
 on open-exchange webclient. If I disable the listescape plugin, the
 folder is visible. Something more is still broken. Any idea where to look ?
 
 I'm not sure if this is fixable. I'll look into it later..

   
 Well, that would be an essential showstopper for the migration from
 scalix to dovecot in our setup.
 But I don't believe it is unfixable. I did some tcpdump logging in the
 meantime; the open-xchange client
 seems to rely heavily on the LSUB command, unlike e.g.
 thunderbird, which seems to use mostly LIST.

 It appears that the dovecot replies for LSUB commands with '%' wildcard
 are different when listescape is enabled.
 Please have a look at the attached comparison. At least for the public
 folders it is pretty clear (even when not using dots there) and
 subfolders in public area are broken as well. I cannot find the clear
 difference for the shared folder case though, but more info comes later
 on... the public folder bug might be the key...



[Dovecot] quotactl failed with disk quotas and dovecot 2.0

2010-08-07 Thread Patrick McLean
Hi,

I have a machine running dovecot 2.0 rc4 and I can't seem to get the
imap_quota plugin working, every time I try to read the quotas, I get
the following in my log:

 Aug  7 11:58:43 talyn dovecot: imap(chutz): Error: quotactl(Q_GETQUOTA, 
 rootfs) failed: No such file or directory
 Aug  7 11:59:53 talyn dovecot: imap(chutz): Error: quotactl(Q_GETQUOTA, 
 rootfs) failed: No such file or directory
 Aug  7 11:59:54 talyn dovecot: imap(chutz): Error: quotactl(Q_GETQUOTA, 
 rootfs) failed: No such file or directory

(Once for each try)

Here is the output of dovecot -n:
 # 2.0.rc4: /etc/dovecot/dovecot.conf
 # OS: Linux 2.6.34-gentoo-r3 i686 Gentoo Base System release 2.0.1 
 listen = *
 mail_location = mdbox:~/.mdbox
 mail_plugins = acl quota zlib trash imap_quota
 managesieve_notify_capability = mailto
 managesieve_sieve_capability = comparator-i;octet comparator-i;ascii-casemap 
 fileinto reject envelope encoded-character vacation subaddress 
 comparator-i;ascii-numeric relational regex imap4flags copy include variables 
 body enotify environment mailbox date spamtest spamtestplus virustest
 passdb {
   args = *
   driver = pam
 }
 plugin {
   quota = fs:User quota:user
   quota_warning = storage=80%% quota-warning 80 %u
   sieve = ~/.dovecot.sieve
   sieve_dir = ~/.sieve
 }
 protocols = imap lmtp
 service auth {
   unix_listener /var/spool/postfix/private/auth {
 group = postfix
 mode = 0660
 user = postfix
   }
 }
 ssl_cert = </etc/ssl/dovecot/server.pem
 ssl_key = </etc/ssl/dovecot/server.key
 userdb {
   driver = passwd
 }
 verbose_proctitle = yes
 protocol lda {
   mail_plugins = sieve quota
 }


Re: [Dovecot] dot-containing foldernames \HasNoChildren bug ?

2010-08-07 Thread Samuel Kvasnica
 Hi Timo,

since I could not just let it be, I found a workaround; could you review
the patch in the attachment?

I'm going to look into the second subfolder bug...



On 08/07/2010 05:36 PM, Samuel Kvasnica wrote:
  Hi Timo,

 in addition to my last message regarding public folders I did some more
 testing on the shared folder issue.

 It seems like there are 2 separate issues, both related to misbehaving
 LSUB command but different.

 For a shared folder a command like e.g.:

 lsub "" "Shared/dot.user/dot_share"

 will return nothing, even if this folder is subscribed because
 list_escape (out of some strange reason) gets an empty prefix and
 wrongly escapes the user name dot.user to dot\2euser.

 For the LIST command on other hand it works, because it first fails with
 zero prefix but then all namespaces are probed which
 will succeed for prefix Shared/dot.user. The difference seems to be
 the condition in cmd-list.c, list_namespace_init()
 testing for some subscription flags.

 Could you give some hint, whether a quick workaround is possible ? I
 have a feeling the listescape plugin is implemented at wrong place,
 somewhere directly above the storage stuff or as an virtual storage
 interface would be cleaner ?

 regards,

 Sam


--- listescape-plugin.c.orig	2010-08-06 20:02:57.0 +0200
+++ listescape-plugin.c	2010-08-07 19:19:23.059376844 +0200
@@ -60,23 +63,28 @@
 		str_append_n(esc, str, i);
 		str += i;
 	}
-
 	if (*str == '~') {
 		str_printfa(esc, "%c%02x", mlist->escape_char, *str);
 		str++;
 	}
+	i_debug("list_escape entry, #3, sep=%c, i=%i, str=%s", ns->sep, i, str);
 	for (; *str != '\0'; str++) {
 		if (*str == ns->sep) {
 			if (!vname)
 				str_append_c(esc, ns->list->hierarchy_sep);
 			else
 				str_append_c(esc, *str);
-		} else if (*str == ns->list->hierarchy_sep ||
-			   *str == mlist->escape_char || *str == '/')
+		} else {
+			if (*str == ns->list->hierarchy_sep || (mlist &&  // SK mlist was zero check added !
+			    *str == mlist->escape_char) || *str == '/') {

Re: [Dovecot] quotactl failed with disk quotas and dovecot 2.0

2010-08-07 Thread Patrick McLean
This is on the root filesystem on the machine, if that has any bearing
on the problem.

On 07/08/10 12:03 PM, Patrick McLean wrote:
 Hi,
 
 I have a machine running dovecot 2.0 rc4 and I can't seem to get the
 imap_quota plugin working, every time I try to read the quotas, I get
 the following in my log:
 
 Aug  7 11:58:43 talyn dovecot: imap(chutz): Error: quotactl(Q_GETQUOTA, 
 rootfs) failed: No such file or directory
 Aug  7 11:59:53 talyn dovecot: imap(chutz): Error: quotactl(Q_GETQUOTA, 
 rootfs) failed: No such file or directory
 Aug  7 11:59:54 talyn dovecot: imap(chutz): Error: quotactl(Q_GETQUOTA, 
 rootfs) failed: No such file or directory
 
 (Once for each try)
 
 Here is the output of dovecot -n:
 # 2.0.rc4: /etc/dovecot/dovecot.conf
 # OS: Linux 2.6.34-gentoo-r3 i686 Gentoo Base System release 2.0.1 
 listen = *
 mail_location = mdbox:~/.mdbox
 mail_plugins = acl quota zlib trash imap_quota
 managesieve_notify_capability = mailto
 managesieve_sieve_capability = comparator-i;octet comparator-i;ascii-casemap 
 fileinto reject envelope encoded-character vacation subaddress 
 comparator-i;ascii-numeric relational regex imap4flags copy include 
 variables body enotify environment mailbox date spamtest spamtestplus 
 virustest
 passdb {
   args = *
   driver = pam
 }
 plugin {
   quota = fs:User quota:user
   quota_warning = storage=80%% quota-warning 80 %u
   sieve = ~/.dovecot.sieve
   sieve_dir = ~/.sieve
 }
 protocols = imap lmtp
 service auth {
   unix_listener /var/spool/postfix/private/auth {
 group = postfix
 mode = 0660
 user = postfix
   }
 }
 ssl_cert = </etc/ssl/dovecot/server.pem
 ssl_key = </etc/ssl/dovecot/server.key
 userdb {
   driver = passwd
 }
 verbose_proctitle = yes
 protocol lda {
   mail_plugins = sieve quota
 }


[Dovecot] pigeonhole seg fault with NULL user

2010-08-07 Thread Eray Aslan
dovecot-2.0-pigeonhole commit cac6acdc4d0e:

Crash when USER is NULL.  Backtrace is below.  Perhaps, we should check
for NULL and bail out early?

Eray


[...]
(gdb) cont
Continuing.

Program received signal SIGSEGV, Segmentation fault.
t_strcut (str=0x0, cutchar=64 '@') at strfuncs.c:277
277 for (p = str; *p != '\0'; p++) {
(gdb) bt full
#0  t_strcut (str=0x0, cutchar=64 '@') at strfuncs.c:277
        p = <value optimized out>
#1  0x4009c28b in get_var_expand_table (service=<value optimized out>,
    input=<value optimized out>) at mail-storage-service.c:478
        static_tab = {{key = 117 'u', value = 0x0, long_key = 0x400f12ce "user"},
            {key = 110 'n', value = 0x0, long_key = 0x400f0966 "username"},
            {key = 100 'd', value = 0x0, long_key = 0x400f096f "domain"},
            {key = 115 's', value = 0x0, long_key = 0x400f129c "service"},
            {key = 108 'l', value = 0x0, long_key = 0x400f0976 "lip"},
            {key = 114 'r', value = 0x0, long_key = 0x400f097a "rip"},
            {key = 112 'p', value = 0x0, long_key = 0x400f097e "pid"},
            {key = 105 'i', value = 0x0, long_key = 0x400f8aec "uid"},
            {key = 0 '\000', value = 0x0, long_key = 0x0}}
        tab = 0x85f737c
#2  0x4009c364 in user_expand_varstr (service=0x8603cf4,
    input=<value optimized out>, str=0x8603e88 "0")
    at mail-storage-service.c:501
        ret = 0x85f71f0
        __FUNCTION__ = "user_expand_varstr"
#3  0x4009c8fc in mail_storage_service_next (ctx=0x8603470,
    user=0x8603cf0, mail_user_r=0x8601e40) at mail-storage-service.c:835
        user_set = 0x8603d90
        home = <value optimized out>
        chroot = <value optimized out>
        error = <value optimized out>
        len = <value optimized out>
        temp_priv_drop = <value optimized out>
#4  0x4009dde4 in mail_storage_service_lookup_next (ctx=0x8603470,
    input=0xbfb98524, user_r=0x8601e3c, mail_user_r=0x8601e40,
    error_r=0xbfb9855c) at mail-storage-service.c:925
        user = 0x8603cf0
        ret = <value optimized out>
#5  0x08055e35 in sieve_tool_init_finish (tool=0x8601e10) at
    sieve-tool.c:210
        storage_service_flags = 144
        service_input = {module = 0x80586c7 "mail", service = 0x8601e58 "testsuite",
            username = 0x0, local_ip = {family = 0, u = {ip6 = {__in6_u = {
              __u6_addr8 = '\000' <repeats 15 times>,
              __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}},
              ip4 = {s_addr = 0}}}, remote_ip = {family = 0, u = {ip6 = {__in6_u = {
              __u6_addr8 = '\000' <repeats 15 times>,
              __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}},
              ip4 = {s_addr = 0}}}, userdb_fields = 0x0}
        username = <value optimized out>
        errstr = 0x4013d4cc "\205\300u%\203\304\020[^]\303\307D$\004"
#6  0x08054f5b in main (argc=2, argv=0x85ff1c0) at testsuite.c:147
        svinst = 0x4001da98
        scriptfile = 0x85f7058 "./tests/testsuite.svtest"
        dumpfile = 0x0
        tracefile = 0x0
        tr_config = {level = SIEVE_TRLVL_ACTIONS, flags = 0}
        sbin = <value optimized out>
        sieve_dir = <value optimized out>
        log_stdout = false
        ret = <value optimized out>
        c = <value optimized out>
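
A minimal sketch of the kind of early NULL check suggested above (illustrative
only, not a tested patch; the field and function names are taken from the
backtrace, and the real fix may well belong in the caller instead):

	/* hypothetical guard in get_var_expand_table() (mail-storage-service.c):
	   refuse to continue before t_strcut() dereferences a NULL username */
	if (input->username == NULL)
		i_fatal("mail-storage-service: input username is NULL");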


Re: [Dovecot] Maildir over NFS

2010-08-07 Thread Maxwell Reid
On Sat, Aug 7, 2010 at 3:06 AM, Stan Hoeppner s...@hardwarefreak.com wrote:

 Noel Butler put forth on 8/6/2010 4:29 PM:

  Actually you will not notice any difference. How do you think all the
  big boys do it now :)  Granted some opted for the SAN approach over NAS,
  but for mail, NAS is better way to go IMHO and plenty of large services,
  ISP, corporations, and universities etc, all use NAS.

 The protocol overhead of the NFS stack is such that one way latency is in
 the
 1-50 millisecond range, depending on specific implementations and server
 load.


Yes, I would say NFS has greater overhead, but it allows for multi-system
access where fibre channel does not, unless you're using clustered
filesystems, which have their own issues with latency and lock management.
It's also worth noting that the latencies between the storage and mail
processing nodes are an insignificant bottleneck compared to the usual
latencies between the client and mail processing nodes.


 Those who would recommend NFS/NAS over fibre channel SAN have no experience
 with fibre channel SANs.



Bold statement there sir :-)   On price/performance I'd argue NAS
is far superior and scalable, and generally there is far less management
overhead involved with NAS than with SANs; and if you have a commercial high
end NAS, you don't have to deal with the idiosyncrasies of the host file
system.

In my previous lives running large scale mail systems handling up to 500k
accounts (I work with a team which manages an infrastructure much larger
than that now), the price of latency for a single node using NAS flattens out
as the number of nodes increases.  If you're handling a smaller system with
one or two nodes and don't plan on growing significantly, DAS or SAN should
be fine.



~Max


[Dovecot] OT Thunderbird

2010-08-07 Thread Benny Pedersen


some senders on this mailing list use a version of Thunderbird that adds
more than one References header, and this IMHO breaks threading :(


--
xpoint



Re: [Dovecot] Maildir over NFS

2010-08-07 Thread Noel Butler
On Sat, 2010-08-07 at 09:17 -0600, CJ Keist wrote:

 All,
  Thanks for all the information.  I think I'm leaning towards 
 locally attached fiber disk array.  Couple of advantages I see, one it 
 will be faster than NFS, second it will allow us to separate user home 
 directory disk quotas and email disk quotas. Something we have been 
 wanting to do for awhile.
 


You truly won't notice the difference between NAS/SAN in the real
world.  We used to use SAN for mail at an old employer's; it adds
slightly more complexity, and with large volumes of mail you want
things as simple as possible, and we found NAS much more reliable.  The cost
of the units is the same, as NetApps do both NAS and SAN, but if you do
not intend to expand beyond the single server (with 3K users you've got a
long way to go unless you introduce redundancy) then an attached fiber disk
array will be a cheaper option.  Even the low end FAS 2k series we use for
web was about $30K (to get in this country anyway), obviously much
cheaper in the U.S. 

What you could do is talk to vendors and explain what you're considering;
most often they will lend you devices for a few weeks, so you
can configure your scenarios and run the tests, then evaluate which is
the best bang-for-buck way to go.  But be wary of their sales pushes: you
know what you want, they don't, and they may try to upsell you what you'll
never ever need.

That said, the one important thing you need to remember, plan for the
future.

All the best in your ventures
Cheers



Re: [Dovecot] Maildir over NFS

2010-08-07 Thread Noel Butler
On Sat, 2010-08-07 at 15:18 -0700, Maxwell Reid wrote:

 On Sat, Aug 7, 2010 at 3:06 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
 
  Noel Butler put forth on 8/6/2010 4:29 PM:
 
   Actually you will not notice any difference. How do you think all the
   big boys do it now :)  Granted some opted for the SAN approach over NAS,
   but for mail, NAS is better way to go IMHO and plenty of large services,
   ISP, corporations, and universities etc, all use NAS.
 
  The protocol overhead of the NFS stack is such that one way latency is in
  the
  1-50 millisecond range, depending on specific implementations and server
  load.
 
 
 Yes, I would say NFS has greater overhead, but it allows for multi system
 access where fiber channel does not unless you're using  clustered
 filesystems which have their own issues with latency and lock management
 it's also worth noting  that the latencies between the storage and mail
 processing nodes is an insignificant bottle neck compared to the usual
 latencies between the client and mail processing nodes.
 
 

*nods*

That's why my very first line said 'will not notice any difference'.
  

  Those who would recommend NFS/NAS over fibre channel SAN have no experience
  with fibre channel SANs.
 
 
 
 Bold statement there sir :-)   From a price performance ratio, I'd argue NAS
 is far superior and scalable, and generally there is far less management


and with large mail systems, scalability is what it is all about

Cheers



[Dovecot] expires dovecot 1.2.13

2010-08-07 Thread Jerrale G

/usr/sbin/dovecot --exec-mail ext /usr/libexec/dovecot/expire-tool

Error: dlopen(/usr/lib64/dovecot/imap/lib11_imap_quota_plugin.so) 
failed: /usr/lib64/dovecot/imap/lib11_imap_quota_plugin.so: undefined 
symbol: capability_string

Fatal: Couldn't load required plugins


[Dovecot] dovecot.conf: mechanisms = plain login cram-md5 | Windows Live Mail: CRAM-MD5 authentication failed. This could (NOT) be due to a lack of memory on your system

2010-08-07 Thread Jerrale G

/etc/dovecot.conf:

auth default {
mechanisms=plain login cram-md5
passdb {
#..

Windows Live Mail:
CRAM-MD5 authentication failed. This could be due to a lack of memory on 
your system.
Your IMAP command could not be sent to the server, due to non-network 
errors. This could, for example, indicate a lack of memory on your system.


Configuration:
   Account: Sheltoncomputers (testuser)
   Server: mail.sheltoncomputers.com
   User name: testu...@sheltoncomputers.com
   Protocol: IMAP
   Port: 993
   Secure(SSL): 1
   Code: 800cccdf

The console I'm using has 4 GB of RAM, so this dumb Windows error about a 
lack of memory is irrelevant. The other mechanisms (plain login, with and 
without TLS) work fine. The passwords are stored in MySQL as md5(password), 
and this works for clients not using cram-md5 (the client's secure login 
option). I'm trying to support a plethora of mechanisms for the convenience 
of the customer.


Jerrale G.
Senior Admin


Re: [Dovecot] dovecot.conf: mechanisms = plain login cram-md5 | Windows Live Mail: CRAM-MD5 authentication failed. This could (NOT) be due to a lack of memory on your system

2010-08-07 Thread Gary V
On 8/7/10, Jerrale G wrote:
 /etc/dovecot.conf:

 auth default {
 mechanisms=plain login cram-md5
passdb {
 #..

 Windows Live Mail:
 CRAM-MD5 authentication failed. This could be due to a lack of memory on
 your system.
 Your IMAP command could not be sent to the server, due to non-network
 errors. This could, for example, indicate a lack of memory on your system.

 Configuration:
   Account: Sheltoncomputers (testuser)
   Server: mail.sheltoncomputers.com
   User name: testu...@sheltoncomputers.com
   Protocol: IMAP
   Port: 993
   Secure(SSL): 1
   Code: 800cccdf

 The console I'm using is 4 GB ram; so, this dumb error of windoze dead mail
 is irrelevant. The other mechanisms of TLS/no tls plain login work fine. The
 passwords are stored in mysql as md5(password) but this works on others not
 using cram-md5 (secure login of the client). I'm trying to support a
 plethora of mechanisms for the convenience of the customer and .

 Jerrale G.
 Senior Admin


I'm no expert, but if I'm not mistaken, cram-md5 requires a plain text
shared secret. I quote from
http://www.sendmail.org/~ca/email/cyrus2/components.html:

Shared Secret Mechanisms - For these mechanisms, such as CRAM-MD5,
DIGEST-MD5, and SRP, there is a shared secret between the server and
client (e.g. a password). However, in this case the password itself
does not travel on the wire. Instead, the client passes a server a
token that proves that it knows the secret (without actually sending
the secret across the wire). For these mechanisms, the server
generally needs a plaintext equivalent of the secret to be in local
storage (not true for SRP).

The auth default section of my dovecot.conf looks like:

auth default {
  mechanisms = plain login cram-md5
  passdb sql {
args = /etc/dovecot/dovecot-sql.conf
  }
  passdb sql {
args = /etc/dovecot/dovecot-crammd5.conf
  }
  userdb sql {
args = /etc/dovecot/dovecot-sql.conf
  }
  user = root
  socket listen {
master {
  path = /var/run/dovecot/auth-master
  mode = 0600
  user = vmail
}
client {
  path = /var/spool/postfix/private/auth
  mode = 0660
  user = postfix
  group = postfix
}
  }
}


With an /etc/dovecot/dovecot-crammd5.conf that might look something like this:

driver = mysql
connect = host=127.0.0.1 dbname=postfix user=postfix password=password
default_pass_scheme = PLAIN
password_query = SELECT clear AS password FROM mailbox WHERE username
= '%u' AND active = '1'

With an added field to store a plain text password (I called it clear).
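
If the table doesn't have such a column yet, a hypothetical way to add it
(untested sketch; table and column names follow the query above, and existing
MD5 hashes can't be converted back, so the plain text values have to be
collected again, e.g. at the next password change):

  ALTER TABLE mailbox ADD COLUMN clear VARCHAR(255) NOT NULL DEFAULT '';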

-- 
Gary V


Re: [Dovecot] dovecot.conf: mechanisms = plain login cram-md5 | Windows Live Mail: CRAM-MD5 authentication failed. This could (NOT) be due to a lack of memory on your system

2010-08-07 Thread Jerrale G

On 8/7/2010 11:38 PM, Gary V wrote:

On 8/7/10, Jerrale G wrote:
   

/etc/dovecot.conf:

auth default {
mechanisms=plain login cram-md5
passdb {
#..

Windows Live Mail:
CRAM-MD5 authentication failed. This could be due to a lack of memory on
your system.
Your IMAP command could not be sent to the server, due to non-network
errors. This could, for example, indicate a lack of memory on your system.

Configuration:
   Account: Sheltoncomputers (testuser)
   Server: mail.sheltoncomputers.com
   User name: testu...@sheltoncomputers.com
   Protocol: IMAP
   Port: 993
   Secure(SSL): 1
   Code: 800cccdf

The console I'm using is 4 GB ram; so, this dumb error of windoze dead mail
is irrelevant. The other mechanisms of TLS/no tls plain login work fine. The
passwords are stored in mysql as md5(password) but this works on others not
using cram-md5 (secure login of the client). I'm trying to support a
plethora of mechanisms for the convenience of the customer and .

Jerrale G.
Senior Admin

 

I'm no expert, but if I'm not mistaken, cram-md5 requires a plain text
shared secret. I quote from
http://www.sendmail.org/~ca/email/cyrus2/components.html:

Shared Secret Mechanisms - For these mechanisms, such as CRAM-MD5,
DIGEST-MD5, and SRP, there is a shared secret between the server and
client (e.g. a password). However, in this case the password itself
does not travel on the wire. Instead, the client passes a server a
token that proves that it knows the secret (without actually sending
the secret across the wire). For these mechanisms, the server
generally needs a plaintext equivalent of the secret to be in local
storage (not true for SRP).

The auth default section of my dovecot.conf looks like:

auth default {
   mechanisms = plain login cram-md5
   passdb sql {
 args = /etc/dovecot/dovecot-sql.conf
   }
   passdb sql {
 args = /etc/dovecot/dovecot-crammd5.conf
   }
   userdb sql {
 args = /etc/dovecot/dovecot-sql.conf
   }
   user = root
   socket listen {
 master {
   path = /var/run/dovecot/auth-master
   mode = 0600
   user = vmail
 }
 client {
   path = /var/spool/postfix/private/auth
   mode = 0660
   user = postfix
   group = postfix
 }
   }
}


With an /etc/dovecot/dovecot-crammd5.conf that might look something like this:

driver = mysql
connect = host=127.0.0.1 dbname=postfix user=postfix password=password
default_pass_scheme = PLAIN
password_query = SELECT clear AS password FROM mailbox WHERE username
= '%u' AND active = '1'

With an added field to store a plain text password (I called it clear).

   
I guess I was just wondering how I had the MD5 in MySQL working; I'm 
aware of the salt sometimes required for MD5, but only for DIGEST-MD5. I 
realized I had guessed correctly on initial setup by having, in 
mysql.conf, default_pass_scheme = MD5; I incorrectly thought cram-md5 
had to be one of the auth default mechanisms to read MD5 from MySQL 
correctly.


I guess I need to create a new auth crammd5 {} section and set up MySQL so 
the current password field is a function of the new clear field, 
automatically creating the MD5 from the clear password field. I will use 
default_pass_scheme = CLEAR, fetch from the clear field, and set up the 
dovecot.conf auth crammd5 section with the settings you suggested.
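
One hypothetical way to keep the hashed column derived from the clear one on
the MySQL side (untested sketch; table and column names as discussed above):

  CREATE TRIGGER mailbox_md5_insert BEFORE INSERT ON mailbox
    FOR EACH ROW SET NEW.password = MD5(NEW.clear);
  CREATE TRIGGER mailbox_md5_update BEFORE UPDATE ON mailbox
    FOR EACH ROW SET NEW.password = MD5(NEW.clear);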


Thanks,

J. G.





Re: [Dovecot] Maildir over NFS

2010-08-07 Thread Stan Hoeppner
CJ Keist put forth on 8/7/2010 10:17 AM:
 All,
 Thanks for all the information.  I think I'm leaning towards locally
 attached fiber disk array.  Couple of advantages I see, one it will be
 faster than NFS, second it will allow us to separate user home directory
 disk quotas and email disk quotas. Something we have been wanting to do
 for awhile.

If you're going to do locally attached storage, why spend the substantial
additional treasure required for a fiber channel array and HBA solution?
You're looking at a minimum of $10k-$20k USD for a 'low end' FC array
solution.  Don't get me wrong, I'm a huge fan of FC SANs, but only when it
makes sense.  And it only makes sense if you have multiple hosts and you're
slicing capacity (and performance) to each host.  So at least get an entry
level Qlogic FC switch so you can attach other hosts in the future, or even
right away, once you realize what you can do with this technology.  If you're
set on the FC path, I recommend these components, all of which I've used;
they are fantastic products with great performance and support:

http://www.qlogic.com/Products/SANandDataNetworking/FibreChannelSwitches/Pages/QLogic3800.aspx
http://www.sandirect.com/product_info.php?products_id=1366

http://www.qlogic.com/Products/SANandDataNetworking/FibreChannelAdapters/Pages/QLE2460.aspx
http://www.sandirect.com/product_info.php?cPath=257_260_268&products_id=291

http://www.nexsan.com/sataboy.php
http://www.sandirect.com/product_info.php?cPath=171_208_363&products_id=1434

Configure the first 12 of the 14 drives in the Nexsan as a RAID 1+0 array, the
last two drives as hot spares.  This will give you 6TB of usable array space
sliceable to hosts as you see fit, 600 MB/s of sequential read throughput,
~1000 random seeks/sec to disk and 35k/s to cache, ultra fast rebuilds after
drive failure (10x faster than a RAID 5 or 6 rebuild).  RAID 1+0 does not
suffer the mandatory RAID 5/6 read-modify-write cycle and thus the write
throughput of RAID 1+0 has a 4:1 advantage over RAID 5 and an 8:1 advantage
over RAID 6, if my math is correct.
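
One common way to count the cost of a small (partial-stripe) write, ignoring
controller optimizations such as write-back cache and full-stripe writes:
RAID 1+0 writes the block to both members of one mirror pair (2 I/Os); RAID 5
reads the old data and old parity, then writes new data and new parity
(4 I/Os); RAID 6 adds a second parity read and write (6 I/Os).  The actual
ratios depend heavily on the workload and the controller.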

The only downside of RAID 1+0 compared to RAID 5/6 is usable space after
redundancy overhead.  With our Nexsan unit above, RAID 5 with two hot spares
will give us 11TB of usable space and RAID 6 will give us 10TB.  Most people
avoid RAID 5 these days because of the write hole silent data corruption
issue, and go with RAID 6 instead, because they want to maximize their usable
array space.

One of the nice things about the Nexsan is that you can mix & match RAID
levels within the same chassis.  Let's say you're wanting to consolidate the
Postfix queues, INBOX and user maildir files, _and_ user home directories onto
your new Nexsan array.  You want faster performance for files that are often
changing but you don't need as much total storage for these.  You want more
space for user home dirs but you don't need the fastest access times.

In this case you can create a RAID 1+0 array of the first 6 disks in the
chassis giving you 3TB of fast usable space with highest redundancy for the
mail queue, user INBOX and maildir files.  Take the next 7 disks and create a
RAID 5 array yielding 6TB of usable space, double that of the fast array. We
now have one disk left for a hot spare, and this is fine as long as you have
another spare disk or 2 on the shelf.  Speaking of spares, with one 14 drive
RAID 1+0, you could actually gain 1TB of usable storage by using no hot spares
and keeping spares on the shelf.  The likelihood of a double drive failure is
rare, and with RAID 1+0 it would be extremely rare for two failures to occur
in the same mirror pair.

-- 
Stan


Re: [Dovecot] Maildir over NFS

2010-08-07 Thread Stan Hoeppner
Maxwell Reid put forth on 8/7/2010 5:18 PM:
 On Sat, Aug 7, 2010 at 3:06 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
 
 Noel Butler put forth on 8/6/2010 4:29 PM:

 Actually you will not notice any difference. How do you think all the
 big boys do it now :)  Granted some opted for the SAN approach over NAS,
 but for mail, NAS is better way to go IMHO and plenty of large services,
 ISP, corporations, and universities etc, all use NAS.

 The protocol overhead of the NFS stack is such that one way latency is in
 the
 1-50 millisecond range, depending on specific implementations and server
 load.

 
 Yes, I would say NFS has greater overhead, but it allows for multi system
 access where fiber channel does not unless you're using  clustered
 filesystems which have their own issues with latency and lock management

Care to elaborate on this point?  The NFS server sits in user space.  All
cluster filesystem operations take place in kernel space.

Using a FC SAN array with a dovecot farm, disk blocks are read/written at
local disk speeds and latencies.  The only network communication is between
the nodes via a dedicated switch or VLAN with QOS for lock management, which
takes place in the sub 1 millisecond range, still much faster than NFS stack
processing.

Using NFS the dovecot member server file request must traverse the local user
space NFS client to the TCP/IP stack where it is then sent to the user space
NFS server on the remote machine which grabs the file blocks and then ships
them back through the multiple network stack layers.

Again, with FC SAN, it's a direct read/write to, for all practical purposes,
local disk--an FC packet encapsulating SCSI commands over a longer cable, if
you will, maybe through an FC switch hop or two, which are in the microsecond
range.

Dovecot clusters may be simpler to implement using NFS storage servers, but
they are far more performant and scalable using SAN storage and a cluster FS,
assuming the raw performance of the overall SAN/NAS systems is equal, i.e.
electronics complex, #disks and spindle speed, etc.

-- 
Stan


Re: [Dovecot] quotactl failed with disk quotas and dovecot 2.0

2010-08-07 Thread Patrick McLean
Ok, so it appears that this entry in my mtab was causing dovecot to get
confused:

 rootfs on / type rootfs (rw)

So here is a little patch to make dovecot ignore filesystems of type
"rootfs". This seems to fix the problem for me, as it then finds the
proper mtab entry for the root filesystem.
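
For reference, the kind of mtab this refers to has two entries for the same
mount point, and the first one was being picked up (an illustrative excerpt
only; the device name and filesystem type below are made up, not copied from
the affected machine):

  rootfs / rootfs rw 0 0
  /dev/sda3 / ext4 rw,noatime 0 0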

On 08/07/10 14:44, Patrick McLean wrote:
 This is on the root filesystem on the machine, if that has any bearing
 on the problem.
 
 On 07/08/10 12:03 PM, Patrick McLean wrote:
 Hi,

 I have a machine running dovecot 2.0 rc4 and I can't seem to get the
 imap_quota plugin working, every time I try to read the quotas, I get
 the following in my log:

 Aug  7 11:58:43 talyn dovecot: imap(chutz): Error: quotactl(Q_GETQUOTA, 
 rootfs) failed: No such file or directory
 Aug  7 11:59:53 talyn dovecot: imap(chutz): Error: quotactl(Q_GETQUOTA, 
 rootfs) failed: No such file or directory
 Aug  7 11:59:54 talyn dovecot: imap(chutz): Error: quotactl(Q_GETQUOTA, 
 rootfs) failed: No such file or directory

 (Once for each try)

 Here is the output of dovecot -n:
 # 2.0.rc4: /etc/dovecot/dovecot.conf
 # OS: Linux 2.6.34-gentoo-r3 i686 Gentoo Base System release 2.0.1 
 listen = *
 mail_location = mdbox:~/.mdbox
 mail_plugins = acl quota zlib trash imap_quota
 managesieve_notify_capability = mailto
 managesieve_sieve_capability = comparator-i;octet 
 comparator-i;ascii-casemap fileinto reject envelope encoded-character 
 vacation subaddress comparator-i;ascii-numeric relational regex imap4flags 
 copy include variables body enotify environment mailbox date spamtest 
 spamtestplus virustest
 passdb {
   args = *
   driver = pam
 }
 plugin {
   quota = fs:User quota:user
   quota_warning = storage=80%% quota-warning 80 %u
   sieve = ~/.dovecot.sieve
   sieve_dir = ~/.sieve
 }
 protocols = imap lmtp
 service auth {
   unix_listener /var/spool/postfix/private/auth {
 group = postfix
 mode = 0660
 user = postfix
   }
 }
 ssl_cert = </etc/ssl/dovecot/server.pem
 ssl_key = </etc/ssl/dovecot/server.key
 userdb {
   driver = passwd
 }
 verbose_proctitle = yes
 protocol lda {
   mail_plugins = sieve quota
 }
diff -ur dovecot-2.0.rc4.orig/src/lib/mountpoint.c dovecot-2.0.rc4/src/lib/mountpoint.c
--- dovecot-2.0.rc4.orig/src/lib/mountpoint.c	2010-08-08 01:01:56.0 -0400
+++ dovecot-2.0.rc4/src/lib/mountpoint.c	2010-08-08 01:16:09.0 -0400
@@ -47,6 +47,12 @@
 #  define MNTTYPE_NFS "nfs"
 #endif
 
+/* Linux sometimes has mtab entries for rootfs as well as the real root
+ * entries, this causes failures reading the quotas on root
+ */
+#ifndef MNTTYPE_ROOTFS
+#  define MNTTYPE_ROOTFS "rootfs"
+#endif
 
 int mountpoint_get(const char *path, pool_t pool, struct mountpoint *point_r)
 {
@@ -191,7 +197,8 @@
 	}
 	while ((ent = getmntent(f)) != NULL) {
 		if (strcmp(ent->mnt_type, MNTTYPE_SWAP) == 0 ||
-		    strcmp(ent->mnt_type, MNTTYPE_IGNORE) == 0)
+		    strcmp(ent->mnt_type, MNTTYPE_IGNORE) == 0 ||
+		    strcmp(ent->mnt_type, MNTTYPE_ROOTFS) == 0)
 			continue;
 
 		if (stat(ent->mnt_dir, &st2) == 0