[Dovecot] SSL Certificate Anomalies with latest code changes

2012-04-12 Thread Thomas Leuxner
Some change between bf5ae73e9475 and 584bd77c38fd seems to have broken
something in the SSL Handshake. A previously valid server certificate is
deemed invalid by various mail clients.

http://hg.dovecot.org/dovecot-2.1/rev/bf5ae73e9475 works fine while
http://hg.dovecot.org/dovecot-2.1/rev/584bd77c38fd does not.

Regards
Thomas


Re: [Dovecot] SSL Certificate Anomalies with latest code changes

2012-04-12 Thread Timo Sirainen
On 12.4.2012, at 10.11, Thomas Leuxner wrote:

 Some change between bf5ae73e9475 and 584bd77c38fd seems to have broken
 something in the SSL Handshake. A previously valid server certificate is
 deemed invalid by various mail clients.
 
 http://hg.dovecot.org/dovecot-2.1/rev/bf5ae73e9475 works fine while
 http://hg.dovecot.org/dovecot-2.1/rev/584bd77c38fd does not.

What kind of a certificate do you have? You have an intermediary cert that 
exists only in ssl_ca file? I couldn't reproduce this with a test. But anyway, 
reverted for now: http://hg.dovecot.org/dovecot-2.1/rev/f80f18d0ffa3

Now how do I fix the memory leak then?...



Re: [Dovecot] SSL Certificate Anomalies with latest code changes

2012-04-12 Thread Timo Sirainen
On 12.4.2012, at 10.43, Timo Sirainen wrote:

 On 12.4.2012, at 10.11, Thomas Leuxner wrote:
 
 Some change between bf5ae73e9475 and 584bd77c38fd seems to have broken
 something in the SSL Handshake. A previously valid server certificate is
 deemed invalid by various mail clients.
 
 http://hg.dovecot.org/dovecot-2.1/rev/bf5ae73e9475 works fine while
 http://hg.dovecot.org/dovecot-2.1/rev/584bd77c38fd does not.
 
 What kind of a certificate do you have? You have an intermediary cert that 
 exists only in ssl_ca file? I couldn't reproduce this with a test. But 
 anyway, reverted for now: http://hg.dovecot.org/dovecot-2.1/rev/f80f18d0ffa3
 
 Now how do I fix the memory leak then?...

http://hg.dovecot.org/dovecot-2.1/rev/85ad4baedd43 ?



Re: [Dovecot] SSL Certificate Anomalies with latest code changes

2012-04-12 Thread Thomas Leuxner
On Thu, Apr 12, 2012 at 10:43:22AM +0300, Timo Sirainen wrote:
 What kind of a certificate do you have? You have an intermediary cert that 
 exists only in ssl_ca file? I couldn't reproduce this with a test. But 
 anyway, reverted for now: http://hg.dovecot.org/dovecot-2.1/rev/f80f18d0ffa3
 

Thawte. They have only been doing intermediates for some time now.

$ openssl x509 -in /etc/ssl/certs/spectre_leuxner_net_2011.crt -noout -subject -issuer -dates
subject= /O=spectre.leuxner.net/OU=Go to https://www.thawte.com/repository/index.html/OU=Thawte SSL123 certificate/OU=Domain Validated/CN=spectre.leuxner.net
issuer= /C=US/O=Thawte, Inc./OU=Domain Validated SSL/CN=Thawte DV SSL CA
notBefore=May 16 00:00:00 2011 GMT
notAfter=Jun 14 23:59:59 2012 GMT

[...]

ssl_ca = </etc/ssl/certs/SSL123_CA_Bundle.pem
ssl_cert = </etc/ssl/certs/spectre_leuxner_net_2011.crt
ssl_key = </etc/ssl/private/spectre_leuxner_net_2011.key


Re: [Dovecot] SSL Certificate Anomalies with latest code changes

2012-04-12 Thread Timo Sirainen
On 12.4.2012, at 11.16, Thomas Leuxner wrote:

 On Thu, Apr 12, 2012 at 10:43:22AM +0300, Timo Sirainen wrote:
 What kind of a certificate do you have? You have an intermediary cert that 
 exists only in ssl_ca file? I couldn't reproduce this with a test. But 
 anyway, reverted for now: http://hg.dovecot.org/dovecot-2.1/rev/f80f18d0ffa3
 
 
 Thawte. They have only been doing intermediates for some time now.

But do you keep your intermediate cert in ssl_ca file or ssl_cert file?



Re: [Dovecot] SSL Certificate Anomalies with latest code changes

2012-04-12 Thread Thomas Leuxner
On Thu, Apr 12, 2012 at 11:17:50AM +0300, Timo Sirainen wrote:
 But do you keep your intermediate cert in ssl_ca file or ssl_cert file?

Separate. Root and intermediate are in ssl_ca:

$ cat /etc/ssl/certs/SSL123_CA_Bundle.pem
-----BEGIN CERTIFICATE-----
MIIEjzCCA3egAwIBAgIQdhASihe2grs6H50amjXAkjANBgkqhkiG9w0BAQUFADCB
qTELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjEoMCYGA1UECxMf
Q2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYGA1UECxMvKGMpIDIw
MDYgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxHzAdBgNV
BAMTFnRoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EwHhcNMTAwMjE4MDAwMDAwWhcNMjAw
MjE3MjM1OTU5WjBeMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMVGhhd3RlLCBJbmMu
MR0wGwYDVQQLExREb21haW4gVmFsaWRhdGVkIFNTTDEZMBcGA1UEAxMQVGhhd3Rl
IERWIFNTTCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMuYyTY/
0pzYFgfUSWP5g7DoAi3MXFp0l6YT7xMT3gV8p+bKACPaOfnvE89Sxa+a48q+84LZ
iz2q4cyuiFBmoy3sYRR1SasOJPGsRFsLKKIzIHYeBmBqZwVxi7pmYhZ6s20Nx9CU
QMaMPR6SDGI0DUSJ1feJ/intGI/2mysI92qr2EiXWvSf7Qx1UiL31V6EAJ/ASg0x
d0xk0BLmDzrwocDVXB3nXy3C99Y2GNmVbkROyVgUTbaOu83eYh76W7W9GCuYrKyT
P1Ba9RQLos+2855PWs1awzYj2hqvsE3WSiIDj0MCGb3qrN3EejUyFPFyLghVQAz0
B0FBrzg3hClCslUCAwEAAaOB/DCB+TAyBggrBgEFBQcBAQQmMCQwIgYIKwYBBQUH
MAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wEgYDVR0TAQH/BAgwBgEB/wIBADA0
BgNVHR8ELTArMCmgJ6AlhiNodHRwOi8vY3JsLnRoYXd0ZS5jb20vVGhhd3RlUENB
LmNybDAOBgNVHQ8BAf8EBAMCAQYwKQYDVR0RBCIwIKQeMBwxGjAYBgNVBAMTEVZl
cmlTaWduTVBLSS0yLTExMB0GA1UdDgQWBBSrRORd7IPH2cCFn/fhxpeQsIw/mDAf
BgNVHSMEGDAWgBR7W0XPr87Lev0xkhpqtvNG61dIUDANBgkqhkiG9w0BAQUFAAOC
AQEABLr7rLv8S1QRoy2Iszy9AG2KGraNxMGD+MdTKsEybjqBoVR92ho/OkVPNudC
sApChZegrPvlh6eDT+ixt5tYZW4mgAuSTUdVuWEWUWXpK/Fo2Vi4A4HRt2Yc07zF
pntfPsU4RnbndbSgDEvOosKpwcw2c3v7uSQkoF6n9vq7DChDnh3wTvA/2CSwIdxt
Le6/Wjv6iJx0bK8h3ZLswxXvlHUmRtamP79mSKod790n5rdRiTh9E4QMQPzQtfHg
2/lPL0ActI5HImG4TJbe8F8Rfk8R2exQRyIOxR3iZEnnaGNFOorZcfRe8W63FE0+
bxQe3FL+vN8MvSk/dvsRX2hoFQ==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIERTCCA66gAwIBAgIQM2VQCHmtc+IwueAdDX+skTANBgkqhkiG9w0BAQUFADCB
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl
cnZlckB0aGF3dGUuY29tMB4XDTA2MTExNzAwMDAwMFoXDTIwMTIzMDIzNTk1OVow
gakxCzAJBgNVBAYTAlVTMRUwEwYDVQQKEwx0aGF3dGUsIEluYy4xKDAmBgNVBAsT
H0NlcnRpZmljYXRpb24gU2VydmljZXMgRGl2aXNpb24xODA2BgNVBAsTLyhjKSAy
MDA2IHRoYXd0ZSwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MR8wHQYD
VQQDExZ0aGF3dGUgUHJpbWFyeSBSb290IENBMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEArKDw+4BZ1JzHpM+doVlzCRBFDA0sbmjxbFtIaElZN/wLMxnC
d3/MEC2VNBzm600JpxzSuMmXNgK3idQkXwbAzESUlI0CYm/rWt0RjSiaXISQEHoN
vXRmL2o4oOLVVETrHQefB7pv7un9Tgsp9T6EoAHxnKv4HH6JpOih2HFlDaNRe+68
0iJgDblbnd+6/FFbC6+Ysuku6QToYofeK8jXTsFMZB7dz4dYukpPymgHHRydSsbV
L5HMfHFyHMXAZ+sy/cmSXJTahcCbv1N9Kwn0jJ2RH5dqUsveCTakd9h7h1BE1T5u
KWn7OUkmHgmlgHtALevoJ4XJ/mH9fuZ8lx3VnQIDAQABo4HCMIG/MA8GA1UdEwEB
/wQFMAMBAf8wOwYDVR0gBDQwMjAwBgRVHSAAMCgwJgYIKwYBBQUHAgEWGmh0dHBz
Oi8vd3d3LnRoYXd0ZS5jb20vY3BzMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQU
e1tFz6/Oy3r9MZIaarbzRutXSFAwQAYDVR0fBDkwNzA1oDOgMYYvaHR0cDovL2Ny
bC50aGF3dGUuY29tL1RoYXd0ZVByZW1pdW1TZXJ2ZXJDQS5jcmwwDQYJKoZIhvcN
AQEFBQADgYEAhKhMyT4qvJrizI8LsiV3xGGJiWNa1KMVQNT7Xj+0Q+pjFytrmXSe
Cajd1FYVLnp5MV9jllMbNNkV6k9tcMq+9oKp7dqFd8x2HGqBCiHYQZl/Xi6Cweiq
95OBBaqStB+3msAHF/XLxrRMDtdW3HEgdDjWdMbWj2uvi42gbCkLYeA=
-----END CERTIFICATE-----

$ dovecot --version
2.1.4 (584bd77c38fd)

Seems to have fixed it. Thanks.


Re: [Dovecot] SSL Certificate Anomalies with latest code changes

2012-04-12 Thread Timo Sirainen
On 12.4.2012, at 11.33, Thomas Leuxner wrote:

 On Thu, Apr 12, 2012 at 11:17:50AM +0300, Timo Sirainen wrote:
 But do you keep your intermediate cert in ssl_ca file or ssl_cert file?
 
 Separate. Root and intermediate are in ssl_ca:

The documentation says to put the intermediate cert into ssl_cert though. I didn't 
even know it worked in ssl_ca. But I guess I won't intentionally break it..
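
For reference, the approach the documentation describes is to append the intermediate certificate to the file that ssl_cert points to, roughly like this (the file names below are only illustrative):

$ cat spectre_leuxner_net_2011.crt SSL123_intermediate.crt > spectre_leuxner_net_2011_chain.crt

ssl_cert = </etc/ssl/certs/spectre_leuxner_net_2011_chain.crt
ssl_key = </etc/ssl/private/spectre_leuxner_net_2011.key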



Re: [Dovecot] SSL Certificate Anomalies with latest code changes

2012-04-12 Thread Thomas Leuxner
On Thu, Apr 12, 2012 at 11:35:48AM +0300, Timo Sirainen wrote:
 On 12.4.2012, at 11.33, Thomas Leuxner wrote:
 
  On Thu, Apr 12, 2012 at 11:17:50AM +0300, Timo Sirainen wrote:
  But do you keep your intermediate cert in ssl_ca file or ssl_cert file?
  
  Separate. Root and intermediate are in ssl_ca:
 
 The documentation says to put the intermediate cert into ssl_cert though. I didn't 
 even know it worked in ssl_ca. But I guess I won't intentionally break it..

Hmmm. I did follow the Thawte instructions though:

https://search.thawte.com/support/ssl-digital-certificates/index?page=content&id=SO15464&actp=LIST&viewlocale=en_US
https://search.thawte.com/library/VERISIGN/ALL_OTHER/thawte%20ca/SSL123_CA_Bundle.pem

[...]

SSLCertificateFile /usr/local/ssl/crt/domainname.crt
SSLCertificateKeyFile /usr/local/ssl/private/server.key
SSLCACertificateFile /usr/local/ssl/crt/cabundle.crt



Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-12 Thread Stan Hoeppner
On 4/11/2012 9:23 PM, Emmanuel Noobadmin wrote:
 On 4/12/12, Stan Hoeppner <s...@hardwarefreak.com> wrote:
 On 4/11/2012 11:50 AM, Ed W wrote:
 One of the snags of md RAID1 vs RAID6 is the lack of checksumming in the
 event of bad blocks.  (I'm not sure what actually happens when md
 scrubbing finds a bad sector with raid1..?).  For low performance
 requirements I have become paranoid and been using RAID6 vs RAID10,
 filesystems with sector checksums seem attractive...

 Except we're using hardware RAID1 here and mdraid linear.  Thus the
 controller takes care of sector integrity.  RAID6 yields nothing over
 RAID10, except lower performance, and more usable space if more than 4
 drives are used.
 
 How would the controller ensure sector integrity unless it is writing
 additional checksum information to disk? I thought only a few
 filesystems like ZFS do sector checksums to detect if any data
 corruption occurred. I suppose the controller could throw an error if
 the two drives returned data that didn't agree with each other, but it
 wouldn't know which is the accurate copy, so that wouldn't protect the
 integrity of the data, at least not directly without additional human
 intervention, I would think.

When a drive starts throwing uncorrectable read errors, the controller
faults the drive and tells you to replace it.  Good hardware RAID
controllers are notorious for their penchant to kick drives that would
continue to work just fine in mdraid or as a single drive for many more
years.  The mindset here is that anyone would rather spend $150-$2500
on a replacement drive than take a chance with his/her valuable
data.

Yes, I typed $2500.  EMC charges over $2000 for a single Seagate disk
drive with an EMC label and serial# on it.  The serial number is what
prevents one from taking the same off-the-shelf Seagate drive at $300
and mounting it in a $250,000 EMC array chassis.  The controller
firmware reads the S/N from each connected drive and will not allow
foreign drives to be used.  HP, IBM, Oracle/Sun, etc. do this as well,
which is why they make lots of profit, and why I prefer open storage
systems.

-- 
Stan


Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-12 Thread Ed W

On 12/04/2012 11:20, Stan Hoeppner wrote:

On 4/11/2012 9:23 PM, Emmanuel Noobadmin wrote:

On 4/12/12, Stan Hoeppner <s...@hardwarefreak.com> wrote:

On 4/11/2012 11:50 AM, Ed W wrote:

One of the snags of md RAID1 vs RAID6 is the lack of checksumming in the
event of bad blocks.  (I'm not sure what actually happens when md
scrubbing finds a bad sector with raid1..?).  For low performance
requirements I have become paranoid and been using RAID6 vs RAID10,
filesystems with sector checksums seem attractive...

Except we're using hardware RAID1 here and mdraid linear.  Thus the
controller takes care of sector integrity.  RAID6 yields nothing over
RAID10, except lower performance, and more usable space if more than 4
drives are used.

How would the controller ensure sector integrity unless it is writing
additional checksum information to disk? I thought only a few
filesystems like ZFS do sector checksums to detect if any data
corruption occurred. I suppose the controller could throw an error if
the two drives returned data that didn't agree with each other, but it
wouldn't know which is the accurate copy, so that wouldn't protect the
integrity of the data, at least not directly without additional human
intervention, I would think.

When a drive starts throwing uncorrectable read errors, the controller
faults the drive and tells you to replace it.  Good hardware RAID
controllers are notorious for their penchant to kick drives that would
continue to work just fine in mdraid or as a single drive for many more
years.  The mindset here is that anyone would rather spend $150-$2500
on a replacement drive than take a chance with his/her valuable
data.



I'm asking a subtly different question.

The claim by ZFS/BTRFS authors and others is that data silently bit 
rots on its own. The claim is therefore that you can have a raid1 pair 
where neither drive reports a hardware failure, but each gives you 
different data?  I can't personally claim to have observed this, so it 
remains someone else's theory...  (for background, my experience is 
simply: RAID10 for high-performance arrays and RAID6 for all my personal 
data - I intend to investigate your linear raid idea in the future though)


I do agree that if one drive reports a read error, then it's quite easy 
to guess which drive of the pair is wrong...


Just as an aside, I don't have a lot of failure experience.  However, 
in the few failures I have had (perhaps 6-8 events now) there was a massive 
correlation in failure time with RAID1, eg one pair I had lasted perhaps 
2 years and then both drives failed within 6 hours of each other. I also had a 
bad experience with a RAID 5 that wasn't being scrubbed regularly: when 
one drive started reporting errors (ie lack of monitoring meant it had 
been bad for a while), the rest of the array turned out to be a 
patchwork of read errors - Linux raid then turns out to be quite fragile 
in the presence of a small number of read failures, and it's extremely 
difficult to salvage the 99% of the array which is OK because the disks 
get kicked out... (of course regular scrubs would have prevented 
getting so deep into that situation - it was a small cheap NAS box 
without such features)


Ed W



Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-12 Thread Timo Sirainen
On 12.4.2012, at 13.58, Ed W wrote:

 The claim by ZFS/BTRFS authors and others is that data silently bit rots on 
 its own. The claim is therefore that you can have a raid1 pair where neither 
 drive reports a hardware failure, but each gives you different data? 

That's one reason why I planned on adding a checksum to each message in dbox. 
But I forgot to actually do that. I guess I could add it for new messages in 
some upcoming version. Then Dovecot could optionally verify the checksum before 
returning the message to client, and if it detects corruption perhaps 
automatically read it from some alternative location (e.g. if dsync replication 
is enabled ask from another replica). And Dovecot index files really should 
have had some small (8/16/32bit) checksums of stuff as well..
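
As a rough sketch of the idea (not Dovecot code; the storage layout, file names and replica host below are hypothetical), a per-message digest written next to the message could be verified before serving it, with a replica used as the fallback on a mismatch:

# at write time: record a digest next to the message file
sha1sum storage/m.12345 > storage/m.12345.sha1

# at read time: verify before serving; on a mismatch, pull the copy from a replica
if ! sha1sum -c --quiet storage/m.12345.sha1; then
    echo "checksum mismatch for m.12345, fetching from replica" >&2
    scp replica.example.com:/var/mail/storage/m.12345 storage/m.12345
fi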



Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-12 Thread Ed W

On 12/04/2012 02:18, Stan Hoeppner wrote:

On 4/11/2012 11:50 AM, Ed W wrote:

Re XFS.  Have you been watching BTRFS recently?

I will concede that despite the authors considering it production ready
I won't be using it for my servers just yet.  However, it benchmarks
on single-disk tests fairly similarly to XFS and in certain cases
(multi-threaded performance) can be somewhat better.  I haven't yet seen
any benchmarks on larger disk arrays, eg 6+ disks, so I have no idea how it
scales up.  Basically what I have seen seems competitive

Links?


http://btrfs.ipv5.de/index.php?title=Main_Page#Benchmarking

See the regular Phoronix benchmarks in particular.  However, I believe 
these are all single disk?




I don't have such hardware spare to benchmark, but I would be interested
to hear from someone who benchmarks your RAID1+linear+XFS suggestion,
especially if they have compared a cutting edge btrfs kernel on the same
array?

http://btrfs.boxacle.net/repository/raid/history/History_Mail_server_simulation._num_threads=128.html

This is with an 8-wide LVM stripe over eight 17-drive hardware RAID0 arrays.
  If the disks had been set up as a concat of 68 RAID1 pairs, XFS would
have turned in numbers significantly higher, anywhere from a 100%
increase to 500%.


My instinct is that this is an irrelevant benchmark for BTRFS because 
its performance characteristics for these workloads have changed so 
significantly?  I would be far more interested in a 3.2 and then a 
3.6/3.7 benchmark in a year's time


In particular recent benchmarks on Phoronix show btrfs exceeding XFS 
performance on heavily threaded benchmarks - however, I doubt this is 
representative of performance on a multi-disk benchmark?



It would be nice to see these folks update these
results with a 3.2.6 kernel, as both BTRFS and XFS have improved
significantly since 2.6.35.  EXT4 and JFS have seen little performance
work since.


My understanding is that there was a significant multi-threaded 
performance boost for EXT4 in roughly the last year?  I don't 
have a link to hand, but someone did some work to reduce lock contention 
(??) which I seem to recall made a very large difference on multi-user 
or multi-CPU workloads?  I seem to recall that the summary was that it 
allowed Ext4 to scale up to a good fraction of XFS performance on 
medium-sized systems? (I believe that XFS still continues to scale far 
better than anything else on large systems)


The point is that I think it's a bit unfair to say that little has changed 
on Ext4? It still seems to be developing at faster than a maintenance-only pace


However, well OT...  The original question was: anyone tried very recent 
BTRFS on a multi-disk system.  Seems like the answer is no.  My proposal 
is that it may be worth watching in the future


Cheers

Ed W

P.S.  I have always been intrigued by the idea that a COW-based 
filesystem could potentially implement much faster RAID parity, 
because it can avoid reading the whole stripe. The idea is that you 
treat unallocated space as zero, which means you can compute the 
incremental parity with only a read/write of the parity value (and 
with a COW filesystem you only ever update by rewriting to new zeroed 
space). I had in mind something like a fixed parity disk (RAID4?) and 
allowing the parity disk to be write-behind cached in RAM (ie exposed 
to the risk of: power fails AND a data disk fails at the same time).  My code 
may not be following along for a while though...
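
For illustration, the usual read-modify-write parity update is P_new = P_old XOR D_old XOR D_new; if the allocator guarantees the target blocks were previously all zeroes (D_old = 0), that collapses to P_new = P_old XOR D_new, so only the parity block has to be read and rewritten instead of the whole stripe.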




[Dovecot] doveadm mailbox status destroys compressed messages without W= attribute

2012-04-12 Thread Artur Zaprzała
I still have some old messages from a previous mail server. These messages have no 
W= attribute either in the file name or in dovecot-uidlist and are compressed with 
gzip. Running doveadm mailbox status -A vsize \* will result in the following 
messages:


doveadm(foo@domain): Error: Cached message size larger than expected (2580 > 1451)
doveadm(foo@domain): Error: Maildir filename has wrong S value, renamed the file from /vmail/domain/foo/Maildir/.Sent/cur/1206550323.M125837P13306V0812I809E.oldname,S=2580:2,SZ to /vmail/domain/foo/Maildir/.Sent/cur/1206550323.M125837P13306V0812I809E.oldname,S=1451:2,SZ
doveadm(foo@domain): Error: Corrupted index cache file /vmail/domain/foo/Maildir/.Sent/dovecot.index.cache: Broken physical size for mail UID 2

doveadm(foo@domain): Error: Cached message size larger than expected (2580 > 1451)
doveadm(foo@domain): Error: Corrupted index cache file /vmail/domain/foo/Maildir/.Sent/dovecot.index.cache: Broken physical size for mail UID 2
doveadm(foo@domain): Error: read(/vmail/domain/foo/Maildir/.Sent/cur/1206550323.M125837P13306V0812I809E.oldname,S=2580:2,SZ) failed: Input/output error (uid=2)


(Size of uncompressed message is 2580 and compressed size is 1451)

I have enabled the zlib plugin for imap, pop3, lda and lmtp. But how do I enable it 
for doveadm?



--
Best regards,
Artur Zaprzała


Re: [Dovecot] doveadm mailbox status destroys compressed messages without W= attribute

2012-04-12 Thread Timo Sirainen
On 12.4.2012, at 14.47, Artur Zaprzała wrote:

 I have enabled the zlib plugin for imap, pop3, lda and lmtp. But how do I enable it 
 for doveadm?

Just set it globally:

mail_plugins = zlib
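
If mail_plugins is already set elsewhere in the configuration, the usual pattern (a sketch, adjust to your setup) is to append rather than overwrite, so any other globally loaded plugins stay enabled for doveadm as well:

mail_plugins = $mail_plugins zlib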



Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-12 Thread Ed W

On 12/04/2012 12:09, Timo Sirainen wrote:

On 12.4.2012, at 13.58, Ed W wrote:


The claim by ZFS/BTRFS authors and others is that data silently bit rots on 
its own. The claim is therefore that you can have a raid1 pair where neither drive 
reports a hardware failure, but each gives you different data?

That's one reason why I planned on adding a checksum to each message in dbox. 
But I forgot to actually do that. I guess I could add it for new messages in 
some upcoming version. Then Dovecot could optionally verify the checksum before 
returning the message to client, and if it detects corruption perhaps 
automatically read it from some alternative location (e.g. if dsync replication 
is enabled ask from another replica). And Dovecot index files really should 
have had some small (8/16/32bit) checksums of stuff as well..



I have to say - I haven't actually seen this happen... Do any of your 
big mailstore contacts observe this, eg rackspace, etc?


I think it's worth thinking about the failure cases before implementing 
something to be honest?  Just sticking in a checksum possibly doesn't 
help anyone unless it's on the right stuff and in the right place?


Off the top of my head:
- Someone butchers the file on disk (disk error or someone edits it with vi)
- Restore of some files goes subtly wrong, eg tool tries to be clever 
and fails, snapshot taken mid-write, etc?

- Filesystem crash (sudden power loss), how to deal with partial writes?


Things I might like to do *if* there were some suitable checksums 
available:
- Use the checksum as some kind of guid either for the whole message, 
the message minus the headers, or individual mime sections
- Use the checksums to assist with replication speed/efficiency (dsync 
or custom imap commands)
- File RFCs for new imap features along the LEMONADE lines which allow 
clients to have faster recovery from corrupted offline states...
- Single instance storage (presumably already done, and of course this 
has some subtleties in the face of deliberate attack)
- Possibly duplicate email suppression (but really this is an LDA 
problem...)
- Storage backends where emails are redundantly stored and might not ALL 
be on a single server (find me the closest copy of email X) - 
derivations of this might be interesting for compliance archiving of 
messages?
- Fancy key-value storage backends might use checksums as part of the 
key value (either for the whole or parts of the message)


The mail server has always looked like a kind of key-value store to my 
eye.  However, traditional key-value isn't usually optimised for 
streaming reads, hence dovecot seems like a key value store, 
optimised for sequential high speed streaming access to the key 
values...  Whilst it seems increasingly unlikely that a traditional 
key-value store will work well to replace say mdbox, I wonder if it's 
not worth looking at the replication strategies of key-value stores to 
see if those ideas couldn't lead to new features for mdbox?


Cheers

Ed W



[Dovecot] vacation plugins for squirrelmail

2012-04-12 Thread Daminto Lie
Hi,

I am afraid I have a question to ask of you all. I have just completed setting 
up a mail server running on Ubuntu Server 10.04. It has Postfix, Dovecot 
1.2.19, LDAP and SquirrelMail as the webmail. I have also created virtual user 
accounts on the system through LDAP. I can send and receive mail, which is 
great. Now, what I am trying to do is to set up a vacation auto-reply in 
SquirrelMail so that users who are about to go on vacation can set it up 
themselves. I was looking around for a vacation plugin for Dovecot that I 
can incorporate into SquirrelMail.

Any help would be very much appreciated.

Thank you


Re: [Dovecot] vacation plugins for squirrelmail

2012-04-12 Thread Artur Zaprzała

Daminto Lie wrote:

Hi,

I am afraid I have a question to ask of you all. I have just completed setting 
up a mail server running on Ubuntu Server 10.04. It has Postfix, Dovecot 
1.2.19, LDAP and SquirrelMail as the webmail. I have also created virtual user 
accounts on the system through LDAP. I can send and receive mail, which is 
great. Now, what I am trying to do is to set up a vacation auto-reply in 
SquirrelMail so that users who are about to go on vacation can set it up 
themselves. I was looking around for a vacation plugin for Dovecot that I 
can incorporate into SquirrelMail.

Any help would be very much appreciated.

Thank you

I'm using Avelsieve 1.9.9 with a set of my own bugfixes: 
http://email.uoa.gr/avelsieve/


Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-12 Thread Dirk Jahnke-Zumbusch

Hi there,

 I have to say - I haven't actually seen this happen... Do any of your
 big mailstore contacts observe this, eg rackspace, etc?

Just to throw into the discussion that with (silent) data corruption
not only the disk is involved but also many other parts of your system.
So perhaps you would like to have a look at:

https://indico.desy.de/getFile.py/access?contribId=65&sessionId=42&resId=0&materialId=slides&confId=257

http://indico.cern.ch/getFile.py/access?contribId=3&sessionId=0&resId=1&materialId=paper&confId=13797

The documents are from 2007 but the principles are still the same.

Kind regards
Dirk



Re: [Dovecot] Problems with Apple Mail and attachments

2012-04-12 Thread Helga Mayer

Quoting Thierry de Montaudry <thie...@odry.net>:


I've seen a similar problem a while ago (1 year maybe more), but  
used the mailbox Rebuild option on the client, which fixed it  
without having to delete and recreate the account. Your problem  
might just be a local index corruption, which can happen when  
losing your Internet connection.


Thank you, we will try it.

Regards
Helga



Helga Mayer
Universität Hohenheim
Kommunikations-, Informations- und Medienzentrum (630)
IT-Dienste | Mail

Schloss-Westhof-Süd | 70599 Stuttgart
Tel.:  +49 711 459-22838 | Fax: +49 711 459-23449
https://kim.uni-hohenheim.de




[Dovecot] Problems with master user

2012-04-12 Thread Andrea Mistrali
Hi to all!
I’m trying to set up master users, but I have some problems. Namely, I can 
authenticate, but after that I cannot access the INBOX or other mailboxes of the user.

My configuration is:

passdb {
  driver = ldap
  args = /etc/dovecot/ldap-passdb.conf
}

passdb {
  driver = sql
  args = /etc/dovecot/sql.conf
}

passdb {
driver = passwd-file
args = /etc/dovecot/passwd.masterusers
master = yes
pass = yes
}

userdb {
  driver = sql
  args = /etc/dovecot/sql.conf
}

(I look up auth in the LDAP server first; if it fails I look it up in the DB, and 
otherwise I check for a master user)

and relevant files are

/etc/dovecot/sql.conf
——
password_query = SELECT fullusername as user, \
 password, \
 uid AS userdb_uid, \
 gid AS userdb_gid, \ 
 home AS userdb_home, \
 mail AS userdb_mail, \
 groups as userdb_acl_groups, \
 quota_rule as userdb_quota_rule \
 FROM pd_users_full WHERE \
 username = '%n' AND \
 domain = '%d' AND \
 external_auth IS FALSE AND \
 master_user IS FALSE AND \
 %Ls_ok IS TRUE

user_query = SELECT fullusername as user, \
 uid, \ 
 gid, \
 home, \
 mail, \
 groups as acl_groups, \
 quota_rule \
 FROM pd_users_full WHERE \
 username = '%n' AND \
 domain = '%d' AND \
 master_user IS FALSE

iterate_query = SELECT fullusername as username, fullusername as user \
FROM pd_users_full where master_user IS FALSE ORDER BY 
domain,username

/etc/dovecot/ldap-passdb.conf
——
uris = ldap://dioniso.cube.lan
base = cn=users,dc=cube,dc=lan
auth_bind = yes
auth_bind_userdn = uid=%n,cn=users,dc=cube,dc=lan

pass_attrs = uid=username, \
 userPassword=password, \
 # uidNumber=userdb_uid, \
 # =userdb_home=/var/mail/cubeholding.com/%Lu, \
 # =userdb_domain=cubeholding.com, \
 # =userdb_mail=maildir:~/maildir/:INBOX=~/maildir/INBOX:LAYOUT=fs:INDEX=~/indexes/

pass_filter = (&(objectClass=posixAccount)(uid=%n)(mail=*@%d))

# Attributes and filter to get a list of all users
# iterate_attrs = uid=username
iterate_attrs = uid=user
iterate_filter = (&(objectClass=posixAccount)(mail=*@%d))


If I test with doveadm auth and doveadm user I receive this:

# doveadm auth -x service=imap an...@am.cx\*mas...@am.cx XX
passdb: an...@am.cx*mas...@am.cx auth succeeded
extra fields:
  user=an...@am.cx

# doveadm user an...@am.cx   
userdb: an...@am.cx
  uid   : 10010
  gid   : 8
  home  : /var/mail/am.cx/andre
  mail  : maildir:~/maildir:INBOX=~/maildir/INBOX:LAYOUT=fs:INDEX=~/indexes/
  acl_groups: 
  quota_rule: *:storage=10G

and in log files I see:

20120412 17:31:26 auth: Info: passdb(mas...@am.cx,master): Master user logging 
in as an...@am.cx
20120412 17:31:26 auth: Info: ldap(an...@am.cx): invalid credentials (given 
password: XX)

but if I try the real thing:

# telnet localhost 143
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE ACL 
QUOTA NAMESPACE STARTTLS AUTH=PLAIN AUTH=LOGIN AUTH=GSSAPI] Dovecot ready.
0 login an...@am.cx*mas...@am.cx XX
0 OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT 
SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN 
NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT 
SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS SPECIAL-USE ACL QUOTA NAMESPACE 
COMPRESS=DEFLATE QUOTA ACL RIGHTS=texk] Logged in
0 select INBOX
0 NO [SERVERBUG] Internal error occurred. Refer to server log for more 
information. [2012-04-12 17:33:15]

and in log file I have:

20120412 17:34:25 auth: Info: passdb(mas...@am.cx,127.0.0.1,master): Master 
user logging in as an...@am.cx
20120412 17:34:25 auth: Info: ldap(an...@am.cx,127.0.0.1): invalid credentials 
(given password: silmaril)
20120412 17:34:25 imap-login: Info: Login: pid=1673, an...@am.cx, 
127.0.0.1/127.0.0.1, PLAIN, secured
20120412 17:34:47 imap(an...@am.cx): Error: Opening INBOX failed: Mailbox 
doesn't exist: INBOX

Can someone tell me what is wrong in my setup?

TIA
A.

Re: [Dovecot] Problems with master user

2012-04-12 Thread Andrea Mistrali

On 12 Apr 2012, at 17:35, Andrea Mistrali wrote:

 Hi to all!
 I’m trying to set up master users, but I have some problems. Namely, I can 
 authenticate, but after that I cannot access the INBOX or other mailboxes of the 
 user.
 
snip
 Can someone tell me what is wrong in my setup?


Solved! It was an ACL problem, as explained at 
http://master.wiki2.dovecot.org/Authentication/MasterUsers#ACLs
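
For anyone hitting the same thing: with the ACL plugin enabled the master user does not inherit the owner's rights, so it has to be granted access explicitly. As a sketch (the user name and rights string are just an example), a global ACL entry along these lines gives a master user full rights:

user=masteruser@am.cx lrwstipekxa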

Sorry
A.

[Dovecot] POP3 Dovecot Auth CPU usage 75%+

2012-04-12 Thread Root Kev
Hello all,

I hope someone can help me. I have been testing out Dovecot to switch from
popa3d, which I use at the moment.  When I get several users connecting and
disconnecting multiple times, the Dovecot auth process uses
50-90% of the CPU for the period during which they are connecting.  I am wondering
if there is something that I may have misconfigured, or if there is
something that I can change so that this spike doesn't occur.

If anyone could shed some light on the issue, I would appreciate it,

Kevin

/var/mail# dovecot -n
# 2.1.4: /usr/local/etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-33-generic-pae i686 Ubuntu 10.04.4 LTS ext4
auth_cache_size = 10 M
auth_verbose = yes
disable_plaintext_auth = no
instance_name = Mail Popper 1
listen = 172.20.20.222
login_greeting = Mail Popper 1 Ready
mail_location = mbox:/var/empty:INBOX=/var/mail/%u:INDEX=MEMORY
mail_privileged_group = mail
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox Sent Messages {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
}
passdb {
  driver = shadow
}
protocols = pop3
service pop3-login {
  service_count = 0
}
ssl = no
userdb {
  driver = passwd
}
protocol pop3 {
  pop3_uidl_format = %08Xu%08Xv
}


[Dovecot] [OT] Outlook identities

2012-04-12 Thread Michael Orlitzky
Nothing to do with Dovecot, but I figured this is the best place to ask.

Do any of the newer versions of Outlook have proper identities support
like Thunderbird, mutt, Roundcube, i.e. every other mail client on Earth?

We have customers who set up ten different mailboxes for one person
because otherwise Outlook won't Do the Right Thing. Is there some way to
make it behave like the others?

 * When sending new mail, you can choose which address to use.
 * When replying to mail, it sends from the address that the message
   was sent to by default.
 * All mail winds up in one inbox.

Outlook (2003, 2007) does do this if you set up different mail accounts,
but we shouldn't have to do that.


Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-12 Thread Stan Hoeppner
On 4/12/2012 5:58 AM, Ed W wrote:

 The claim by ZFS/BTRFS authors and others is that data silently bit
 rots on it's own. The claim is therefore that you can have a raid1 pair
 where neither drive reports a hardware failure, but each gives you
 different data?

You need to read those articles again very carefully.  If you don't
understand what they mean by 1 in 10^15 bits non-recoverable read error
rate and combined probability, let me know.
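
As a back-of-the-envelope illustration: a 2 TB drive holds roughly 1.6 x 10^13 bits, so at a quoted rate of one unrecoverable read error per 10^15 bits a full read of the drive carries on the order of a 1.6% chance of hitting a URE, and reading a dozen such drives back to back pushes the combined probability toward 20%.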

And this has zero bearing on RAID1.  And RAID1 reads don't work the way
you describe above.  I explained this in some detail recently.

 I do agree that if one drive reports a read error, then it's quite easy
 to guess which drive of the pair is wrong...

Been working that way for more than two decades, Ed. :)  Note that RAID1
has that 1 for a reason.  It was the first RAID level.  It was in
production for many, many years before parity RAID hit the market.  It is
the best understood of all RAID levels, and the simplest.

-- 
Stan