On Jun 8, 2010, at 9:10 AM, Frank Bonnet wrote:

> On 06/08/2010 02:41 PM, Timo Sirainen wrote:
>> On ti, 2010-06-08 at 14:20 +0200, Philippe Chevalier wrote:
>>
>>> dovecot: IMAP(<user>@domain.org): Corrupted transaction log file
>>> /home/<user>/Mail/Maildir/INBOX/dovecot.index.log seq 13: record size too
>>> small (type=0x0, offset=5560, size=0) (sync_offset=5652)
>> ..
>>> After some digging, I "solved" this problem with mmap_disable = yes in
>>> dovecot.conf. Index corruption doesn't seem to occur anymore.
>>>
>>> Is this normal? I thought this problem occurred only on NFS filesystems
>>> and possibly on old versions of ZFS. Hasn't this been fixed?
>>
>> Apparently it doesn't work perfectly..
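For readers hitting the same corruption, the workaround Philippe describes above is a one-line change in dovecot.conf. This is only a sketch of the setting as discussed in this thread; check your own Dovecot version's documentation before relying on it:

```
# dovecot.conf
# Don't mmap() index files; use read()/write() instead.
# Works around index corruption seen with mmap on some
# filesystems (NFS, and apparently ZFS here), at some
# performance cost.
mmap_disable = yes
```

After changing it, restart Dovecot so running imap processes pick up the new setting.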
I quit using mmap_disable around 7.1-STABLE and haven't had that bug since
then. I'm running 8.0-R with Maildirs in a compressed ZFS dataset right now
with no problems. That's pretty odd... I'm pretty sure it was in the
implementation and had nothing to do with the ZFS version, but I assume your
datasets and pools are all updated to the latest version?

>>> Is there an option in ZFS that would allow mmap calls without
>>> corruption? Has it something to do with compression?
>>
>> I've no idea about ZFS. You should check it out, it's rad!
>>
>>> Other problem, that I have been unable to solve so far, is that a lot of
>>> entries show up in my logs about:
>>>
>>> dovecot: imap-login: net_disconnect() failed: Connection reset by peer
>>
>> This means close() failed with:
>>
>>   [ECONNRESET] The underlying object was a stream socket that was
>>                shut down by the peer before all pending data was
>>                delivered.
>>
>> This is the first time I've heard of this happening.. I see this shows
>> up the first time in FreeBSD 6.3 man pages. Hmm. I don't like it. I
>> guess I could work around it, but I think I'll first go complain about
>> it to FreeBSD people.
>
> I get the same error messages on FreeBSD 7.2 (many of them):
>
> Jun 08 15:01:24 IMAP(xxxxxxxx): Error: close(client out) failed:
> Connection reset by peer

I've seen this a FEW times, like 3 in the last six months. Seems to have
gone away after updating to 1.2, though maybe I just haven't triggered it
again.
