Re: dovecot is moving messages to spam

2016-09-28 Thread Webert de Souza Lima
Hello,

It worked just fine. Thank you for your help.

After changing this user's password and kicking his sessions, I resent the
e-mail and it was not moved.
He surely has some MUA configured somewhere, but he has no clue where.
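For the archives, the sequence that worked was roughly the following (user
name illustrative; doveadm who and doveadm kick are standard Dovecot
commands):

$ doveadm who my.user@my.domain
$ doveadm kick my.user@my.domain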

Thank you for your time.

On Wed, Sep 28, 2016 at 11:13 AM, Aki Tuomi <aki.tu...@dovecot.fi> wrote:

> Hi,
>
> you could try doveadm who and doveadm kick. These might help you out.
>
> Aki
> > On September 28, 2016 at 4:39 PM Webert de Souza Lima <
> webert.b...@gmail.com> wrote:
> >
> >
> > Hi Konstantin,
> >
> > There is no sieve for the user (checked his dovecot mail directory, sieve
> > folder is empty and there is no sieve file) and the only global sieve
> > present is regarding the X-Spam-Flag header, which is not the case.
> > There is no login happening for this user, and yet this occurs, for sure.
> >
> > The only thing I can imagine is that some e-mail client, as you said,
> > is holding an old connection open, previously authenticated (before I
> > disabled his login), and moving the messages with some filter.
> > Looking at the logs, it surely looks like e-mail client software.
> >
> > I'll take a deeper look into this.
> >
> > On Wed, Sep 28, 2016 at 10:30 AM, Konstantin Khomoutov <
> > flatw...@users.sourceforge.net> wrote:
> >
> > > On Wed, 28 Sep 2016 10:15:25 -0300
> > > Webert de Souza Lima <webert.b...@gmail.com> wrote:
> > >
> > > > is there any dovecot rule setting besides the X-Spam-Flag header? Can
> > > > dovecot move messages via IMAP?
> > > >
> > > > I have a message that is being moved to the spam folder after being
> > > > delivered to the INBOX, but it has no X-Spam-Flag and it's not being
> > > > done by the user (I changed his password, suspended his account and
> > > > made his login impossible).
> > > >
> > > > This happens only when a certain "FROM" address is present in the
> > > > body, like in the following message (sent via telnet):
> > > [...]
> > > > Sep 28 13:08:00 lmtp(my.user@my.domain): Info:
> OOKlA3rA61dNbwAAkzG9Ng:
> > > > sieve: msgid=unspecified: stored mail into mailbox 'INBOX'
> > > >
> > > > Sep 28 13:08:01 imap(my.user@my.domain): Info: copy from INBOX:
> > > > box=INBOX.Spam, uid=154, msgid=, size=340, subject=Test
> > > >
> > > > Sep 28 13:08:01 imap(my.user@my.domain): Info: expunge: box=INBOX,
> > > > uid=18147, msgid=, size=340, subject=Test
> > > [...]
> > >
> > > Are you sure there's no Sieve script active for this user?
> > > (Note that there also could be a global Sieve script or scripts which
> > > are executed before/after those of a user.)
> > >
> > > And have you really verified nothing logs into the server for sure
> > > using that user's credentials (such as a Thunderbird instance with mail
> > > filters enabled)?  Another thing to check is that this user's INBOX
> > > folder is not shared with someone else (if at all possible).
> > >
>


Re: doveadm backup fails (compromised single attachment storage)

2016-09-30 Thread Webert de Souza Lima
by SAS I meant SIAS (Single Instance Attachment Storage).

On Thu, Sep 29, 2016 at 9:33 AM Webert de Souza Lima <webert.b...@gmail.com>
wrote:

> Hi,
>
> A couple of months ago I had a problem with Single Attachment Storage
> after an infrastructure migration.
>
> All mailboxes were rsynced to another filesystem, and that may have broken
> Single Attachment Storage. Many, many (if not all) mailboxes show the
> following errors in the Dovecot logs:
>
> imap(f...@bar.com): Error:
> read(attachments-connector(zlib(/dovecotdir/mail/
> bar.com/foo/mailboxes/INBOX/dbox-Mails/u.26426))) failed:
> read(/dovecotdir/attach/
> bar.com/de/86/de8673894d6fb3f4460e3c26436eefa9a73517fa0f000452f553822367220761502e1d0ce220eee5aa9acf232df0adebf40cce90b57d2e60e1eb9c9ef21671fa-b0d3411772c1495753619331bd36-43cea6154b3275573b089331bd36-26426[base64:19
> b/l]) failed: open(/dovecotdir/attach/
> bar.com/de/86/de8673894d6fb3f4460e3c26436eefa9a73517fa0f000452f553822367220761502e1d0ce220eee5aa9acf232df0adebf40cce90b57d2e60e1eb9c9ef21671fa-b0d3411772c1495753619331bd36-43cea6154b3275573b089331bd36-26426)
> failed: No such file or directory
>
>
> When that happens, the MUA keeps syncing forever.
>
> Now, I need to migrate all mailboxes (again) to another dovecot instance
> (with no SAS), which works perfectly for new users, but when I try to
> migrate users from my current dovecot server to this new server, I get
> such errors again, and I can't migrate:
>
> 2016-09-29T12:20:50.995934059Z Sep 29 12:20:50 dsync-server(f...@bar.com):
> Error: dsync(cf7d091311eb):
> read(attachments-connector(zlib(/dovecotdir/mdbox/bar.com/foo/storage/m.1)))
> failed: read(/dovecotdir/attach/
> bar.com/0c/df/0cdf86b1920938fe3a043f87e2ee9e63dda276bd5b9fba687e4a0c63d181c3b6ebdb96a9517f048c963db71404ad5d14e896e2e67b7abb0c9e107aed5c15ecf1-430ea904dff46757ba179331bd36[base64:18
> b/l]) failed: open(/dovecotdir/attach/
> bar.com/0c/df/0cdf86b1920938fe3a043f87e2ee9e63dda276bd5b9fba687e4a0c63d181c3b6ebdb96a9517f048c963db71404ad5d14e896e2e67b7abb0c9e107aed5c15ecf1-430ea904dff46757ba179331bd36)
> failed: No such file or directory (last sent=mail, last recv=mail_request
> (EOL))
>
> Is there a way to fix the attachments problem? (I know I can't recover
> such files, that's Ok)
> Is there a way to migrate (dsync backup) ignoring such problems?
>
> Thanks in advance.
>


fix SIS attachment errors

2016-10-05 Thread Webert de Souza Lima
Hi, I've sent some e-mails about this before, but since there were no
answers I'll write it differently, with different information.

I'm using SIS (Single Instance Attachment Storage).
For reasons that are not relevant now, many attachments are missing and
the messages can't be fetched:

Error:
read(attachments-connector(zlib(/dovecot/mdbox/bar.example/foo/storage/m.1)))
failed:
read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36[base64:19
b/l]) failed:
open(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36)
failed: No such file or directory

In this specific case, the /dovecot/attach/bar.example/23/ae/ directory
doesn't exist.
In other cases, just one file is missing, so I would assume the hardlink
could be recreated and it would work.

If I create the missing file (with touch or whatever), I get the following
errors:
Error:
read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36[base64:19
b/l]) failed: Stream is smaller than expected (0 < 483065)
Error:
read(attachments-connector(zlib(/dovecot/mdbox/bar.example/foo/storage/m.1)))
failed:
read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36[base64:19
b/l]) failed: Stream is smaller than expected (0 < 483065)
Error: fetch(body) failed for box=INBOX uid=15: BUG: Unknown internal error

If I try to fill the file with the number of bytes it complains about,
using the following command:

$ dd if=/dev/zero
of=/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36
bs=1 count=483065

then I get the following error:

Error:
read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36[base64:19
b/l]) failed: Stream is larger than expected (483928 > 483065, eof=0)
Error:
read(attachments-connector(zlib(/srv/dovecot/mdbox/bar.example/foo/storage/m.1)))
failed:
read(//dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36[base64:19
b/l]) failed: Stream is larger than expected (483928 > 483065, eof=0)
Error: fetch(body) failed for box=INBOX uid=15: BUG: Unknown internal error

Based on this I have a few questions:
1. Is there a way, or a tool, to scan all mailboxes and find all the
messages that have compromised attachments?

2. Is there a way to "fix" the missing files (even by creating fake files
or removing the attachment information from the messages)?

3. What I need is to migrate these boxes using doveadm backup/sync, which
fails when these errors occur. Is it possible to ignore them, or is there
another tool that would do it?

Thank you.

Webert Lima
Belo Horizonte, Brasil


Director keeping IMAP connections alive

2016-09-21 Thread Webert de Souza Lima
Hello,

I have a setup with 2 directors and 2 dovecot backends in a cluster.

From time to time I notice high RAM usage by the dovecot processes, and when
analyzing with doveadm who,
I see many users with dozens, even hundreds of PIDs.

Inspecting those PIDs, I see each one of them is an IMAP connection, coming
from one of the director processes, in ESTABLISHED state.

A deeper analysis shows that there are lots of connections from the same
users to BOTH dovecot instances, but as I am using director, this shouldn't
happen, right? The thing is, one of the dovecot instances has only old
connections (around 3 days old) while the other has some old and some
newer connections.

So, director is redirecting recent connections to the right dovecot, as
expected, but it is keeping many old and unused connections open,
consuming resources.
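For reference, the inspection was done along these lines (illustrative;
doveadm who is a standard command, and the per-PID state check can be done
with netstat or ss):

$ doveadm who
$ netstat -tanp | grep <pid>   # check the state of each PID's connection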

output of doveconf -n from dovecot:
http://pastebin.com/trMEjeAs

output of doveconf -n from director:
http://pastebin.com/EUpHYMKY

Thanks.


Re: Director keeping IMAP connections alive

2016-09-23 Thread Webert de Souza Lima
Such connections do not exist on the front-end that connects to director
hosts, so it's something between director and dovecot only.

On Wed, Sep 21, 2016 at 1:16 PM, Webert de Souza Lima <webert.b...@gmail.com
> wrote:

> Hello,
>
> I have a setup with 2 directors and 2 dovecot backends in a cluster.
>
> From time to time I notice high RAM usage by the dovecot processes, and
> when analyzing with doveadm who,
> I see many users with dozens, even hundreds of PIDs.
>
> Inspecting those PIDs, I see each one of them is an IMAP connection,
> coming from one of the director processes, in ESTABLISHED state.
>
> A deeper analysis shows that there are lots of connections from the same
> users to BOTH dovecot instances, but as I am using director, this
> shouldn't happen, right? The thing is, one of the dovecot instances has
> only old connections (around 3 days old) while the other has some old and
> some newer connections.
>
> So, director is redirecting recent connections to the right dovecot, as
> expected, but it is keeping many old and unused connections open,
> consuming resources.
>
> output of doveconf -n from dovecot:
> http://pastebin.com/trMEjeAs
>
> output of doveconf -n from director:
> http://pastebin.com/EUpHYMKY
>
> Thanks.
>


dovecot is moving messages to spam

2016-09-28 Thread Webert de Souza Lima
Hi,

is there any dovecot rule setting besides the X-Spam-Flag header? Can
dovecot move messages via IMAP?

I have a message that is being moved to the spam folder after being
delivered to the INBOX, but it has no X-Spam-Flag and it's not being done by
the user (I changed his password, suspended his account and made his login
impossible).

This happens only when a certain "FROM" address is present in the body, like
in the following message (sent via telnet):

From:
Subject: Test

teste
.


Dovecot logs:

Sep 28 13:08:00 lmtp(my.user@my.domain): Info: OOKlA3rA61dNbwAAkzG9Ng:
sieve: msgid=unspecified: stored mail into mailbox 'INBOX'

Sep 28 13:08:01 imap(my.user@my.domain): Info: copy from INBOX:
box=INBOX.Spam, uid=154, msgid=, size=340, subject=Test

Sep 28 13:08:01 imap(my.user@my.domain): Info: expunge: box=INBOX,
uid=18147, msgid=, size=340, subject=Test

Thanks in advance


Re: dovecot is moving messages to spam

2016-09-28 Thread Webert de Souza Lima
Hi Konstantin,

There is no sieve for the user (I checked his dovecot mail directory: the
sieve folder is empty and there is no sieve file), and the only global sieve
present concerns the X-Spam-Flag header, which is not the case here.
There is no login happening for this user, and yet this occurs, for sure.
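A quick way to double-check the sieve side, assuming Pigeonhole's doveadm
plugin is available (the command exists in recent 2.2 releases; user name
illustrative):

$ doveadm sieve list -u my.user@my.domain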

The only thing I can imagine is that some e-mail client, as you said, is
holding an old connection open, previously authenticated (before I disabled
his login), and moving the messages with some filter.
Looking at the logs, it surely looks like e-mail client software.

I'll take a deeper look into this.

On Wed, Sep 28, 2016 at 10:30 AM, Konstantin Khomoutov <
flatw...@users.sourceforge.net> wrote:

> On Wed, 28 Sep 2016 10:15:25 -0300
> Webert de Souza Lima <webert.b...@gmail.com> wrote:
>
> > is there any dovecot rule setting besides the X-Spam-Flag header? Can
> > dovecot move messages via IMAP?
> >
> > I have a message that is being moved to the spam folder after being
> > delivered to the INBOX, but it has no X-Spam-Flag and it's not being
> > done by the user (I changed his password, suspended his account and made
> > his login impossible).
> >
> > This happens only when a certain "FROM" address is present in the body,
> > like in the following message (sent via telnet):
> [...]
> > Sep 28 13:08:00 lmtp(my.user@my.domain): Info: OOKlA3rA61dNbwAAkzG9Ng:
> > sieve: msgid=unspecified: stored mail into mailbox 'INBOX'
> >
> > Sep 28 13:08:01 imap(my.user@my.domain): Info: copy from INBOX:
> > box=INBOX.Spam, uid=154, msgid=, size=340, subject=Test
> >
> > Sep 28 13:08:01 imap(my.user@my.domain): Info: expunge: box=INBOX,
> > uid=18147, msgid=, size=340, subject=Test
> [...]
>
> Are you sure there's no Sieve script active for this user?
> (Note that there also could be a global Sieve script or scripts which
> are executed before/after those of a user.)
>
> And have you really verified nothing logs into the server for sure
> using that user's credentials (such as a Thunderbird instance with mail
> filters enabled)?  Another thing to check is that this user's INBOX
> folder is not shared with someone else (if at all possible).
>


Re: doveadm backup fails (compromised single attachment storage)

2016-10-03 Thread Webert de Souza Lima
Since no one seems to know whether mailboxes can be "fixed", is it possible
to run dsync backup ignoring errors? There is no documentation on this.
When the described errors occur, the sync is interrupted.

On Fri, Sep 30, 2016 at 10:18 AM Webert de Souza Lima <webert.b...@gmail.com>
wrote:

> by SAS I meant SIAS (Single Instance Attachment Storage).
>
> On Thu, Sep 29, 2016 at 9:33 AM Webert de Souza Lima <
> webert.b...@gmail.com> wrote:
>
>> Hi,
>>
>> A couple of months ago I had a problem with Single Attachment Storage
>> after an infrastructure migration.
>>
>> All mailboxes were rsynced to another filesystem, and that may have
>> broken Single Attachment Storage. Many, many (if not all) mailboxes show
>> the following errors in the Dovecot logs:
>>
>> imap(f...@bar.com): Error:
>> read(attachments-connector(zlib(/dovecotdir/mail/
>> bar.com/foo/mailboxes/INBOX/dbox-Mails/u.26426))) failed:
>> read(/dovecotdir/attach/
>> bar.com/de/86/de8673894d6fb3f4460e3c26436eefa9a73517fa0f000452f553822367220761502e1d0ce220eee5aa9acf232df0adebf40cce90b57d2e60e1eb9c9ef21671fa-b0d3411772c1495753619331bd36-43cea6154b3275573b089331bd36-26426[base64:19
>> b/l]) failed: open(/dovecotdir/attach/
>> bar.com/de/86/de8673894d6fb3f4460e3c26436eefa9a73517fa0f000452f553822367220761502e1d0ce220eee5aa9acf232df0adebf40cce90b57d2e60e1eb9c9ef21671fa-b0d3411772c1495753619331bd36-43cea6154b3275573b089331bd36-26426)
>> failed: No such file or directory
>>
>>
>> When that happens, the MUA keeps syncing forever.
>>
>> Now, I need to migrate all mailboxes (again) to another dovecot instance
>> (with no SAS), which works perfectly for new users, but when I try to
>> migrate users from my current dovecot server to this new server, I get
>> such errors again, and I can't migrate:
>>
>> 2016-09-29T12:20:50.995934059Z Sep 29 12:20:50 dsync-server(f...@bar.com):
>> Error: dsync(cf7d091311eb):
>> read(attachments-connector(zlib(/dovecotdir/mdbox/
>> bar.com/foo/storage/m.1))) failed: read(/dovecotdir/attach/
>> bar.com/0c/df/0cdf86b1920938fe3a043f87e2ee9e63dda276bd5b9fba687e4a0c63d181c3b6ebdb96a9517f048c963db71404ad5d14e896e2e67b7abb0c9e107aed5c15ecf1-430ea904dff46757ba179331bd36[base64:18
>> b/l]) failed: open(/dovecotdir/attach/
>> bar.com/0c/df/0cdf86b1920938fe3a043f87e2ee9e63dda276bd5b9fba687e4a0c63d181c3b6ebdb96a9517f048c963db71404ad5d14e896e2e67b7abb0c9e107aed5c15ecf1-430ea904dff46757ba179331bd36)
>> failed: No such file or directory (last sent=mail, last recv=mail_request
>> (EOL))
>>
>> Is there a way to fix the attachments problem? (I know I can't recover
>> such files, that's Ok)
>> Is there a way to migrate (dsync backup) ignoring such problems?
>>
>> Thanks in advance.
>>
>


doveadm backup fails (compromised single attachment storage)

2016-09-29 Thread Webert de Souza Lima
Hi,

A couple of months ago I had a problem with Single Attachment Storage after
an infrastructure migration.

All mailboxes were rsynced to another filesystem, and that may have broken
Single Attachment Storage. Many, many (if not all) mailboxes show the
following errors in the Dovecot logs:

imap(f...@bar.com): Error: read(attachments-connector(zlib(/dovecotdir/mail/
bar.com/foo/mailboxes/INBOX/dbox-Mails/u.26426))) failed:
read(/dovecotdir/attach/
bar.com/de/86/de8673894d6fb3f4460e3c26436eefa9a73517fa0f000452f553822367220761502e1d0ce220eee5aa9acf232df0adebf40cce90b57d2e60e1eb9c9ef21671fa-b0d3411772c1495753619331bd36-43cea6154b3275573b089331bd36-26426[base64:19
b/l]) failed: open(/dovecotdir/attach/
bar.com/de/86/de8673894d6fb3f4460e3c26436eefa9a73517fa0f000452f553822367220761502e1d0ce220eee5aa9acf232df0adebf40cce90b57d2e60e1eb9c9ef21671fa-b0d3411772c1495753619331bd36-43cea6154b3275573b089331bd36-26426)
failed: No such file or directory


When that happens, the MUA keeps syncing forever.

Now, I need to migrate all mailboxes (again) to another dovecot instance
(with no SAS), which works perfectly for new users, but when I try to
migrate users from my current dovecot server to this new server, I get
such errors again, and I can't migrate:

2016-09-29T12:20:50.995934059Z Sep 29 12:20:50 dsync-server(f...@bar.com):
Error: dsync(cf7d091311eb):
read(attachments-connector(zlib(/dovecotdir/mdbox/bar.com/foo/storage/m.1)))
failed: read(/dovecotdir/attach/
bar.com/0c/df/0cdf86b1920938fe3a043f87e2ee9e63dda276bd5b9fba687e4a0c63d181c3b6ebdb96a9517f048c963db71404ad5d14e896e2e67b7abb0c9e107aed5c15ecf1-430ea904dff46757ba179331bd36[base64:18
b/l]) failed: open(/dovecotdir/attach/
bar.com/0c/df/0cdf86b1920938fe3a043f87e2ee9e63dda276bd5b9fba687e4a0c63d181c3b6ebdb96a9517f048c963db71404ad5d14e896e2e67b7abb0c9e107aed5c15ecf1-430ea904dff46757ba179331bd36)
failed: No such file or directory (last sent=mail, last recv=mail_request
(EOL))

Is there a way to fix the attachments problem? (I know I can't recover such
files, that's Ok)
Is there a way to migrate (dsync backup) ignoring such problems?

Thanks in advance.


Re: Dovecot does not close connections

2016-10-14 Thread Webert de Souza Lima
This happens to me too. In my case, the connections are in ESTABLISHED state.

On Fri, Oct 14, 2016 at 9:09 AM Steffen Kaiser <
skdove...@smail.inf.fh-brs.de> wrote:

>
> On Fri, 14 Oct 2016, Benedikt Carda wrote:
>
> > I am running into this error:
> > Maximum number of connections from user+IP exceeded
> > (mail_max_userip_connections=10)
> >
> > The suggested solution in hundreds of support requests on this mailing
> > list and throughout the internet is to increase the number of maximum
> > userip connections. But this is not curing the problem, it is just
> > postponing it to the moment when the new limit is reached.
> >
> > When I type:
> > doveadm who
> >
> > I can see that some accounts have several pids running:
> > someaccount   10 imap  (25396 25391 25386 25381 25374 7822 7817
> > 5559 5543 5531) (xxx.xxx.xxx.xxx)
> >
> > Now when I check these pids with
> > ps aux
> >
> > I find out that the oldest pid (5531) has a lifetime of already over 12
> > hours. Anyway I know that the clients that initiated the connections are
> > not connected anymore, so there is no way that there is a valid reason
> > why this connection should still be open.
>
> What's the state of the connection ?
>
>
> --
> Steffen Kaiser
>


Re: fix SIS attachment errors

2016-10-13 Thread Webert de Souza Lima
To whom it may interest:

With the help of Aki Tuomi I've found a way to remove such errors and move
forward, in a way that could be automated.
As this might be a problem for others and there seems to be no discussion
about it, I'll share it with you.

What I did, essentially, was to write a shell script that does the
following, per user (see the sketch after this list):

- read all the mailboxes with `doveadm fetch -u $username text all` and
redirect errors to a file
- identify all missing attachments' paths from the error file created
previously and try to recreate each hardlink. Any file with the same hash
(the part before the `-`) is good.
- identify all mailboxes and uids of the messages that are still broken
(the same error file should have this information), fetch those messages,
and save them elsewhere.
- after fetching and saving, expunge such messages.
- use doveadm save to put the messages back. They'll be without the
attachments but also without any errors.
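A minimal sketch of the first two steps, under heavy assumptions (mdbox +
SIS layout as in the logs above; the grep pattern depends on your exact log
format, so test on a copy first):

  user=foo@bar.example   # illustrative
  doveadm fetch -u "$user" text all > /dev/null 2> /tmp/fetch-errors.log
  grep -o 'open([^)]*' /tmp/fetch-errors.log | sed 's/^open(//' | sort -u |
  while read -r path; do
    dir=$(dirname "$path"); hash=$(basename "$path" | cut -d- -f1)
    src=$(ls "$dir/$hash"-* 2>/dev/null | head -n 1)
    [ -n "$src" ] && ln "$src" "$path"   # relink from any file with the same hash
  done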

There are some gotchas in doing the above, and in automating it, so I'll be
happy to help if anyone needs it.

Thank you.

On Wed, Oct 5, 2016 at 3:59 PM Webert de Souza Lima <webert.b...@gmail.com>
wrote:

Hi, I've sent some e-mails about this before, but since there were no
answers I'll write it differently, with different information.

I'm using SIS (Single Instance Attachment Storage).
For reasons that are not relevant now, many attachments are missing and
the messages can't be fetched:

Error:
read(attachments-connector(zlib(/dovecot/mdbox/bar.example/foo/storage/m.1)))
failed:
read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36[base64:19
b/l]) failed:
open(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36)
failed: No such file or directory

In this specific case, the /dovecot/attach/bar.example/23/ae/ directory
doesn't exist.
In other cases, just one file is missing, so I would assume the hardlink
could be recreated and it would work.

If I create the missing file (with touch or whatever), I get the following
errors:
Error:
read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36[base64:19
b/l]) failed: Stream is smaller than expected (0 < 483065)
Error:
read(attachments-connector(zlib(/dovecot/mdbox/bar.example/foo/storage/m.1)))
failed:
read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36[base64:19
b/l]) failed: Stream is smaller than expected (0 < 483065)
Error: fetch(body) failed for box=INBOX uid=15: BUG: Unknown internal error

If I try to fill the file with the number of bytes it complains about,
using the following command:

$ dd if=/dev/zero
of=/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36
bs=1 count=483065

then I get the following error:

Error:
read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36[base64:19
b/l]) failed: Stream is larger than expected (483928 > 483065, eof=0)
Error:
read(attachments-connector(zlib(/srv/dovecot/mdbox/bar.example/foo/storage/m.1)))
failed:
read(//dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f9331bd36[base64:19
b/l]) failed: Stream is larger than expected (483928 > 483065, eof=0)
Error: fetch(body) failed for box=INBOX uid=15: BUG: Unknown internal error

Based on this I have a few questions:
1. Is there a way, or a tool, to scan all mailboxes and find all the
messages that have compromised attachments?

2. Is there a way to "fix" the missing files (even by creating fake files
or removing the attachment information from the messages)?

3. What I need is to migrate these boxes using doveadm backup/sync, which
fails when these errors occur. Is it possible to ignore them, or is there
another tool that would do it?

Thank you.

Webert Lima
Belo Horizonte, Brasil


Feature Request - Director Balance

2017-04-20 Thread Webert de Souza Lima
Hi,

often I run into the situation where a dovecot server goes down for
maintenance and all users get concentrated on the remaining dovecot server
(considering I have only 2 dovecot servers).

When that dovecot server comes back online, director server will send new
users to it, but the dovecot server that was up all the time will still
have tons of clients mapped to it.

I suggest that the director servers always try to balance load between
backends, in this way:

 - if a server has many more connections than the others, mark it for
re-balancing
 - when a user connected to this loaded server disconnects, map him to
another server (by definition a different server) immediately.

That way it would gracefully re-balance without killing existing
connections, just waiting for them to finish.


Thank you for your time.

Webert Lima
MAV Tecnologia
Belo Horizonte, Brasil.


Re: Dovecot not listening when testing connection

2017-04-20 Thread Webert de Souza Lima
You won't get that "* OK [CAPABILITY ...]" greeting by doing telnet on port
993, as this is a secure port and the connection is encrypted.
You either need to use something like openssl or gnutls-cli to test it that
way, or telnet to the plain IMAP port 143 (not encrypted).
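For example (openssl s_client is the usual tool for this; gnutls-cli works
similarly):

$ openssl s_client -connect localhost:993
$ openssl s_client -connect localhost:143 -starttls imap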

On Thu, Apr 20, 2017 at 11:33 AM, Alvaro Lacerda 
wrote:

> Hi,
>
> This is my environment:
>
> SMTP: Exim 4.89 with Mailscanner 5.0.3
>
> IMAP: Dovecot 2.2.10
>
> At the moment I'm just trying to test out my Dovecot to check if it's
> listening on port 993.
>
> netstat -tuln: shows that my machine is listening on ports 143 and 993.
>
> telnet localhost 993: this is my issue, I get the following message:
>
> # telnet localhost 993
> Trying ::1...
> Connected to localhost.
> Escape character is '^]'.
> Connection closed by foreign host.
>
> According to the wiki.dovecot test installation page I should be getting
> this instead:
>
> https://wiki.dovecot.org/TestInstallation
>
>
> Trying 127.0.0.1...
> Connected to localhost.
> Escape character is '^]'.
> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE
> STARTTLS AUTH=PLAIN] Dovecot ready.
>
>
> When I run doveconf this is what I get:
>
> # doveconf protocols listen
> protocols = imap pop3 lmtp
> listen = *, ::
>
> Does anyone have an idea of what I'm missing here? Thanks.
>


FTS Lucene Search by Word Parts

2017-08-16 Thread Webert de Souza Lima
Hello,

as the dovecot documentation covers only the trivial FTS search options, I
can't find information on the advanced ones.

I have enabled it but it seems I can only search for entire words, i.e. the
word Dovecot can be found by searching "dovecot", but not "dove".

Is it possible to achieve this using FTS Lucene?

I know that with Solr this is achievable by configuring n-gram tokenization.
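For reference, in Solr that is typically done with an n-gram filter on the
indexed field type in schema.xml, something like the following (illustrative;
field type names and gram sizes vary per setup):

<filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="10"/>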

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*


Re: Feature Request - Director Balance

2017-04-24 Thread Webert de Souza Lima
Shrinking director_user_expire might be a workaround but is not a real
solution, as the user can also end up mapped to the same server again.
Director flush is both manual and aggressive, so it is not a good solution
either.

The possibility to move users between backends without killing existing
connections is a good solution, yes! It can be scripted. =]
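For reference, the manual building blocks that exist today (doveadm director
status/flush are real commands; note that flush currently kills the moved
users' connections, as Timo mentions below):

$ doveadm director status            # user counts per backend
$ doveadm director flush 10.0.0.2    # re-map users of one backend (IP illustrative)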

What I suggested was more automated, but that can be left for the future. A
command that could be issued manually, like "doveadm director rebalance",
would be great.
Thanks for your feedback.

Att,

Webert de Souza Lima
MAV Tecnologia.


On Fri, Apr 21, 2017 at 4:52 AM, Timo Sirainen <t...@iki.fi> wrote:

> On 20 Apr 2017, at 17.35, Webert de Souza Lima <webert.b...@gmail.com>
> wrote:
> >
> > Hi,
> >
> > often I run into the situation where a dovecot server goes down for
> > maintenance, and all users get concentrated in the remaining dovecot
> server
> > (considering I have 2 dovecot servers only).
> >
> > When that dovecot server comes back online, director server will send new
> > users to it, but the dovecot server that was up all the time will still
> > have tons of clients mapped to it.
> >
> > I suggest the director servers to always try to balance load between
> > servers, in the way:
> >
> > - if a server has many more connections than the others, mark it for
> > re-balancing
> > - when a user connected to this loaded server disconnects, map him to
> > another server (by definition a different server) immediately.
> >
> > that way it would gracefully re-balance, not killing existing
> connections,
> > just waiting for them to finish.
>
> You could effectively do this by shrinking the director_user_expire time.
> But if it's too low, it causes director to be a bit more inefficient when
> assigning users to backends. Also if backends are doing any background work
> (e.g. full text search indexing) director might move the user away too
> early. But setting it to e.g. 5 minutes would likely help a lot.
>
> There's of course also the doveadm director flush, which can be used to
> move users between backends, but that requires killing the connections for
> now. I've some future plans to make it possible to move connections between
> backends without disconnecting the IMAP client.
>


Re: Inconsistency in map index

2017-08-18 Thread Webert de Souza Lima
On Fri, Aug 18, 2017 at 1:30 PM, Timo Sirainen  wrote:

>
> That would work. Also as a different workaround you could just rm
> storage/dovecot.map.index* and doveadm force-resync -u user@domain '*'.



Thank you Timo, that did the trick without the need for a sudden upgrade.
The mailbox was fixed :)


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*



Re: Inconsistency in map index

2017-08-18 Thread Webert de Souza Lima
Oh, so that's likely a bug.
I was thinking it would require manual intervention to fix.

Great, I'll do an upgrade ASAP. Praised be Docker.

Thank you very much.


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*

On Fri, Aug 18, 2017 at 9:03 AM, Aki Tuomi <aki.tu...@dovecot.fi> wrote:

>
>
> On 18.08.2017 14:55, Webert de Souza Lima wrote:
> > Hello,
> >
> > The following errors are constantly popping up for 2 accounts, and I
> > can't get them fixed.
> > I did a doveadm backup to another account; the same happens in the new
> > account.
> > I did a doveadm force-resync; the problem persists.
> >
> > I'm using dovecot 2.2.
> >
> > 2017-08-18T11:46:12.472821881Z Aug 18 11:46:12 lmtp(
> ramon.lace...@alliar.com):
> > Warning: mdbox /srv/dovecot2/mail/alliar.com/ramon.lacerda/storage:
> > Inconsistency in map index (647,6288 != 647,28333584)
> >
> > 2017-08-18T11:46:12.651002372Z Aug 18 11:46:12 lmtp(
> ramon.lace...@alliar.com):
> > Warning: mdbox /srv/dovecot2/mail/alliar.com/ramon.lacerda/storage:
> > Inconsistency in map index (647,6288 != 647,28333708)
> >
> > 2017-08-18T11:46:12.651059432Z Aug 18 11:46:12 lmtp(
> ramon.lace...@alliar.com):
> > Warning: fscking index file /srv/dovecot2/index/
> > alliar.com/ramon.lacerda/storage/dovecot.map.index
> >
> > 2017-08-18T11:46:12.764926940Z Aug 18 11:46:12 lmtp(
> ramon.lace...@alliar.com):
> > Warning: mdbox /srv/dovecot2/mail/alliar.com/ramon.lacerda/storage:
> > rebuilding indexes
> >
> >
> > Regards,
> >
> > Webert Lima
> > DevOps Engineer at MAV Tecnologia
> > *Belo Horizonte - Brasil*
>
> Hi!
>
> This is fixed in next release (2.2.32) with
> https://github.com/dovecot/core/commit/c8be394
>
> Aki Tuomi
>


Re: Inconsistency in map index

2017-08-18 Thread Webert de Souza Lima
On Fri, Aug 18, 2017 at 9:03 AM, Aki Tuomi  wrote:

> This is fixed in next release (2.2.32) with
> https://github.com/dovecot/core/commit/c8be394
>
> Aki Tuomi
>

As this is still a release candidate, I'm thinking of running an isolated
instance of this version and doing a doveadm force-resync just to fix this
user's mailbox.
What do you think?

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*


Inconsistency in map index

2017-08-18 Thread Webert de Souza Lima
Hello,

The following errors are constantly popping up for 2 accounts, and I can't
get them fixed.
I did a doveadm backup to another account; the same happens in the new
account.
I did a doveadm force-resync; the problem persists.

I'm using dovecot 2.2.

2017-08-18T11:46:12.472821881Z Aug 18 11:46:12 lmtp(ramon.lace...@alliar.com):
Warning: mdbox /srv/dovecot2/mail/alliar.com/ramon.lacerda/storage:
Inconsistency in map index (647,6288 != 647,28333584)

2017-08-18T11:46:12.651002372Z Aug 18 11:46:12 lmtp(ramon.lace...@alliar.com):
Warning: mdbox /srv/dovecot2/mail/alliar.com/ramon.lacerda/storage:
Inconsistency in map index (647,6288 != 647,28333708)

2017-08-18T11:46:12.651059432Z Aug 18 11:46:12 lmtp(ramon.lace...@alliar.com):
Warning: fscking index file /srv/dovecot2/index/
alliar.com/ramon.lacerda/storage/dovecot.map.index

2017-08-18T11:46:12.764926940Z Aug 18 11:46:12 lmtp(ramon.lace...@alliar.com):
Warning: mdbox /srv/dovecot2/mail/alliar.com/ramon.lacerda/storage:
rebuilding indexes


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*


Re: Recommended tool for migrating IMAP servers

2017-12-04 Thread Webert de Souza Lima
On Mon, Dec 4, 2017 at 9:16 AM, Sami Ketola  wrote:

>
> With every other tool you will face end users needing to  invalidate their
> local caches and
> redownloading all headers if not also all mail bodies.
>
> Sami
>
>
I don't think so. I've been using imapsync for large-scale migrations from
external servers to our dovecot setup. Users don't even notice when the
switch is made (DNS changes).
Go for it.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*



Re: Recommended tool for migrating IMAP servers

2017-12-05 Thread Webert de Souza Lima
On Tue, Dec 5, 2017 at 4:16 AM, Sami Ketola <sami.ket...@dovecot.fi> wrote:

> On Mon, Dec 4, 2017 at 9:16 AM, Sami Ketola <sami.ket...@dovecot.fi>
>  wrote:
>
>>
>> With every other tool you will face end users needing to  invalidate
>> their local caches and
>> redownloading all headers if not also all mail bodies.
>>
>> Sami
>>
>> > On 4 Dec 2017, at 19.59, Webert de Souza Lima <webert.b...@gmail.com>
> wrote:
> >
> > I don't think so. Been using imapsync for large scale migrations from
> > external servers to our dovecot setup. Users don't even see it when the
> key
> > is switched (DNS changes).
> > Go for it.
>
> You are wrong. There is no way to assign IMAP UID:s over IMAP protocol. It
> simply does not support it.
> With imapsync there is absolutely no way to preserve them and you will
> face problems with IMAP UID:s
> not matching the cached mail anymore.
>
> Trust us. We have run multiple migrations at scale of 10+ million users.
>
> Sami


Sorry, I might indeed be wrong about cache invalidation. What I'm sure of is
that users will hardly notice any server change.
We've never had user complaints about mailboxes resyncing or anything like
that after imapsync'ing to a new server; that's why I'd recommend using
imapsync with no worries.

It's easy enough to test this on a single user account first and see how the
IMAP client behaves after the DNS change.
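Such a test run looks roughly like this (hosts and credentials illustrative;
--host1/--user1/--host2/--user2 are standard imapsync options):

$ imapsync --host1 old.example.com --user1 foo@bar.example --password1 '...' \
           --host2 new.example.com --user2 foo@bar.example --password2 '...'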


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*


Re: recover missing messages - files still present in storage

2017-12-07 Thread Webert de Souza Lima
On Thu, Dec 7, 2017 at 3:05 PM, Aki Tuomi  wrote:

> Have you attempted doveadm force-resync -u SUPPRESSED_VICTIM "*"?
>
>
Hello Aki,

yes I did that, but I didn't remove the map files first (I don't know if
that's required).
I can do it again if needed.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*


recover missing messages - files still present in storage

2017-12-07 Thread Webert de Souza Lima
Hi,

I have a user account that had almost 20GB of emails, and now they're
missing.
Only a few are available through IMAP or doveadm.

I can see in /path/to/mailbox/storage that thousands of "m." files are
still there, summing up to 19GB of files.

doveconf -n http://termbin.com/7lgc

I have tried:
accessing via IMAP
accessing via doveadm search/fetch/mailbox status
doveadm index
doveadm force-resync

doveadm dump:

-- INDEX:
/srv/dovecot2/index/DOMAIN_SUPRESSED/ACCOUNT_SUPRESSED/storage/dovecot.map.index
version .. = 7.3
base header size . = 120
header size .. = 176
record size .. = 20
compat flags . = 1
index id . = 1500105889 (2017-07-15 05:04:49)
flags  = 0
uid validity . = 1500105889 (2017-07-15 05:04:49)
next uid . = 26514
messages count ... = 26513
seen messages count .. = 0
deleted messages count ... = 0
first recent uid . = 1
first unseen uid lowwater  = 1
first deleted uid lowwater = 26446
log file seq . = 47
log file tail offset . = 15656
log file head offset . = 15656
log2 rotate time . = 1512558922 (2017-12-06 09:15:22)
last temp file scan .. = 0 (1969-12-31 21:00:00)
day stamp  = 1512604800 (2017-12-06 22:00:00)
day first uid[0] . = 26450
day first uid[1] . = 26395
day first uid[2] . = 26328
day first uid[3] . = 26252
day first uid[4] . = 26234
day first uid[5] . = 26216
day first uid[6] . = 26135
day first uid[7] . = 26055
-- Extension 0 --
name  = map
hdr_size  = 8
reset_id  = 0
record_offset = 8
record_size . = 12
record_align  = 4
header  = 61110100
-- Extension 1 --
name  = ref
hdr_size  = 0
reset_id  = 0
record_offset = 6
record_size . = 2
record_align  = 2
-- Keywords --

-- CACHE:
/srv/dovecot2/index/DOMAIN_SUPRESSED/ACCOUNT_SUPRESSED/storage/dovecot.map.index.cache
cache is unusable


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*


Re: recover missing messages - files still present in storage

2017-12-11 Thread Webert de Souza Lima
doveadm force-resync worked after removing the dovecot.map.index files.
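For the archives, the fix was essentially Timo's suggestion from the other
thread (paths as in the doveadm dump above; back up the index files before
removing them):

$ rm /srv/dovecot2/index/DOMAIN_SUPRESSED/ACCOUNT_SUPRESSED/storage/dovecot.map.index*
$ doveadm force-resync -u ACCOUNT_SUPRESSED '*'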


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


Re: Do we really need Solr commit as cronjob?

2017-12-06 Thread Webert de Souza Lima
I'm interested in knowing this too.


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*

On Thu, Nov 30, 2017 at 10:07 PM, Gao  wrote:

> I am testing Solr FTS with dovecot. I read online that some suggest
> running a commit cronjob every minute, and an optimize once a day.
>
> I am using Solr 7.1.0 and I see some configurations:
> In /etc/default/solr.in.sh:
> #SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=3000"
> #SOLR_OPTS="$SOLR_OPTS -Dsolr.autoCommit.maxTime=6"
>
> Also in solrconfig.xml:
> <autoCommit>
>   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
>
> <autoSoftCommit>
>   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
> </autoSoftCommit>
>
> So my question is: do I still need to run a cronjob for commits?
>
> Do I need to uncomment those lines in solr.in.sh? Does my solrconfig.xml
> override the settings in solr.in.sh?
>
> Thanks for help.
>
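(For context: the cronjob usually referenced is just an HTTP commit request
against the Solr core, along these lines, with URL and core name
illustrative:

* * * * * curl -s 'http://localhost:8983/solr/dovecot/update?commit=true' > /dev/null
0 3 * * * curl -s 'http://localhost:8983/solr/dovecot/update?optimize=true' > /dev/null

With autoCommit/autoSoftCommit enabled in solrconfig.xml, an explicit commit
cronjob should be redundant.)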


Re: install dovecot 2.2.35 debian jessie

2018-04-27 Thread Webert de Souza Lima
Hey Aki Tuomi, how are you doing?

I have tried many ways of getting 2.2.35 pre-built and installed via
'apt-get install' in Debian Jessie and Stretch using the official repos.
The reason I prefer to install pre-built packages instead of compiling is
that I run dovecot in Docker containers, so it's a lot easier and more
automated to just apt-get install it.

I was using 2.2.31 devel in Debian Jessie and wanted to move to stable, but
the repos will only install 2.2.27 or 2.2.36 alpha.

2.2.34 is the default stable version using the stretch + stretch-backports
repos; it seems OK to me. I have upgraded all my instances.

I will take the opportunity to ask you a question: would you recommend
updating the director instances too? They run in separate containers. I
haven't upgraded them, and it all seems to be working just fine in
production.


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*

On Fri, Apr 27, 2018 at 3:10 AM, Aki Tuomi <aki.tu...@dovecot.fi> wrote:

> Hi!
>
> 2.2.35 is not unstable, but apparently it is in debian distribution, which
> is called 'unstable'.
>
> Aki
>
> On 27.04.2018 06:16, Webert de Souza Lima wrote:
>
> Got 2.2.34 running using debian strech image + strech-backports repos!
>
>
> Regards,
>
> Webert Lima
> DevOps Engineer at MAV Tecnologia
> *Belo Horizonte - Brasil*
> *IRC NICK - WebertRLZ*
>
> On Thu, Apr 26, 2018 at 9:37 PM, Webert de Souza Lima <
> webert.b...@gmail.com> wrote:
>
>> Oh thank you Cedric, I hadn't checked that. So 2.2.35 is unstable, huh?
>> I'll deploy 2.2.34 instead.
>>
>> Thank you!
>>
>>
>> Regards,
>>
>> Webert Lima
>> DevOps Engineer at MAV Tecnologia
>> *Belo Horizonte - Brasil*
>> *IRC NICK - WebertRLZ*
>>
>> On Thu, Apr 26, 2018 at 7:51 PM, Cedric M <cedric.mali...@gmail.com>
>> wrote:
>>
>>> Hi,
>>> did you check in unstable ?
>>> https://tracker.debian.org/pkg/dovecot
>>>
>>> 2018-04-26 16:43 GMT-04:00 Webert de Souza Lima <webert.b...@gmail.com>:
>>>
>>>> hmm I think I should use stretch instead of jessie, OR I should use a
>>>> stretch-backports repo, right?
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Webert Lima
>>>> DevOps Engineer at MAV Tecnologia
>>>> *Belo Horizonte - Brasil*
>>>> *IRC NICK - WebertRLZ*
>>>>
>>>> On Thu, Apr 26, 2018 at 5:39 PM, Webert de Souza Lima <
>>>> webert.b...@gmail.com> wrote:
>>>>
>>>>> Hi, I can't figure out how to install the latest stable dovecot
>>>>> version 2.2.35 in Debian Jessie.
>>>>>
>>>>> If I follow this guide <https://repo.dovecot.org/>, it ends up
>>>>> installing 2.3
>>>>> If I follow this guide
>>>>> <https://wiki.dovecot.org/PrebuiltBinaries#preview>, it ends up
>>>>> installing either 2.2.13 if I use "stable" or 2.2.36 alpha if I use 
>>>>> "jessie"
>>>>>
>>>>> I see that 2.2.35 seems to be missing here too:
>>>>> http://xi.dovecot.fi/debian/pool/jessie-auto/dovecot-2.2/
>>>>>
>>>>> Thanks.
>>>>>
>>>>> Regards,
>>>>>
>>>>> Webert Lima
>>>>> DevOps Engineer at MAV Tecnologia
>>>>> *Belo Horizonte - Brasil*
>>>>> *IRC NICK - WebertRLZ*
>>>>>
>>>>
>>>>
>>>
>>
>
>


dovecot + cephfs - sdbox vs mdbox

2018-05-16 Thread Webert de Souza Lima
I'm sending this message to both dovecot and ceph-users ML so please don't
mind if something seems too obvious for you.

Hi,

I have a question for both dovecot and ceph lists and below I'll explain
what's going on.

Regarding dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox), when
using sdbox, a new file is stored for each email message.
When using mdbox, multiple messages are appended to a single file until it
reaches/passes the rotate limit.

I would like to understand better how the mdbox format impacts IO
performance.
I think it's generally expected that fewer, larger files translate to less
IO and more throughput compared to many small files, but how does dovecot
handle that with mdbox?
If dovecot flushes data to storage each time a new email arrives and is
appended to the corresponding file, would that mean it generates the same
amount of IO as it would with one file per message?
Also, when using mdbox, many messages will be appended to a given file
before a new file is created. That should mean that a file descriptor is
kept open for some time by the dovecot process.
Using cephfs as the backend, how would this impact cluster performance
regarding MDS caps and cached inodes when files from thousands of users are
opened and appended all over?
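(For concreteness, the mdbox behavior in question is governed by the rotate
settings; a hedged example, values illustrative:

mail_location = mdbox:~/mdbox
mdbox_rotate_size = 10M        # append to the current m.* file until it passes this size
mdbox_rotate_interval = 0      # no time-based rotation
)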

I would like to understand this better.

Why?
We are a small business e-mail hosting provider with bare-metal, self-hosted
systems, using dovecot for servicing mailboxes and cephfs for email storage.

We are currently working on a dovecot and storage redesign, to be in
production ASAP. The main objective is to serve more users with better
performance, high availability and scalability.
* high availability and load balancing are extremely important to us *

On our current model, we're using mdbox format with dovecot, having
dovecot's INDEXes stored in a replicated pool of SSDs, and messages stored
in a replicated pool of HDDs (under a Cache Tier with a pool of SSDs).
All using cephfs / filestore backend.

Currently there are 3 clusters running dovecot 2.2.34 and ceph Jewel
(10.2.9-4).
 - ~25K users from a few thousands of domains per cluster
 - ~25TB of email data per cluster
 - ~70GB of dovecot INDEX [meta]data per cluster
 - ~100MB of cephfs metadata per cluster

Our goal is to build a single ceph cluster for storage that could expand in
capacity, be highly available and perform well enough. I know, that's what
everyone wants.

Cephfs is an important choice because:
 - there can be multiple mountpoints, thus multiple dovecot instances on
different hosts
 - the same storage backend is used for all dovecot instances
 - no need of sharding domains
 - dovecot is easily load balanced (with director sticking users to the
same dovecot backend)

On the upcoming upgrade we intend to:
 - upgrade ceph to 12.X (Luminous)
 - drop the SSD Cache Tier (because it's deprecated)
 - use bluestore engine

I was told on freenode/#dovecot that there are many cases where sdbox would
perform better with NFS sharing.
In the case of cephfs, at first I wouldn't think that would be true, because
more files == more generated IO; but given what I said in the beginning
regarding sdbox vs mdbox, that could be wrong.

Any thoughts will be highly appreciated.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


Re: [ceph-users] dovecot + cephfs - sdbox vs mdbox

2018-05-16 Thread Webert de Souza Lima
Thanks Jack.

That's good to know. It is definitely something to consider.
In a distributed storage scenario we might build a dedicated pool for that
and tune the pool as more capacity or performance is needed.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


On Wed, May 16, 2018 at 4:45 PM Jack <c...@jack.fr.eu.org> wrote:

> On 05/16/2018 09:35 PM, Webert de Souza Lima wrote:
> > We'll soon do benchmarks of sdbox vs mdbox over cephfs with bluestore
> > backend.
> > We'll have to do some work on how to simulate user traffic, for writes
> > and reads. That seems troublesome.
> I would appreciate seeing these results !
>
> > Thanks for the plugin recommendations. I'll take the chance and ask you:
> > how is the SIS status? We used it in the past and had some problems with
> > it.
>
> I am using it since Dec 2016 with mdbox, with no issue at all (I am
> currently using Dovecot 2.2.27-3 from Debian Stretch)
> The only config I use is mail_attachment_dir, the rest lies as default
> (mail_attachment_min_size = 128k, mail_attachment_fs = sis posix,
> ail_attachment_hash = %{sha1})
> The backend storage is a local filesystem, and there is only one Dovecot
> instance
>
> >
> > Regards,
> >
> > Webert Lima
> > DevOps Engineer at MAV Tecnologia
> > *Belo Horizonte - Brasil*
> > *IRC NICK - WebertRLZ*
> >
> >
> > On Wed, May 16, 2018 at 4:19 PM Jack <c...@jack.fr.eu.org> wrote:
> >
> >> Hi,
> >>
> >> Many (most ?) filesystems does not store multiple files on the same
> block
> >>
> >> Thus, with sdbox, every single mail (you know, that kind of mail with 10
> >> lines in it) will eat an inode, and a block (4k here)
> >> mdbox is more compact on this way
> >>
> >> Another difference: sdbox removes the message, mdbox does not : a single
> >> metadata update is performed, which may be packed with others if many
> >> files are deleted at once
> >>
> >> That said, I do not have experience with dovecot + cephfs, nor have made
> >> tests for sdbox vs mdbox
> >>
> >> However, and this is a bit out of topic, I recommend you look at the
> >> following dovecot's features (if not already done), as they are awesome
> >> and will help you a lot:
> >> - Compression (classic, https://wiki.dovecot.org/Plugins/Zlib)
> >> - Single-Instance-Storage (aka sis, aka "attachment deduplication" :
> >> https://www.dovecot.org/list/dovecot/2013-December/094276.html)
> >>
> >> Regards,
> >> On 05/16/2018 08:37 PM, Webert de Souza Lima wrote:
> >>> I'm sending this message to both dovecot and ceph-users ML so please
> >> don't
> >>> mind if something seems too obvious for you.
> >>>
> >>> Hi,
> >>>
> >>> I have a question for both dovecot and ceph lists and below I'll
> explain
> >>> what's going on.
> >>>
> >>> Regarding dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox),
> >> when
> >>> using sdbox, a new file is stored for each email message.
> >>> When using mdbox, multiple messages are appended to a single file until
> >> it
> >>> reaches/passes the rotate limit.
> >>>
> >>> I would like to understand better how the mdbox format impacts on IO
> >>> performance.
> >>> I think it's generally expected that fewer larger file translate to
> less
> >> IO
> >>> and more troughput when compared to more small files, but how does
> >> dovecot
> >>> handle that with mdbox?
> >>> If dovecot does flush data to storage upon each and every new email is
> >>> arrived and appended to the corresponding file, would that mean that it
> >>> generate the same ammount of IO as it would do with one file per
> message?
> >>> Also, if using mdbox many messages will be appended to a said file
> >> before a
> >>> new file is created. That should mean that a file descriptor is kept
> open
> >>> for sometime by dovecot process.
> >>> Using cephfs as backend, how would this impact cluster performance
> >>> regarding MDS caps and inodes cached when files from thousands of users
> >> are
> >>> opened and appended all over?
> >>>
> >>> I would like to understand this better.
> >>>
> >>> Why?
> >>> We are a small Business Email Hosting provider with bare

Re: [ceph-users] dovecot + cephfs - sdbox vs mdbox

2018-05-16 Thread Webert de Souza Lima
Hello Jack,

yes, I imagine I'll have to do some work on tuning the block size on
cephfs. Thanks for the advice.
I knew that with mdbox messages are not removed, but I thought that was true
for sdbox too. Thanks again.

We'll soon do benchmarks of sdbox vs mdbox over cephfs with bluestore
backend.
We'll have to do some work on how to simulate user traffic, for writes and
reads. That seems troublesome.

Thanks for the plugin recommendations. I'll take the chance and ask you:
how is the SIS status? We used it in the past and had some problems with
it.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


On Wed, May 16, 2018 at 4:19 PM Jack <c...@jack.fr.eu.org> wrote:

> Hi,
>
> Many (most ?) filesystems does not store multiple files on the same block
>
> Thus, with sdbox, every single mail (you know, that kind of mail with 10
> lines in it) will eat an inode, and a block (4k here)
> mdbox is more compact on this way
>
> Another difference: sdbox removes the message, mdbox does not : a single
> metadata update is performed, which may be packed with others if many
> files are deleted at once
>
> That said, I do not have experience with dovecot + cephfs, nor have made
> tests for sdbox vs mdbox
>
> However, and this is a bit out of topic, I recommend you look at the
> following dovecot's features (if not already done), as they are awesome
> and will help you a lot:
> - Compression (classic, https://wiki.dovecot.org/Plugins/Zlib)
> - Single-Instance-Storage (aka sis, aka "attachment deduplication" :
> https://www.dovecot.org/list/dovecot/2013-December/094276.html)
>
> Regards,
> On 05/16/2018 08:37 PM, Webert de Souza Lima wrote:
> > I'm sending this message to both dovecot and ceph-users ML so please
> don't
> > mind if something seems too obvious for you.
> >
> > Hi,
> >
> > I have a question for both dovecot and ceph lists and below I'll explain
> > what's going on.
> >
> > Regarding dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox),
> when
> > using sdbox, a new file is stored for each email message.
> > When using mdbox, multiple messages are appended to a single file until
> it
> > reaches/passes the rotate limit.
> >
> > I would like to understand better how the mdbox format impacts IO
> > performance.
> > I think it's generally expected that fewer larger files translate to
> > less IO and more throughput when compared to many small files, but
> > how does dovecot handle that with mdbox?
> > If dovecot flushes data to storage every time a new email arrives and
> > is appended to the corresponding file, would that mean it generates
> > the same amount of IO as it would with one file per message?
> > Also, if using mdbox, many messages will be appended to a given file
> > before a new one is created. That should mean a file descriptor is
> > kept open for some time by the dovecot process.
> > Using cephfs as a backend, how would this impact cluster performance
> > regarding MDS caps and cached inodes when files from thousands of
> > users are opened and appended to all over?
> >
> > I would like to understand this better.
> >
> > Why?
> > We are a small Business Email Hosting provider with bare metal,
> > self-hosted systems, using dovecot for servicing mailboxes and cephfs
> > for email storage.
> >
> > We are currently working on dovecot and storage redesign to be in
> > production ASAP. The main objective is to serve more users with better
> > performance, high availability and scalability.
> > * high availability and load balancing are extremely important to us *
> >
> > In our current model, we're using the mdbox format with dovecot,
> > having dovecot's INDEXes stored in a replicated pool of SSDs, and
> > messages stored in a replicated pool of HDDs (under a Cache Tier with
> > a pool of SSDs). All using cephfs with the filestore backend.
> >
> > Currently there are 3 clusters running dovecot 2.2.34 and ceph Jewel
> > (10.2.9-4).
> >  - ~25K users from a few thousand domains per cluster
> >  - ~25TB of email data per cluster
> >  - ~70GB of dovecot INDEX [meta]data per cluster
> >  - ~100MB of cephfs metadata per cluster
> >
> > Our goal is to build a single ceph cluster for storage that could
> > expand in capacity, be highly available and perform well enough. I
> > know, that's what everyone wants.
> >
> > Cephfs is an important choice because:
> >  - there can be multiple mountpoints, thus multiple dovecot instances on
> > di

Re: [ceph-users] dovecot + cephfs - sdbox vs mdbox

2018-05-16 Thread Webert de Souza Lima
Hello Danny,

I actually saw that thread and I was very excited about it. Thank you all
for that idea and all the effort being put into it.
I haven't yet tried to play around with your plugin, but I intend to, and
to contribute back. I think when it's ready for production it will be
unbeatable.

I have watched your talk at Cephalocon (on YouTube). I'll go through your
slides; maybe they'll give me more insight into our infrastructure
architecture.

As you can see, our business is still taking baby steps compared to
Deutsche Telekom's, but we have faced infrastructure challenges every day
since the beginning. For now, I think cephfs could still fit our needs,
but we definitely need some improvements.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


On Wed, May 16, 2018 at 4:42 PM Danny Al-Gaaf <danny.al-g...@bisect.de>
wrote:

> Hi,
>
> some time back we had similar discussions when we, as an email provider,
> discussed moving away from traditional NAS/NFS storage to Ceph.
>
> The problem with POSIX file systems and dovecot is that e.g. with mdbox
> only around ~20% of the IO operations are READ/WRITE, the rest are
> metadata IOs. You will not change this by using CephFS, since it will
> basically behave the same way as e.g. NFS.
>
> We decided to develop librmb to store emails as objects directly in
> RADOS instead of CephFS. The project is still under development, so you
> should not use it in production, but you can try it to run a POC.
>
> For more information check out my slides from Ceph Day London 2018:
> https://dalgaaf.github.io/cephday-london2018-emailstorage/#/cover-page
>
> The project can be found on github:
> https://github.com/ceph-dovecot/
>
> -Danny
>
> Am 16.05.2018 um 20:37 schrieb Webert de Souza Lima:
> > I'm sending this message to both the dovecot and ceph-users MLs, so
> > please don't mind if something seems too obvious to you.
> >
> > Hi,
> >
> > I have a question for both dovecot and ceph lists and below I'll explain
> > what's going on.
> >
> > Regarding the dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox),
> > when using sdbox, a new file is stored for each email message.
> > When using mdbox, multiple messages are appended to a single file
> > until it reaches/passes the rotate limit.
> >
> > I would like to understand better how the mdbox format impacts IO
> > performance.
> > I think it's generally expected that fewer larger files translate to
> > less IO and more throughput when compared to many small files, but
> > how does dovecot handle that with mdbox?
> > If dovecot flushes data to storage every time a new email arrives and
> > is appended to the corresponding file, would that mean it generates
> > the same amount of IO as it would with one file per message?
> > Also, if using mdbox, many messages will be appended to a given file
> > before a new one is created. That should mean a file descriptor is
> > kept open for some time by the dovecot process.
> > Using cephfs as a backend, how would this impact cluster performance
> > regarding MDS caps and cached inodes when files from thousands of
> > users are opened and appended to all over?
> >
> > I would like to understand this better.
> >
> > Why?
> > We are a small Business Email Hosting provider with bare metal,
> > self-hosted systems, using dovecot for servicing mailboxes and cephfs
> > for email storage.
> >
> > We are currently working on dovecot and storage redesign to be in
> > production ASAP. The main objective is to serve more users with better
> > performance, high availability and scalability.
> > * high availability and load balancing are extremely important to us *
> >
> > In our current model, we're using the mdbox format with dovecot,
> > having dovecot's INDEXes stored in a replicated pool of SSDs, and
> > messages stored in a replicated pool of HDDs (under a Cache Tier with
> > a pool of SSDs). All using cephfs with the filestore backend.
> >
> > Currently there are 3 clusters running dovecot 2.2.34 and ceph Jewel
> > (10.2.9-4).
> >  - ~25K users from a few thousand domains per cluster
> >  - ~25TB of email data per cluster
> >  - ~70GB of dovecot INDEX [meta]data per cluster
> >  - ~100MB of cephfs metadata per cluster
> >
> > Our goal is to build a single ceph cluster for storage that could
> > expand in capacity, be highly available and perform well enough. I
> > know, that's what everyone wants.
> >
> > Cephfs is an important choice because:
> >  - there can be multiple mountpoints, thus 

Re: install dovecot 2.2.35 debian jessie

2018-04-26 Thread Webert de Souza Lima
Got 2.2.34 running using a debian stretch image + stretch-backports repos!
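
For anyone hitting the same wall, the backports route is roughly this
(a sketch assuming a standard stretch install; the exact package list
may differ for your setup):

    echo "deb http://deb.debian.org/debian stretch-backports main" \
        > /etc/apt/sources.list.d/backports.list
    apt-get update
    apt-get install -t stretch-backports dovecot-core dovecot-imapd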


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*

On Thu, Apr 26, 2018 at 9:37 PM, Webert de Souza Lima <webert.b...@gmail.com
> wrote:

> Oh thank you Cedric, I hadn't checked that. So 2.2.35 is unstable, huh?
> I'll deploy 2.2.34 instead.
>
> Thank you!
>
>
> Regards,
>
> Webert Lima
> DevOps Engineer at MAV Tecnologia
> *Belo Horizonte - Brasil*
> *IRC NICK - WebertRLZ*
>
> On Thu, Apr 26, 2018 at 7:51 PM, Cedric M <cedric.mali...@gmail.com>
> wrote:
>
>> Hi,
>> did you check in unstable?
>> https://tracker.debian.org/pkg/dovecot
>>
>> 2018-04-26 16:43 GMT-04:00 Webert de Souza Lima <webert.b...@gmail.com>:
>>
>>> hmm, I think I should use stretch instead of jessie, OR I should use a
>>> stretch-backports repo, right?
>>>
>>>
>>> Regards,
>>>
>>> Webert Lima
>>> DevOps Engineer at MAV Tecnologia
>>> *Belo Horizonte - Brasil*
>>> *IRC NICK - WebertRLZ*
>>>
>>> On Thu, Apr 26, 2018 at 5:39 PM, Webert de Souza Lima <
>>> webert.b...@gmail.com> wrote:
>>>
>>>> Hi, I can't figure out how to install the latest stable dovecot
>>>> version 2.2.35 in Debian Jessie
>>>>
>>>> If I follow this guide <https://repo.dovecot.org/>, it ends up
>>>> installing 2.3
>>>> If I follow this guide
>>>> <https://wiki.dovecot.org/PrebuiltBinaries#preview>, it ends up
>>>> installing either 2.2.13 if I use "stable" or 2.2.36 alpha if I use 
>>>> "jessie"
>>>>
>>>> I see that 2.2.35 seems to be missing here too:
>>>> http://xi.dovecot.fi/debian/pool/jessie-auto/dovecot-2.2/
>>>>
>>>> Thanks.
>>>>
>>>> Regards,
>>>>
>>>> Webert Lima
>>>> DevOps Engineer at MAV Tecnologia
>>>> *Belo Horizonte - Brasil*
>>>> *IRC NICK - WebertRLZ*
>>>>
>>>
>>>
>>
>


Re: install dovecot 2.2.35 debian jessie

2018-04-26 Thread Webert de Souza Lima
Oh thank you Cedric, I hadn't checked that. So 2.2.35 is unstable, huh?
I'll deploy 2.2.34 instead.

Thank you!


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*

On Thu, Apr 26, 2018 at 7:51 PM, Cedric M <cedric.mali...@gmail.com> wrote:

> Hi,
> did you check in unstable?
> https://tracker.debian.org/pkg/dovecot
>
> 2018-04-26 16:43 GMT-04:00 Webert de Souza Lima <webert.b...@gmail.com>:
>
>> hmm, I think I should use stretch instead of jessie, OR I should use a
>> stretch-backports repo, right?
>>
>>
>> Regards,
>>
>> Webert Lima
>> DevOps Engineer at MAV Tecnologia
>> *Belo Horizonte - Brasil*
>> *IRC NICK - WebertRLZ*
>>
>> On Thu, Apr 26, 2018 at 5:39 PM, Webert de Souza Lima <
>> webert.b...@gmail.com> wrote:
>>
>>> Hi, I can't figure out how to install the latest stable dovecot
>>> version 2.2.35 in Debian Jessie
>>>
>>> If I follow this guide <https://repo.dovecot.org/>, it ends up
>>> installing 2.3
>>> If I follow this guide
>>> <https://wiki.dovecot.org/PrebuiltBinaries#preview>, it ends up
>>> installing either 2.2.13 if I use "stable" or 2.2.36 alpha if I use "jessie"
>>>
>>> I see that 2.2.35 seems to be missing here too:
>>> http://xi.dovecot.fi/debian/pool/jessie-auto/dovecot-2.2/
>>>
>>> Thanks.
>>>
>>> Regards,
>>>
>>> Webert Lima
>>> DevOps Engineer at MAV Tecnologia
>>> *Belo Horizonte - Brasil*
>>> *IRC NICK - WebertRLZ*
>>>
>>
>>
>


Re: Error: Corrupted dbox file

2018-01-19 Thread Webert de Souza Lima
Hello Florent,

How did you proceed with the upgrade? Did you follow the recommended
upgrade order for ceph (mons first, then OSDs, then MDS)?
Did you stop dovecot before upgrading the MDS in particular? Did you
remount the filesystem? Did you upgrade the ceph client too?

Give people the complete picture and someone might be able to help you.
Ask on the ceph-users list too.
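
For reference, on a systemd-based deployment the restart order sketch is
roughly the following (hedged; check the ceph release notes for the exact
steps between your versions):

    systemctl restart ceph-mon.target   # one monitor at a time, wait for quorum
    systemctl restart ceph-osd.target   # then each OSD host, wait for HEALTH_OK
    systemctl restart ceph-mds.target   # MDS last, ideally with dovecot stopped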



Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*

On Thu, Jan 18, 2018 at 8:41 AM, Florent B  wrote:

> Hi list,
>
> I'm sorry to come back with my problem. I'm pretty sure it's not
> dovecot-related, but can someone help me gather some debug information
> for the Ceph developers?
>
> For example, how are writes handled in Dovecot, and what kind of
> corruption is it, according to the error messages?
>
>
> On 13/12/2017 16:19, Florent B wrote:
> > Hi,
> >
> > I use Dovecot (last released version), 2 backends and 1 director, each
> > user account handled by a single assigned backend.
> >
> > I use CephFS filesystem for messages (FUSE client).
> >
> > Since the Ceph upgrade from Kraken to Luminous, I get a lot of
> > "Error: Corrupted dbox file" errors on a single (large) mail account.
> >
> > I know the problem seems to come from Ceph, but maybe someone here can
> > help me diagnose the situation.
> >
> > The error is exactly : EOF reading msg header (got 0/30 bytes)
> >
> > Dovecot backends are configured like this :
> >
> > mmap_disable = yes
> > mail_fsync = optimized
> > mail_nfs_storage = no
> > mail_nfs_index = no
> > mdbox_rotate_size = 12M
> >
> > CephFS supports file locking, and the two backends never write to the
> > same file because each user is assigned to one backend.
> >
> > Have you ever seen this problem with other filesystems, maybe?
> >
> > It seems the problem disappeared when I disabled the page cache for the FUSE mount.
> >
> > Thank you for your help.
> >
> >
>
>


Re: install dovecot 2.2.35 debian jessie

2018-04-26 Thread Webert de Souza Lima
hmm, I think I should use stretch instead of jessie, OR I should use a
stretch-backports repo, right?


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*

On Thu, Apr 26, 2018 at 5:39 PM, Webert de Souza Lima <webert.b...@gmail.com
> wrote:

> Hi, I can't figure out how to install the latest stable dovecot version
> 2.2.35 in Debian Jessie
>
> If I follow this guide <https://repo.dovecot.org/>, it ends up installing
> 2.3
> If I follow this guide <https://wiki.dovecot.org/PrebuiltBinaries#preview>,
> it ends up installing either 2.2.13 if I use "stable" or 2.2.36 alpha if I
> use "jessie"
>
> I see that 2.2.35 seems to be missing here too:
> http://xi.dovecot.fi/debian/pool/jessie-auto/dovecot-2.2/
>
> Thanks.
>
> Regards,
>
> Webert Lima
> DevOps Engineer at MAV Tecnologia
> *Belo Horizonte - Brasil*
> *IRC NICK - WebertRLZ*
>


install dovecot 2.2.35 debian jessie

2018-04-26 Thread Webert de Souza Lima
Hi, I can't figure out how to install the latest stable dovecot version
2.2.35 in Debian Jessie

If I follow this guide <https://repo.dovecot.org/>, it ends up installing
2.3
If I follow this guide <https://wiki.dovecot.org/PrebuiltBinaries#preview>,
it ends up installing either 2.2.13 if I use "stable" or 2.2.36 alpha if I
use "jessie"

I see that 2.2.35 seems to be missing here too:
http://xi.dovecot.fi/debian/pool/jessie-auto/dovecot-2.2/

Thanks.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


Re: [ceph-users] dovecot + cephfs - sdbox vs mdbox

2018-10-04 Thread Webert de Souza Lima
Hi, bringing this up again to ask one more question:

What would be the recommended locking strategy for dovecot on cephfs?
This is a load-balanced setup using independent director instances, but
all dovecot instances on each node share the same storage system (cephfs).
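
For context, the shared-storage locking knobs from earlier in this thread
look like this (a sketch of my current direction, not a confirmed answer):

    mmap_disable = yes        # don't mmap index files on shared storage
    mail_fsync = optimized
    lock_method = fcntl       # cephfs supports fcntl/POSIX locks
    mail_nfs_storage = no     # cephfs caching is coherent, unlike NFS
    mail_nfs_index = no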

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


On Wed, May 16, 2018 at 5:15 PM Webert de Souza Lima 
wrote:

> Thanks Jack.
>
> That's good to know. It is definitely something to consider.
> In a distributed storage scenario we might build a dedicated pool for that
> and tune the pool as more capacity or performance is needed.
>
> Regards,
>
> Webert Lima
> DevOps Engineer at MAV Tecnologia
> *Belo Horizonte - Brasil*
> *IRC NICK - WebertRLZ*
>
>
> On Wed, May 16, 2018 at 4:45 PM Jack  wrote:
>
>> On 05/16/2018 09:35 PM, Webert de Souza Lima wrote:
>> > We'll soon benchmark sdbox vs mdbox over cephfs with a bluestore
>> > backend.
>> > We'll have to do some work on simulating user traffic, for both
>> > writes and reads. That seems troublesome.
>> I would appreciate seeing these results!
>>
>> > Thanks for the plugin recommendations. I'll take the chance and ask
>> > you: what is the status of SIS? We have used it in the past and had
>> > some problems with it.
>>
>> I have been using it since Dec 2016 with mdbox, with no issues at all
>> (I am currently running Dovecot 2.2.27-3 from Debian Stretch).
>> The only config I set is mail_attachment_dir; the rest stays at the
>> defaults (mail_attachment_min_size = 128k, mail_attachment_fs = sis
>> posix, mail_attachment_hash = %{sha1})
>> The backend storage is a local filesystem, and there is only one Dovecot
>> instance
>>
>> >
>> > Regards,
>> >
>> > Webert Lima
>> > DevOps Engineer at MAV Tecnologia
>> > *Belo Horizonte - Brasil*
>> > *IRC NICK - WebertRLZ*
>> >
>> >
>> > On Wed, May 16, 2018 at 4:19 PM Jack  wrote:
>> >
>> >> Hi,
>> >>
>> >> Many (most?) filesystems do not store multiple files on the same
>> >> block
>> >>
>> >> Thus, with sdbox, every single mail (you know, that kind of mail
>> >> with 10 lines in it) will eat an inode, and a block (4k here)
>> >> mdbox is more compact in this way
>> >>
>> >> Another difference: sdbox removes the message, mdbox does not: a
>> >> single metadata update is performed, which may be packed with
>> >> others if many files are deleted at once
>> >>
>> >> That said, I do not have experience with dovecot + cephfs, nor
>> >> have I made tests for sdbox vs mdbox
>> >>
>> >> However, and this is a bit off topic, I recommend you look at the
>> >> following dovecot features (if not already done), as they are awesome
>> >> and will help you a lot:
>> >> - Compression (classic, https://wiki.dovecot.org/Plugins/Zlib)
>> >> - Single-Instance-Storage (aka sis, aka "attachment deduplication" :
>> >> https://www.dovecot.org/list/dovecot/2013-December/094276.html)
>> >>
>> >> Regards,
>> >> On 05/16/2018 08:37 PM, Webert de Souza Lima wrote:
>> >>> I'm sending this message to both the dovecot and ceph-users MLs,
>> >>> so please don't mind if something seems too obvious to you.
>> >>>
>> >>> Hi,
>> >>>
>> >>> I have a question for both the dovecot and ceph lists, and below
>> >>> I'll explain what's going on.
>> >>>
>> >>> Regarding the dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox),
>> >>> when using sdbox, a new file is stored for each email message.
>> >>> When using mdbox, multiple messages are appended to a single file
>> >>> until it reaches/passes the rotate limit.
>> >>>
>> >>> I would like to understand better how the mdbox format impacts IO
>> >>> performance.
>> >>> I think it's generally expected that fewer larger files translate
>> >>> to less IO and more throughput when compared to many small files,
>> >>> but how does dovecot handle that with mdbox?
>> >>> If dovecot flushes data to storage every time a new email arrives
>> >>> and is appended