Re: User-configurable time-based mail deletion in specific folders

2024-02-25 Thread Steven Varco

> Am 25.02.2024 um 16:58 schrieb William Edwards via dovecot 
> :
> 
>> 
>> Op 25 feb 2024 om 16:51 heeft Steven Varco  het 
>> volgende geschreven:
>> 
>> 
>>>> Am 25.02.2024 um 09:38 schrieb Rupert Gallagher via dovecot 
>>>> :
>>>> 
>>>> 
>>>> Things like this should be done locally on the Mailclient (MUA), IMHO.
>>> 
>>> If you are a company, then you must delete old e-mails automatically, by 
>>> GDPR
>>> law.
>> 
>> In this case, it comes back to this being better done by an external script.
>> 
>> First, dovecot is a global product, and not every company has to care 
>> about European nonsense laws. :P
> 
> Ouch. GDPR is objectively not nonsense. 

I forgot to write that this is my (humble) opinion.
Personally, if I write an email to anyone, I don’t care what happens with that 
mail or how long it is stored. If I cared, I would not write that email.

OT: Beyond that, I also hate clicking on „accept cookies“ on every f*cking web 
page I visit. It is something I could just as well set in my browser’s 
cookie-saving policy, if I cared.

Steven



Re: User-configurable time-based mail deletion in specific folders

2024-02-25 Thread Steven Varco

> Am 25.02.2024 um 09:38 schrieb Rupert Gallagher via dovecot 
> :
> 
> 
>> Things like this should be done locally on the Mailclient (MUA), IMHO.
> 
> If you are a company, then you must delete old e-mails automatically, by GDPR
> law.

In this case, it comes back to this being better done by an external script.

First, dovecot is a global product, and not every company has to care about 
European nonsense laws. :P
Second, I would not want dovecot to become a „full-size all-in-one solution for 
everything“ (like MS Exchange). I like the concept of doing one thing only, but 
doing it well.
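
As an illustration, such an external script can be as small as a cron job 
calling doveadm expunge; a sketch, assuming Trash as the folder and 30 days as 
the retention period:

# crontab entry: purge mails older than 30 days from every user's Trash folder
0 3 * * * doveadm expunge -A mailbox Trash savedbefore 30d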

Steven


> 
> 
> ---- Original Message ----
> On Feb 21, 2024, 23:25, Steven Varco < dovecot@bbs.varco.ch> wrote:
> 
>> Am 21.02.2024 um 21:25 schrieb Peter Reinhold :
>> 
>> Hi
>> I have been wondering about if Dovecot has a feature that would allow users to
>> setup a rule for a given folder, that mails older than X days should be
>> deleted?
>> Or is this something that would need to be done by an external script?
> 
> Yes. It goes beyond of what I expect from an IMAP server.
> 
>> I have looked a bit at autoexpunge, and while the basic feature looks to be
>> what I need, it doesn't seem to be configurable down to a specific folder on a
>> single user.
> 
> Things like this should be done locally on the Mailclient (MUA), IMHO.
> 
> Steven
> 
> -- 
> https://steven.varco.ch/ 
> https://www.tech-island.com/
> 



Re: User-configurable time-based mail deletion in specific folders

2024-02-21 Thread Steven Varco


> Am 21.02.2024 um 21:25 schrieb Peter Reinhold :
> 
> Hi
> I have been wondering about if Dovecot has a feature that would allow users to
> setup a rule for a given folder, that mails older than X days should be
> deleted?

> Or is
> this something that would need to be done by an external script?

Yes. It goes beyond what I expect from an IMAP server.


> I have looked a bit at autoexpunge, and while the basic feature looks to be
> what I need, it doesn't seem to be configurable down to a specific folder on a
> single user.

Things like this should be done locally on the Mailclient (MUA), IMHO.
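
That said, for completeness: autoexpunge can at least be set per folder in the 
global configuration (though not per user without userdb overrides); a minimal 
sketch, assuming a 30-day retention for Trash:

namespace inbox {
  mailbox Trash {
    special_use = \Trash
    autoexpunge = 30d
  }
}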

Steven

-- 
https://steven.varco.ch/ 
https://www.tech-island.com/ 



Re: submission_add_received_header option?

2024-02-04 Thread Steven Varco
Hi Ellie

> Am 02.02.2024 um 23:02 schrieb Ellie McNeill :
> 
> Sorry, I seem to have missed that. I hope this isn't a silly question, but 
> I'm wondering what the difference between 'regular' Dovecot and the 'CE'/3.0 
> edition is? I can't seem to find much information on this. Why are there 
> different versions?

CE/3.0 is the „commercial/enterprise“ version of dovecot, where you need to pay 
money for a support contract, whereas 2.3 is the „community“ version, which is 
available for free. These versions are basically the same in most aspects, but 
some features are available in the paid enterprise version only.

> PS: I'm aware of the arguments regarding privacy/RFCs with regards to mail 
> headers. This is just a small mail server for personal use though.

In this case it matters even less. „Hiding“ vital parts of a technology for 
imagined „security“ is never a good/right option.

Steven

-- 
https://steven.varco.ch/ 
https://www.tech-island.com/ 


Re: Please do not remove replication

2024-01-24 Thread Steven Varco
Although I’m also a very happy dovecot replication user, I don’t think this 
decision will be reverted, sadly.

However, instead of messing with NFS, I will try setting up a three-node 
GlusterFS cluster to provide redundant storage to dovecot as the mail store and 
hope it performs well enough… Does anyone else run such a setup (or 
alternatively with Ceph) in production?

Steven

-- 
https://steven.varco.ch/ 
https://www.tech-island.com/ 


> Am 24.01.2024 um 23:33 schrieb Gerben Wierda :
> 
> Respectfully, I would like to ask: please do not remove replication, please
> rethink this.
> 
> Currently, replication is my life saver. I run two postfix/dovecot combos (on
> different operating systems), with dovecot synchronising via replication. Both
> are behind a HAProxy running on the router (OPNsense), one as active, one as
> backup.
> 
> If one of the two fails, the other takes over, and when it comes up again
> everything works fine and is up to date. I have had these kinds of system
> failures (very hard to find and turned to be hardware related) and it was the
> replication that made me survive the issues (even when I was far away from my
> systems). Mail for my small group of users (about 8) never went down, no mail
> message was ever lost, no manual interventions to sync were ever needed.
> 
> If I want to create the same level of availability without replication, I need
> those two dovecots to use shared (NFS cluster) storage. But then, I have
> another single point of failure (NFS storage) again. So, I need two separate
> NFS machines that synchronise, Apart from the nightmare of making NFS secure,
> it means that I need to double my hardware (from two systems to four) to be
> protected against hardware failure (which is my goal).
> 
> The replication service is the perfect small scale solution. Together with
> HAProxy, it enables HA in the most simple and effective way. Going the 'NFS
> cluster' route is not feasible for me, so if replication is removed and I am
> forced to upgrade, I will lose HA.
> 
> So please, take small scale users like me into account.
> 
> Gerben Wierda (LinkedIn, Mastodon)
> R_IT_Strategy (main site)
> Book: Chess_and_the_Art_of_Enterprise Architecture
> Book: Mastering_ArchiMate
> YouTube_Channel
> 
> On 16 Jul 2023, at 18:54, Aki Tuomi via dovecot 
> wrote:
> 
> Hi!
> 
> Yes, director and replicator are removed, and won't be available for
> pro users either.
> 
> For NFS setups (or similar shared setups), we have documented a way
> to use Lua to run a director-like setup, see
> 
> https://doc.dovecot.org/3.0/configuration_manual/howto/
> director_with_lua/
> 
> Regards to replication, doveadm sync is not being removed. So you can
> still run doveadm sync on your system to have a primary / backup
> setup.
> 
> Aki
> 
>  On 16/07/2023 18:34 EEST William Edwards via dovecot
>   wrote:
> 
> 
>  Top posting because nothing specific to reply to, sorry.
>  Not exactly sure, but there’s another thread about the
>  removal of Director in favour of Dovecot Pro on 3.x.
>  Perhaps this change is related.
> 
>  William Edwards
> 
>   Op 16 jul. 2023 om 16:33 heeft Daniele
>het volgende geschreven:
> 
>   Hello,
> 
>   Just like Vladimir, I'm a bit concerned about
>   this change, and I'd really appreciate if someone
>   could let us know if the replication feature
>   (that works so well!) will be replaced or
>   removed; and, in case of removal, what would be
>   recommended replacement?
>   Thanks in advance and best regards,
>   Daniele
> 
>On 09-Jul-23 9:36 PM, Vladimir Mishonov
>via dovecot wrote:
>Hello everyone.
> 
>Just saw this commit in the official
>Github repo:
> 
>https://github.com/dovecot/core/commit/
>4c04e4c30fd4817a8b0e11d04d9681173f696f41#diff-
>
> 5f643d8b0d1eea65d0f3c749d14d42b25a9d60f0f149bface862f5ff348412c8
> 
> 
>Looking at the commit details, it
>appears that it completely removes the
>replication feature. I'm a bit
>perplexed by this change and am not
>sure what might be the justification
>for it. Personally, I find replication
>to be very useful, as it allows me to
>maintain a synchronized mirror of all
>of my mailboxes on my home server, for
>use as backup in case the primary
>server goes down for some reason.
> 
>Perhaps there's some sort of
>replacement being planned for this
>feature? Or 

Re: [EXT] Replication going away?

2023-11-19 Thread Steven Varco
Does anyone already have a dovecot (CE with Maildir) setup running on shared 
storage (e.g. GlusterFS) underneath?

This is my current „migration plan“ for dovecot not supporting replication 
anymore:

2x load balancers (across two sites) with keepalived and haproxy
3x GlusterFS nodes
3x Galera cluster MariaDB nodes
2x dovecot IMAP with Postfix SMTP (without director there’s no need anymore to 
use the dovecot SMTP submission implementation)

The load balancer, Galera, and Gluster nodes are shared with the existing web 
service infrastructure, so they are not dedicated to dovecot.

So basically, this way I will not have to deploy a single additional node, even 
without director/replication.

If the HA storage and database clusters were dedicated to mail, this would make 
11 nodes for a fully HA cluster across two sites. Even with a virtualization 
infrastructure, that should get very expensive.

Or am I missing something?

Steven

-- 
https://steven.varco.ch/ 
https://www.tech-island.com/ 


> Am 18.11.2023 um 18:10 schrieb Dean Carpenter :
> 
> On 2023-07-20 5:31 pm, deano-dove...@areyes.com wrote:
> On 2023-07-19 4:08 pm, Gerald Galster wrote:
> A 50-100 mailbox user server
> will run Dovecot CE just
> fine. Pro would be overkill.
>What is overkill? I always thought it
>had a bit more features and support.
>   For Pro 2.3, you need (at minimum) 7 Dovecot
>   nodes + HA authentication + HA storage +
>   (minimum) 3 Cassandra nodes if using object
>   storage. This is per site; most of our customers
>   require data center redundancy as well, so
>   multiply as needed. And this is only email
>   retrieval; this doesn't even begin to touch upon
>   email transfer. Email high availability isn't
>   cheap. (I would argue that if you truly need this
>   sort of carrier-grade HA for 50 users, it makes
>   much more sense to use email as-a-service than
>   trying to do it yourself these days. Unless you
>   have very specific reasons and a ton of cash.)
>  High availability currently is cheap with a small two
>  server setup:
>  You need 3 servers or virtual machines: dovecot (and maybe
>  postfix) running on two of them and mysql galera on all
>  three.
>  This provides very affordable active/active geo-redundancy.
> 
>  No offence, it's just a pity to see that feature
>  disappering.
> That's exactly how my own 3-node personal setup works.  I shove all I
> can into mariadb with galera (dovecot auth, spamassassin, etc) across
> the 3 nodes.  Dovecot replication keeps the 2 dovecot instances in
> sync, the 3rd node is the quorum node for galera.
> This is is on 3 cheap VPS' in 3 locations around the US.  Mesh VPN
> between them for the encrypted connectivity.  It works, and it works
> well.
> And now replication is going away ?  A perfectly-well working feature
> is being removed ??  It's not as if it's a problematic one, nor would
> it interfere with anything if it remained ...
> I only see a couple of routes forward, at least for me. 
> * Stay on the last dovecot release that supports replication. 
> * Switch away from dovecot and cobble something else together. 
> * Move to gmail 
> The removal of replication feels very arbitrary.
> At what version is replication being removed ?  2.4 I think ?  Or, perhaps the
> question is which is the last version that WILL still have replication ?
> For now I'll be going with option 1 above, staying with the last version to
> support it.  Sad.




Re: Replication going away?

2023-07-18 Thread Steven Varco

> While I understand it takes effort to maintain the replication plugin, this 
> is especially problematic for small active/active high-availability 
> deployments.
> I guess there are lots of servers that use replication for just 50 or 100 
> mailboxes. Cloudstorage (like S3) would be overkill for these.
> 
> Do you provide dovecot pro subscriptions for such small deployments?

I’m running such a setup for 5-10 mailboxes.. :D


-- 
https://steven.varco.ch/ 



Re: Replication going away?

2023-07-17 Thread Steven Varco

> However, I understand some had a better experience with it. I am curious
> if someone will fork dovecot and restore the beloved feature.

With all the recent actions going on, clearly aimed at getting more paying 
„pro“ customers (and nothing else!), the dovecot project is walking on thin ice 
and there is a risk (or chance :) ) that it will get forked, with all the good 
dovecot developers walking over to the fork and dovecot declining into a product 
no one uses anymore.
It has happened before, e.g. Nagios/Icinga, CentOS/Alma-/Rocky Linux and other 
big, commonly used software projects.

For my part, I will also try to stay as long as possible on dovecot 2.x with 
director and replicator, hoping for a fork which includes those features again 
some time in the future.

Steven

-- 
https://steven.varco.ch/ 
https://www.tech-island.com/ 



Re: el9 rpms

2023-06-20 Thread Steven Varco
Sad, I had planned to migrate my dovecot cluster (2x director, 2x IMAP) from 
2.2.36 (CentOS 7) to 2.4 (AlmaLinux 9) when 2.4 is released.

So, has anyone already built a STABLE(!) alternative for director?

@Aki: Is dovecot IMAP 2.4 expected to still work with dovecot director 2.2 (or 
2.3)?
In that case I might just upgrade the IMAP servers and leave the 
director/load balancer servers as they are for some time.

In general: is the upgrade from 2.2 to 2.3 or 2.4 smooth, or are there a lot of 
breaking changes so that the upgrade would have to be planned thoroughly?

thanks,
Steven


> Am 01.06.2023 um 12:32 schrieb Aki Tuomi via dovecot :
> 
> 
>> On 01/06/2023 12:16 EEST Marc  wrote:
>> 
>> 
>>> 
>>> 
 On 01/06/2023 11:37 EEST Marc  wrote:
 
 
 Is there a specific reason why there are (still) no el9 packages?
 https://repo.dovecot.org/ce-2.3.17/centos/
 https://repo.dovecot.org/ce-2.3-latest/rhel/
 
>>> 
>>> Because 2.3 still does not support RHEL9. It will be added for 2.4.
>>> 
>> 
>> Oh shit, I did not expect there was such a huge dependency on the os for 
>> dovecot. I have already removed some old test el8 stuff and upgrading from 
>> el7 to el9. So nothing with el8. Is it really a big problem to get dovecot 
>> 2.3 to work on el9? I have the impression the differences between el8 and 
>> el9 are not so significant. For now the srpms I tried from el8 seem ok to 
>> compile on el9.
>> 
> 
> There are some stuff that we rather fix for 2.4 than 2.3. I am aware of a 
> patch for TLS code in Dovecot that redhat uses.
> 
>> This 2.4 will have this director removed not?
> 
> Yes.
> 
> Aki



Re: What is the current state of High Availability Dovecot ?

2022-04-10 Thread Steven Varco
What do you mean by „High Availability Dovecot“?
Dovecot HA using director and poolmon has been working very well for quite some 
time now.
I recently migrated a single-instance dovecot to an HA setup using 2x 
Dovecot Director and 2x Dovecot IMAP servers.

Obviously, high availability setups are by nature more complicated than 
single-host installations, and there is also no „one in a box“ solution; that 
is, you usually combine multiple components to build an HA setup.

For the basic infrastructure you need four servers, two running dovecot director 
and two running dovecot IMAP.
To ensure IMAP requests always go to one of the two directors you will 
probably want to use something like keepalived/CARP.

Since Dovecot 2.3 there is also an SMTP submission server included; before that, 
you have to set up haproxy on the director/keepalived hosts additionally.

Last, for SMTP you can set up two SMTP servers (usually postfix) on the IMAP 
hosts and control their access with DNS MX records. No need to run special 
software, as DNS/SMTP already has high availability built in.

cheers,
Steven

-- 
https://www.tech-island.com/

> Am 07.04.2022 um 21:45 schrieb White, Daniel E. (GSFC-770.0)[NICS] 
> :
> 
> … without going to too much fuss ?
> 
> Searching the Internet produces a lot of old results and many overly 
> complicated results.
> 
> My only complication is that I am using PostfixAdmin for mailbox management, 
> and all the mailboxes are virtual.
> 
> Thanks.
> 



Re: macOS ManageSieve client?

2022-02-20 Thread Steven Varco
>> How do people use this from their macOS clients? For this, the ManageSieve 
>> protocol exists and this is implemented by dovecot-sieve, but other than 
>> installing roundcube and offering a web-based mail client that also supports 
>> ManageSieve, is there another way? A ManageSieve client that directly runs 
>> on the macOS client and interfaces with dovecot-sieve on the server?


Unfortunately, Apple's mail client (mail.app) has no sieve management features 
included, so I use Roundcube Webmail to manage my sieve rules.

Steven

-- 
https://steven.varco.ch/ 



Re: noob maildir question

2022-01-24 Thread Steven Varco
Hi Mik

> I would like to ask if it is an acceptable practice to manage messages in the 
> maildir as a file (move them from one folder to another) while dovecot is in 
> stop state thinking that it will be rebuild to the next imap user login

Maildir is actually designed for this, as the storage can also live on NFS and 
external programs can bring mail in.

That is, you can move and edit files just like normal on the filesystem (even 
when dovecot is running) and dovecot will rebuild its index accordingly.

However, for "mass actions", like moving a lot of mail files, specific IMAP 
tools (dovecot provides a bunch of them) might be the better choice.
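
If the indexes ever get out of step after such manual changes, they can also be 
rebuilt explicitly; a sketch, with a placeholder username:

# rebuild all mailbox indexes for one user
doveadm force-resync -u user@example.com '*'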

Steven

-- 
https://www.tech-island.com/



Re: Stop used or not

2022-01-21 Thread Steven Varco


> Am 20.01.2022 um 19:13 schrieb mau...@gmx.ch:
> 
> Sieve scripts running fine, but please what would mean the “Stop”, if I have 
> multiple Policy that will run true, I need one stop to finish this per 
> section?
>  
> if header :is ["from","bcc","to"] [“Email-Address-one”, ”Email-Address-Two”, 
> ”and so on”] {
>   fileinto ".\Magic-FolderName";
> stop;
> }


Without „stop;“ at the end, and if you had a second rule like, for example:
if header :is ["from","bcc","to"] ["Email-Address-one", "Email-Address-Two",
"and so on"] {
  fileinto ".\Different-FolderName";
}

the mail would get copied into BOTH folders: „Magic-FolderName“ AND 
„Different-FolderName“.

If you want to STOP further processing of that mail once it gets processed by a 
rule the first time, you use „stop;“ in the rule.
This means that even if a further rule would match the mail, it will not get 
processed by that rule anymore.

Steven

-- 
https://steven.varco.ch/ 

Re: Dovecot Director: Preferred backend server

2021-08-31 Thread Steven Varco
Hi Aki

Thanks for pointing out the tag feature, which sounds really interesting at 
first glance.

However, if I understand the documentation correctly:
> With tags you can use a single director ring to serve multiple backend 
> clusters. Each backend cluster is assigned a tag name, which can be anything 
> you want. By default everything has an empty tag. A passdb lookup can return 
> "director_tag" field containing the wanted tag name. If there aren't any 
> backend servers with the wanted tag, it's treated the same as if there aren't 
> any backend servers available (= wait for 30 secs for a backend and
> then return temporary failure).

As I understand it, this only helps if there are multiple IMAP _clusters_ in 
the doveadm ring.
In my case I have only one cluster (with two IMAP _servers_) and would want 
users to go to a specific server, failing over to the other if that one is 
unavailable.
Now say I have the following scenario:

# Director Server
(DEV) root@lb01 [~] # doveadm director status
mail server ip     tag   vhosts  state  state changed  users
mx01.example.com   mx01  100     up     -              0
mx02.example.com   mx02  100     up     -              1

# IMAP Server
(DEV) root@mx01 [~] # doveadm user 't...@example.com'
field         value
uid           1025
gid           12
home          /srv/mail/example.com/test
mail          maildir:~/Maildir
maildir       example.com/test/
mail_home     /srv/mail/example.com/test
quota_rule    *:storage=20480
sieve_dir     /srv/mail/example.com/test/sieve
director_tag  mx01

Then user 't...@example.com' would go to the backend host mx01.example.com, 
BUT if mx01.example.com goes down, it would probably fail, because user 
't...@example.com' wants tag „mx01“, which is now down and is the only server 
with that tag?
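
For completeness, the director_tag above is returned by my passdb lookup, 
roughly like this (a sketch; the table and column names are assumptions):

# dovecot-sql.conf.ext (sketch)
password_query = SELECT username AS user, password, 'mx01' AS director_tag \
  FROM mailbox WHERE username = '%u'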


By the way, I did a quick live test and it does not even seem to work when both 
hosts are up, failing with this log entry on the director server:
Aug 31 11:11:11 lb01 dovecot: director: Error: director: User t...@example.com 
host lookup failed: Timeout because no hosts - queued for 30 secs (Ring synced 
for 385 secs, hash=1561836376)

Do you see what I’m missing here?
Using dovecot 2.2.36 (1f10bfa63) on both the director and the IMAP backend.

thanks,
Steven

-- 
https://steven.varco.ch/ 

> Am 30.08.2021 um 19:20 schrieb Aki Tuomi :
> 
> 
>> On 30/08/2021 19:09 Steven Varco  wrote:
>> 
>> 
>> Hi All
>> 
>> I have a dovecot cluster with directror and two IMAP Servers behind.
>> 
>> Since they are in geographical different locations I would like to have 
>> users to go to a specific IMAP backend server (if both are up) and only 
>> switch to the other if one goes down (failover).
>> 
>> As to my current knowledge the PassDB extra field „host=„ is not suitable in 
>> this case as it would never route the client to a different backend, even if 
>> the „user specific backend“ would be down.
>> 
>> Is their a way in dovecot to achive this? As far as I searched the 
>> documentation I could not find any information on this so far.
>> 
>> If not, it may also help if I could get certain users to „initially" go to a 
>> specific backend (since director usually routes a client/user to the same 
>> backend server it initially connects) and therefore it would be interesting 
>> to know how dovecot director chooses wether a user goes to server1 or 
>> server2?
>> And if a client already gets to server2, how to bring it „back“ to server1?
>> 
>> thanks in advance,
>> Steven
>> 
>> -- 
>> https://steven.varco.ch/ 
>> https://www.tech-island.com/
> 
> 
> Hi!
> 
> Use dovecot director tag feature. You can match users with tag= to a specific 
> backend@tag.
> 
> Aki



Re: Dovecot Director: Preferred backend server

2021-08-30 Thread Steven Varco
HAProxy is fundamentally different, as it operates on connections only, which is 
not what you usually want for IMAP servers.
Instead, you want to route all connections from the same USER to the same 
server, and for this you need a layer-7 proxy like dovecot director.

The implication with something like HAProxy is that a user may have several 
connections from different devices (desktop mail client, smartphone, tablet, 
etc.), and if these (independent) connections go to separate backend servers, it 
will cause issues.

-- 
https://steven.varco.ch/ 


> Am 30.08.2021 um 18:56 schrieb dove...@ptld.com:
> 
>> I have a dovecot cluster with directror and two IMAP Servers behind.
>> Since they are in geographical different locations I would like to
>> have users to go to a specific IMAP backend server (if both are up)
>> and only switch to the other if one goes down (failover).
>> As to my current knowledge the PassDB extra field „host=„ is not
>> suitable in this case as it would never route the client to a
>> different backend, even if the „user specific backend“ would be down.
>> Is their a way in dovecot to achive this? As far as I searched the
>> documentation I could not find any information on this so far.
>> If not, it may also help if I could get certain users to „initially"
>> go to a specific backend (since director usually routes a client/user
>> to the same backend server it initially connects) and therefore it
>> would be interesting to know how dovecot director chooses wether a
>> user goes to server1 or server2?
>> And if a client already gets to server2, how to bring it „back“ to server1?
> 
> 
> Have you looked into HAProxy? Don't know if it the answer you seek but it 
> allows for sticky connections and does keep alive checking to stop routing to 
> a non-responsive server.
> https://www.haproxy.org/



Dovecot Director: Preferred backend server

2021-08-30 Thread Steven Varco
Hi All

I have a dovecot cluster with director and two IMAP servers behind it.

Since they are in geographically different locations, I would like users to go 
to a specific IMAP backend server (if both are up) and only switch to the other 
if one goes down (failover).

To my current knowledge, the PassDB extra field „host=“ is not suitable in this 
case, as it would never route the client to a different backend, even if the 
„user-specific backend“ were down.

Is there a way in dovecot to achieve this? As far as I have searched the 
documentation, I could not find any information on this so far.

If not, it would also help if I could get certain users to „initially" go to a 
specific backend (since director usually routes a client/user to the same 
backend server it initially connected to), and therefore it would be interesting 
to know how dovecot director chooses whether a user goes to server1 or server2.
And if a client already ended up on server2, how to bring it „back“ to server1?

thanks in advance,
Steven

-- 
https://steven.varco.ch/ 
https://www.tech-island.com/ 



Re: Major upgrade of mail server

2021-07-18 Thread Steven Varco
Hi Shawn

I recently did a similar upgrade project of a dovecot-postfix-amavis mail server 
from CentOS 6 to CentOS 7.

What I did first was create a new test server with the new OS.
Then I copied all the maildirs from the production server to the testing 
server, so I could simulate the upgrade without any interruption for the users.
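
That copy was essentially just an rsync run; a sketch, with placeholder 
hostname and paths:

# copy maildirs from the production host to the test host, preserving
# permissions, numeric ownership and hard links
rsync -aHv --numeric-ids prod01:/srv/mail/ /srv/mail/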

I then configured the new versions (especially dovecot) as closely as possible 
to the old configuration.
=> Never introduce new features as part of an upgrade, as it complicates the 
upgrade and makes it much harder to trace errors.
If you want to use new features of the updated software, plan their introduction 
as a new project after a successful migration. For this you can easily re-use 
the test server you built in the first part to verify they will not break 
anything.

After I was confident that everything would work, I planned a short downtime for 
the mail service overnight, did a full backup and a VM snapshot (very helpful 
if you are running a virtualized mail server!) and just re-installed the new OS, 
upgrading the server in place.
After the installation and initial configuration, I applied the new 
configuration to all mail components and tested each of them to ensure they were 
working properly.

good luck,
Steven

-- 
https://steven.varco.ch/ 
https://www.tech-island.com/ 

> Am 08.07.2021 um 03:15 schrieb Shawn Heisey :
> 
> I have a mail server in AWS that is currently running Ubuntu 18.  Every time 
> I log in, I am reminded that I can upgrade to Ubuntu 20.
> 
> On Ubuntu 18, the dovecot version is 3.3.0-1ubuntu0.3.  On Ubuntu 20, it is 
> 2.3.7.2-1ubuntu3.  Many other packages, probably including the mysql server, 
> would also be upgraded.
> 
> Dovecot and Postfix use a postfixadmin database in mysql for users, and 
> postfix is using dovecot-lda to deliver mail.  I am using managesieve from 
> dovecot on roundcube webmail.  As far as I know, my own user is the only one 
> with sieve scripts actually in use ... and I have a LOT of filters/folders 
> for various mailing lists.
> 
> I've been a little bit terrified of doing an upgrade, because I do have a 
> couple of people using my mail server for real work email and I don't want to 
> disrupt them.
> 
> I'm writing today to find out what are the likely pain points I might 
> encounter when doing this kind of major upgrade, and if there is any helpful 
> information that can help me get through those problems.  I'm hoping that it 
> will go smoothly and everything just works.
> 
> Here's the doveconf -n output:
> https://apaste.info/FUgF
> 
> If I have been silly enough to include sensitive data from the config, I 
> would appreciate a heads up so I know what passwords to change.  I did a 
> quick glance and didn't see anything.
> 
> Thanks,
> Shawn



Re: dsync replication fails with No space left on device / Out of memory

2021-07-07 Thread Steven Varco


> Am 07.07.2021 um 10:34 schrieb @lbutlr :
> 
> On 2021 Jul 05, at 02:00, Steven Varco  wrote:
>> I then increased the filesystem size and all the problems suddenly vanished.
> 
> How large was your tmp before and after the change, out of curiosity?

Before, it was 128 MB, which is admittedly quite low. However, usually no 
component ever reaches this limit, as temporary files are generally quite small, 
so I never had a problem with postfix, amavis, dovecot, or even a LAMP stack, 
and therefore I did not expect this in the first place.

Afterwards, I extended the /tmp volume to a reasonable 1 GB, which should be 
pretty fine for the future. :)
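
Alternatively, dovecot's mail_temp_dir setting could be pointed at a larger 
filesystem instead of /tmp; a minimal sketch, with the path being an assumption:

# dovecot.conf
mail_temp_dir = /srv/mail/tmp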

best regards,
Steven



Re: dsync replication fails with No space left on device / Out of memory

2021-07-05 Thread Steven Varco
> Aki Tuomi aki.tuomi at open-xchange.com 
> Fri Jul 2 09:14:47 EEST 2021
> 
> The disk issue is likely that disk space on mail_temp_dir runs out, which is 
> usually /tmp.


Hi Aki

Many thanks for that hint, it actually led me to the root cause of the 
problem! :)

During the process the /tmp filesystem fills up and then empties again so fast 
that I could not even see it filling when actively monitoring it with the watch 
command. It took only a split second in which I could see /tmp increase somehow 
and immediately decrease again. That’s why I did not notice this in the first 
place.

I then increased the filesystem size and all the problems suddenly vanished. 
Not just the "No space left on device“ error; surprisingly, the „Out of memory“ 
log message is also gone now, so they were somehow connected to each other.

cheers,
Steven

-- 
https://steven.varco.ch/
https://www.tech-island.com/



> Am 02.07.2021 um 07:43 schrieb Jörg Faudin Schulz :
> 
> Hi,
> 
> the memory issue has already been reported, not resolved yet:
> 
> https://www.mail-archive.com/dovecot@dovecot.org/msg83763.html
> 
> 
> the disk-free issue is something different. Increasing memory parameters 
> doesn't help- the sync only crashes later.
> 
> Here, everything seems to be synced fine nevertheless.
> 
> 
> 
> Am 02.07.21 um 02:56 schrieb Harlan Stenn:
>> Inodes?  df -i
>> 
>> On 7/1/2021 5:07 PM, Steven Varco wrote:
>>> Hi All
>>> 
>>> Since I configured dsync replication I get strange errors in the maillog on 
>>> my two mail dovecot nodes:
>>> 
>>> PRIMARY:
>>> Jul  2 01:21:42 mx01.example.com dovecot: doveadm: Error: 
>>> read(mx02.example.com) failed: read(size=3148) failed: Connection reset by 
>>> peer (last sent=mail, last recv=mail (EOL))
>>> 
>>> 
>>> The secondary is more interesting:
>>> 
>>> SECONDARY
>>> Jul  2 01:21:42 mx02 dovecot: doveadm: Error: 
>>> close(-1[istream-seekable.c:237]) failed: No space left on device
>>> Jul  2 01:21:43 mx02 dovecot: doveadm: Fatal: 
>>> pool_system_realloc(268435456): Out of memory
>>> Jul  2 01:21:43 mx02 dovecot: doveadm: Error: Raw backtrace: 
>>> /usr/lib64/dovecot/libdovecot.so.0(+0xa192e) [0x7f2e9be4c92e] -> 
>>> /usr/lib64/dovecot/libdovecot.so.0(+0xa1a0e) [0x7f2e9be4ca0e] -> 
>>> /usr/lib64/dovecot/libdovecot.so.0(i_error+0) [0x7f2e9bddc3d3] -> 
>>> /usr/lib64/dovecot/libdo
>>> Jul  2 01:21:43 mx02 dovecot: doveadm: Fatal: master: service(doveadm): 
>>> child 2876 returned error 83 (Out of memory (service doveadm { 
>>> vsz_limit=256 MB }, you may need to increase it) - set CORE_OUTOFMEM=1 
>>> environment to get core dump)
>>> Jul  2 01:21:51 mx02 dovecot: dsync-local(u...@example.com): Error: Raw 
>>> backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0xa192e) [0x7fd56e17e92e] -> 
>>> /usr/lib64/dovecot/libdovecot.so.0(+0xa1a0e) [0x7fd56e17ea0e] -> 
>>> /usr/lib64/dovecot/libdovecot.so.0(i_error+0) [0x7fd56e10e3d3] -> /us
>>> Jul  2 01:21:51 mx02 dovecot: dsync-local(u...@example.com): Fatal: master: 
>>> service(doveadm): child 2882 returned error 83 (Out of memory (service 
>>> doveadm { vsz_limit=256 MB }, you may need to increase it) - set 
>>> CORE_OUTOFMEM=1 environment to get core dump)
>>> 
>>> 
>>> The error messages state that disk space and/or memory is a problem, but 
>>> disk space and memory is enough available:
>>> 
>>> mx02 [~] # df -h /srv/mail/
>>> Filesystem   Size  Used Avail Use% Mounted on
>>> /dev/mapper/system-mail   10G  5.7G  4.3G  58% /srv/mail
>>> 
>>> mx02 [~] # free -m
>>>   totalusedfree  shared  buff/cache   
>>> available
>>> Mem:   378916021088 1991097
>>> 1759
>>> Swap:   471  93 378
>>> 
>>> 
>>> I also tried to increase vsz_limit from 256 MB to 512 MB, which did not 
>>> help.
>>> 
>>> 
>>> And for the sake of completness also the connection to the doveadm port 
>>> works well from both nodes:
>>> 
>>> mx01-prod [~] # telnet mx02 14310
>>> Trying 172.20.19.225...
>>> Connected to mx02.
>>> Escape character is '^]'.
>>> ^]
>>> 
>>> 
>>> mx02 [~] # telnet mx01 14310
>>> Trying 172.20.19.251...
>>> Connected to mx01.
>>> Escape character is '^]'.
>>> ^]
>>> 
>>> 

dsync replication fails with No space left on device / Out of memory

2021-07-01 Thread Steven Varco
Hi All

Since I configured dsync replication, I get strange errors in the maillog on my 
two dovecot mail nodes:

PRIMARY:
Jul  2 01:21:42 mx01.example.com dovecot: doveadm: Error: 
read(mx02.example.com) failed: read(size=3148) failed: Connection reset by peer 
(last sent=mail, last recv=mail (EOL))


The secondary is more interesting:

SECONDARY
Jul  2 01:21:42 mx02 dovecot: doveadm: Error: close(-1[istream-seekable.c:237]) 
failed: No space left on device
Jul  2 01:21:43 mx02 dovecot: doveadm: Fatal: pool_system_realloc(268435456): 
Out of memory
Jul  2 01:21:43 mx02 dovecot: doveadm: Error: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0(+0xa192e) [0x7f2e9be4c92e] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xa1a0e) [0x7f2e9be4ca0e] -> 
/usr/lib64/dovecot/libdovecot.so.0(i_error+0) [0x7f2e9bddc3d3] -> 
/usr/lib64/dovecot/libdo
Jul  2 01:21:43 mx02 dovecot: doveadm: Fatal: master: service(doveadm): child 
2876 returned error 83 (Out of memory (service doveadm { vsz_limit=256 MB }, 
you may need to increase it) - set CORE_OUTOFMEM=1 environment to get core dump)
Jul  2 01:21:51 mx02 dovecot: dsync-local(u...@example.com): Error: Raw 
backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0xa192e) [0x7fd56e17e92e] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xa1a0e) [0x7fd56e17ea0e] -> 
/usr/lib64/dovecot/libdovecot.so.0(i_error+0) [0x7fd56e10e3d3] -> /us
Jul  2 01:21:51 mx02 dovecot: dsync-local(u...@example.com): Fatal: master: 
service(doveadm): child 2882 returned error 83 (Out of memory (service doveadm 
{ vsz_limit=256 MB }, you may need to increase it) - set CORE_OUTOFMEM=1 
environment to get core dump)


The error messages state that disk space and/or memory is a problem, but disk 
space and memory is enough available:

mx02 [~] # df -h /srv/mail/
Filesystem   Size  Used Avail Use% Mounted on
/dev/mapper/system-mail   10G  5.7G  4.3G  58% /srv/mail

mx02 [~] # free -m
  totalusedfree  shared  buff/cache   available
Mem:   378916021088 19910971759
Swap:   471  93 378


I also tried to increase vsz_limit from 256 MB to 512 MB, which did not help.
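
For reference, that was set roughly like this (a sketch of the service block):

service doveadm {
  vsz_limit = 512M
}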


And for the sake of completeness, the connection to the doveadm port also works 
from both nodes:

mx01-prod [~] # telnet mx02 14310
Trying 172.20.19.225...
Connected to mx02.
Escape character is '^]'.
^]


mx02 [~] # telnet mx01 14310
Trying 172.20.19.251...
Connected to mx01.
Escape character is '^]'.
^]


Although mail replication seems to be working properly and mails are in sync on 
both nodes (as far as I could see), I would like to find the cause of these 
messages, as this definitely does not look normal…

I’m grateful for any help, since I’m quite stuck on this now…

Steven


Here’s my config

# doveconf -n
# 2.2.36 (1f10bfa63): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24 (124e06aa)
# OS: Linux 3.10.0-1160.31.1.el7.x86_64 x86_64 CentOS Linux release 7.9.2009 
(Core)
# Hostname: mx01.example.com
auth_mechanisms = plain login
auth_verbose = yes
dict {
  sqlquota = mysql:/etc/dovecot/dict-sqlquota.conf.ext
}
doveadm_password =  # hidden, use -P to show it
doveadm_port = 14310
first_valid_uid = 1000
mail_plugins = quota notify replication
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date index ihave duplicate 
mime foreverypart extracttext
mbox_write_locks = fcntl
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
  separator = /
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  mail_replica = tcp:mx02.example.com
  quota = maildir:User quota
  quota_exceeded_message = Quota exceeded, please go to 
http://www.example.com/over_quota_help for instructions on how to fix this.
  quota_rule2 = INBOX.Trash:storage=+100M
  quota_status_nouser = DUNNO
  quota_status_overquota = 552 5.2.2 Mailbox is full / Mailbox ist voll
  quota_status_success = DUNNO
  quota_warning = storage=90%% quota-warning 90 %u
  quota_warning2 = -storage=90%% quota-warning below %u
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
postmaster_address = postmas...@example.com
protocols = imap pop3 lmtp sieve
replication_dsync_parameters = -d -l 30 -U
service aggregator {
  fifo_listener replication-notify-fifo {
user = vmail
  }
  unix_listener replication-notify {
user = vmail
  }
}
service auth {
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0660
user = 

Re: failed: Cached message size smaller than expected

2021-04-23 Thread Steven Varco
Am 21.04.2021 um 19:31 schrieb Michael Grant :
> 
 We don't really fix issues with mbox files anymore, other than read 
 issues.. Our focus is enabling people to move to other formats, such as 
 maildir. I would strongly recommend you to consider using maildir instead 
 of mbox.
> 
> Ugh, so many people still have their mail in mbox that I find it hard to 
> believe this is "deprecated".

I don’t think that „so many people still have their mail in mbox“, at least not 
in conjunction with IMAP.
mbox may be good for some server-generated messages from cron jobs, etc., but 
for nothing more.


> You'll get a lot of pushback if you do this!  I'm not the only one
> using it.  mailx and the gnu mail tools use it and I don't know if I
> can so easily migrate things to maildir format. I think it could be a
> nightmare migrating to maildir.

mailx is able to use other mail backends like Maildir too.
Just point the environment variable MAIL to your Maildir instead of mbox in 
~/.bash_profile (or system-wide in /etc/profile) and you’re done:
MAIL=/srv/mail/localhost//Maildir/

I’m doing it this way to read mails directly on the server; it works perfectly.

For the rest, I agree with the other posters that mbox should really not be used 
in conjunction with IMAP.
You would also not use SQLite for hosting a dynamic website.

Steven

-- 
https://steven.varco.ch/ 
https://www.tech-island.com/ 

Re: dovecot director and keepalived

2021-03-18 Thread Steven Varco
Hi Sebastiaan

Many thanks for that hint, it works just perfectly after I configured an 
address on the service director inet_listener! :)
Now each instance only binds to its own IP, which is exactly what I was looking 
for.

I think it would be helpful if this could be added as a note to the director 
configuration settings in the docs.

And to answer the questions from others regarding the „haproxy“ I mentioned on 
the server:
haproxy is not directly connected with dovecot at all; instead I use it for 
proxying the SMTP submission port to postfix until I can upgrade dovecot to 2.3 
(which I believe has the SMTP submission service included). And it is also used 
for proxying to other services, like HTTP for webmail, etc.
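
Roughly, that part of the haproxy configuration looks like this (a sketch; 
backend names and addresses are placeholders):

# /etc/haproxy/haproxy.cfg (excerpt)
frontend smtp_submission
    bind :587
    mode tcp
    default_backend postfix_submission

backend postfix_submission
    mode tcp
    server mx01 10.0.2.30:587 check
    server mx02 10.0.2.29:587 check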

Regarding the dovecot setup, I just use keepalived -> dovecot director -> 
dovecot IMAP

Sorry for the confusion, I probably should have mentioned that in the beginning… ;)

thanks,
Steven

-- 
https://steven.varco.ch/ 


> Am 16.03.2021 um 13:39 schrieb Sebastiaan Hoogeveen 
> :
> 
> Hi Steven,
> 
> On 14/03/2021, 17:53, "dovecot on behalf of Steven Varco" 
>  wrote:
> 
>> Now I’m hitting the issue with the way director determines his „Self IP“ by 
>> trying to 
>> bind to all configured director_servers IPs, taking the first one possible.
>> 
>> However this approach only works, when the sysctl setting is: 
>> net.ipv4.ip_nonlocal_bind=0
>> On the other side keepalived needs net.ipv4.ip_nonlocal_bind=1 in order to 
>> bind the VIP.
> 
> This can be fixed by specifying the IP address of the director in the 
> inet_listener section of the director service, like this:
> 
>  service director {
>### other configuration here ###
>inet_listener {
>  address = 172.20.1.4
>  port = 9090
>} 
>  }
> 
> The listener address will be used as the 'self IP' of the director. This also 
> means that each director will have a slightly different configuration file, 
> but that should usually not be a problem. 
> 
> I got this from skimming the source, afaict it is not documented anywhere so 
> I'm not sure if this behavior can always be relied on in future releases (it 
> does seem logical to me though).
> 
> Kind regards,
> 
> --  
> Sebastiaan Hoogeveen
> 
> NederHost
> KvK: 34099781
> 
> 



Re: Mass Stripping Attachments by Directory, Age, Size

2021-03-18 Thread Steven Varco
I would like such a feature too, but instead of deleting the attachment files, 
I would like to „detach“ the files and save them into a separate directory, 
which could be on different storage like a share in the user’s home directory 
or even S3, and then replace the attachment in the mail with a LINK to that file.
Thunderbird does this quite well with its „Detach Attachment“ feature; the MIME 
part looks like this after that:


Content-Type: image/png;
name="funny-picture.png"
Content-Disposition: attachment; filename="funny-picture.png"
X-Mozilla-External-Attachment-URL: 
file:/fileserver/home/svarco/mail/attachments/funny-picture.png
X-Mozilla-Altered: AttachmentDetached; date="Thu Mar 18 09:44:37 2021"

You deleted an attachment from this message. The original MIME headers for the 
attachment were:
Content-Transfer-Encoding: base64
Content-Disposition: inline;
filename=funny-picture.png
Content-Type: image/png;
name="funny-picture.png"


I know that for MS Exchange / Outlook some external archiving solutions exist 
as add-on components, and I am looking for something similar to offload 
attachments with dovecot. :)

Steven

-- 
https://steven.varco.ch/ 

> Am 18.03.2021 um 08:31 schrieb Plutocrat :
> 
> Hi,
> 
> I've been looking around for a solution to this problem. I want to prune down 
> the attachments on a server before a migration. Some of the emails are 7 
> years old and have 40Mb attachments, so this seems like a good opportunity to 
> rationalize things. So perhaps I'd like to "Remove all attachments from 
> emails older than 2 years, in the .Sent directory", or "Attachments over 10Mb 
> anywhere in the mail tree"
> 
> I've found the strip_attachments.pl script here 
>  which 
> works fine on mbox (as tested on my local Thunderbird mboxes), but not on 
> maildir which is on the dovecot server. My Perl isn't strong enough to 
> re-purpose it.
> 
> I've looked at ripmime and mpack/munpack, and although they seem like useful 
> tools to do the job of deconstructing the mail into its constituent parts, it 
> doesn't seem to help in re-building the email. I think they could be used 
> with a bit of study into mail MIME structure, and used with a helper script.
> 
> So before I take a deep dive into scripting my own solution, I just wanted to 
> check if anyone else on the list has been through this and has some resources 
> or pointers they can share, or maybe even someone to tell me "Duh, you can do 
> it with doveadm of course".
> 
> P.



Re: dovecot director and keepalived

2021-03-15 Thread Steven Varco
Hi John

Thanks for you input.

So you basically state that („physically“) separating the director servers from 
the keepalived/haproxy servers is the only option?
I would like to avoid setting up two additional machines for that whenever 
possible, as every additional node in the chain is potentially another point of 
failure… ;)

I’m curious to hear from others how they did their dovecot IMAP HA setup, maybe 
raising some new ideas. :)

BTW: Why was such a simple thing never added to the director code, e.g. to just 
specify which IP is the director server’s own?
Example with a new configuration option „my_director_server“:

both directors:
-
director_servers: 192.168.1.10 192.168.1.20 
-

on director-2:
-
my_director_server: 192.168.1.20 
-

cheers,
Steven

-- 
https://steven.varco.ch/ 


> Am 14.03.2021 um 20:14 schrieb Paterakis E. Ioannis :
> 
> On 14/3/2021 6:52 μ.μ., Steven Varco wrote:
> 
>> Hi All
>> 
>> I’m trying to establish a dovecot HA setup with two loadbalancers, running 
>> keepalived for sharing a virtual public IP.
>> On the same machines I’m running a dovecot director which proxies the 
>> requests to two underlying mail servers (on seperate machines).
>> 
>> Now I’m hitting the issue with the way director determines his „Self IP“ by 
>> trying to bind to all configured director_servers IPs, taking the first one 
>> possible.
> 
> Each Director has to listen only on the static IP address of each machine. 
> Then you have to configure the 2 directors in the HAproxies. The floating ip 
> with keepalived will work along with the 2 HAproxies.
> 
>> However this approach only works, when the sysctl setting is: 
>> net.ipv4.ip_nonlocal_bind=0
>> On the other side keepalived needs net.ipv4.ip_nonlocal_bind=1 in order to 
>> bind the VIP.
> 
> You don't have to mess with these settings.
> 
>> Other possible solutions I could think about:
>> - Configure each director as „independent“ by setting only one IP in 
>> director_servers.
>>   => With this aporach you would loose the user to mailserver mapping, 
>> although only in a a case of a failover on the loadbalancer, which might can 
>> be neglected (or are there any other fallbacks?)
> 
> The two directors have a connection to each other, so both know at the same 
> time where's a user mapped. You don't have to worry about that. The 
> user->dovecot mapping will work without any problems even if there is a 
> failover.
> 
>> - Putting director on seperated intermediate machines and proxing the 
>> requests through haproxy on the keepalived servers (keepalived -> haproxy -> 
>> director -> IMAP
>>=> Besides the disadvantage of having another bunch of servers in the 
>> chain, also some special configuration on the directory servers might be 
>> neccessary to assure director works neatly with haproxy.
> 
> The identical scenario will be to have keepalived along with haproxy on same 
> machine, and directors on another. But can work with all three on the same as 
> well. I use the keepalived, haproxy on two machines, with 2 directors 
> underneath each one on different machine/hardware for the high availability's 
> sake, and below them there are 3 dovecot servers.
> 
>> So 2021, what is the „correct“ (best practive) way of having a reduntant HA 
>> setup for dovecot?
> 
> Cheers :-)
> 
> John
> 
> 



dovecot director and keepalived

2021-03-14 Thread Steven Varco
Hi All

I’m trying to establish a dovecot HA setup with two load balancers running 
keepalived to share a virtual public IP.
On the same machines I’m running a dovecot director which proxies the requests 
to two underlying mail servers (on separate machines).

Now I’m hitting the issue with the way director determines its „self IP“ by 
trying to bind to all configured director_servers IPs, taking the first one 
possible.

However this approach only works, when the sysctl setting is: 
net.ipv4.ip_nonlocal_bind=0
On the other side keepalived needs net.ipv4.ip_nonlocal_bind=1 in order to bind 
the VIP.

The last thread on that dates back to 2016 
(https://dovecot.org/pipermail/dovecot/2016-August/105191.html) with references 
to 2012 (https://www.dovecot.org/list/dovecot/2012-November/087033.html) and no 
solution has been posted so far.

After five more years :D, I’m asking myself if we finally have a solution for 
that, or if my approach of achieving clustered director servers is potentially 
wrong?

Other possible solutions I could think of:
- Configure each director as „independent“ by setting only one IP in 
director_servers.
  => With this approach you would lose the user-to-mailserver mapping, although 
only in case of a failover on the load balancer, which might be negligible (or 
are there other drawbacks?)

- Only have director running on the currently active load balancer node and 
stopped on the passive load balancer node (would possibly have the same effects 
as above).

- Putting director on separate intermediate machines and proxying the requests 
through haproxy on the keepalived servers (keepalived -> haproxy -> director -> 
IMAP).
  => Besides the disadvantage of having another bunch of servers in the chain, 
some special configuration on the director servers might also be necessary to 
ensure director works neatly with haproxy.


So, in 2021, what is the „correct“ (best practice) way of having a redundant HA 
setup for dovecot?

This means a MUA connects to one public IP and gets connected to (preferably 
the same) IMAP server, no matter which machine in the whole chain might be down.
PS: Using just multiple A records on the mail domain name (round-robin), while 
working perfectly for SMTP, is not acceptable for IMAP IMHO, as in case of a 
failure every second request from the client (MUA) would fail and most MUAs do 
not automatically reconnect in that case.

thanks,
Steven

-- 
https://steven.varco.ch/ 



Re: Monitoring Dovecot Replication

2021-02-12 Thread Steven Varco
Hi Andrea

It would be great if you could post that here, as I (and possibly others) would 
also be interested. :)

thanks,
Steven

-- 
https://steven.varco.ch/ 

> Am 12.02.2021 um 15:12 schrieb Andrea Gabellini 
> :
> 
> Hello,
> 
> I wrote a little script for Check_MK (MRPE mode).
> 
> If you want to try it I can send it to you.
> 
> Andrea
> 
> Il 12/02/21 11:47, MK ha scritto:
>> Hello,
>> 
>> I have a cluster with two dovecot nodes with dovecot replication between 
>> them. 
>> The setup works fine and now I'm searching for a way to monitor the users so 
>> that I can get an information if the replication fails for one user for a 
>> longer time and I have to trigger the replication manually. Most of the time 
>> if I see a replication failure the self healing of dovecot replication 
>> repairs this in max. 10 min. 
>> 
>> I have tried different combinations of querying " doveadm replicator status 
>> '*' " and search for failed users and then send an alarm if one of fast 
>> sync, full sync or success sync reaches a threshold. But there is no 
>> combination that seems to be working if I only want to trigger this if I 
>> have to fix the replication manualy. 
>> 
>> Can someone tell me what I have to query to get only the user who's 
>> replication failed for a longer time (10 min +) and that I have to fix 
>> manually?
>> 
>> Thank you.
>> 
>> Oliver
> 
> -- 
> __
> Ama e fa cio' che vuoi
> __
> 
> TIM San Marino S.p.A.
> Andrea Gabellini
> Engineering R
> TIM San Marino S.p.A. - https://www.telecomitalia.sm
> Via Ventotto Luglio, 212 - Piano -2
> 47893 - Borgo Maggiore - Republic of San Marino
> Tel: (+378) 0549 886237
> Fax: (+378) 0549 886188
> 
> 



Re: Where is dovemon

2021-01-17 Thread Steven Varco
Poolmon already exists as a replacement for this purpose.

However, it should be made clearer in the documentation that „Dovemon“ is not 
part of the community package and is therefore not available in the free 
version, IMHO.

-- 
https://steven.varco.ch/ 

> Am 17.01.2021 um 15:37 schrieb Scott Q. :
> 
> You can write something similar and I think others did already if you Google 
> for it. Shouldn't take you more than 1-2h to write something in Perl:
> connect to backend, no login, remove from director
> 
> 
> On Saturday, 16/01/2021 at 22:37 Steven Varco wrote:
> Hi Christian
> 
> This confused me as well.
> I later found out that „Dovemon“ is an exclusive part of the dovecot „pro" 
> package (the paid dovecot).
> This is why you cannot find it.
> 
> best regards,
> Steven
> 
> -- 
> https://steven.varco.ch/ 
> 
> > Am 13.01.2021 um 12:30 schrieb li...@mlserv.org:
> > 
> > Hello,
> > 
> > I found this link in the documentation:
> > 
> > https://doc.dovecot.org/configuration_manual/dovemon/
> > 
> > But where can I find the program "dovemon"? I searched all over whithout 
> > luck. In the source code, Google, nothing. It seems as only the web site 
> > would exist.
> > 
> > Can somebody help me please
> > 
> > Christian Rößner
> > -- 
> > Rößner-Network-Solutions
> > Zertifizierter ITSiBe / CISO
> > Karl-Bröger-Str. 10, 36304 Alsfeld
> > Fax: +49 6631 78823409, Mobil: +49 171 9905345
> > USt-IdNr.: DE225643613, https://roessner.website
> > PGP fingerprint: 658D 1342 B762 F484 2DDF 1E88 38A5 4346 D727 94E5 
> >



Re: Where is dovemon

2021-01-16 Thread Steven Varco
Hi Christian

This confused me as well.
I later found out that „Dovemon“ is an exclusive part of the dovecot „pro" 
package (the paid dovecot).
This is why you cannot find it.

best regards,
Steven

-- 
https://steven.varco.ch/ 

> Am 13.01.2021 um 12:30 schrieb li...@mlserv.org:
> 
> Hello,
> 
> I found this link in the documentation:
> 
> https://doc.dovecot.org/configuration_manual/dovemon/
> 
> But where can I find the program "dovemon"? I searched all over whithout 
> luck. In the source code, Google, nothing. It seems as only the web site 
> would exist.
> 
> Can somebody help me please
> 
> Christian Rößner
> -- 
> Rößner-Network-Solutions
> Zertifizierter ITSiBe / CISO
> Karl-Bröger-Str. 10, 36304 Alsfeld
> Fax: +49 6631 78823409, Mobil: +49 171 9905345
> USt-IdNr.: DE225643613, https://roessner.website
> PGP fingerprint: 658D 1342 B762 F484 2DDF 1E88 38A5 4346 D727 94E5 
> 



Combine director and HAProxy for loadbalancig and failover

2020-12-05 Thread Steven Varco
Hi All

I’m trying to achieve an active/active cluster with dovecot-director and 
HAProxy, as the director does not do health checks (load balancing only) and I 
want both load balancing AND failover, where the latter is far more important to 
me (load balancing I would just use as an add-on and out of curiosity; it is not 
really needed in my setup).

However, I’m not sure if this combination can be used that way, as I found 
almost no documentation on this.

What I found is using either director OR haproxy with dovecot, but not both.
So I guess the description here is intended to be used without the director: 
https://wiki2.dovecot.org/HAProxy 

This older list post is basically exactly what I’m trying to achieve: 
https://dovecot.org/pipermail/dovecot/2015-July/101487.html 
Clients --> Load Balancer(HAProxy) --> Dovecot Proxy(DP) --> Dovecot 
Director(DD) --> MS1 / MS2

As far as I understood, this would require (statically) setting a host= entry 
for each client, which would add another single point of failure?

However, the documentation at 
https://doc.dovecot.org/configuration_manual/haproxy/ assumes that this setup 
IS actually possible.
Unfortunately it does not describe how dovecot must be set up at this point.

So I assume, since HAProxy is listening on port 1143 (for IMAP) and dovecot 
(with director enabled) is listening on port 143, that port 143 is the main 
entry point for clients. So in this setup dovecot-director should first pass the 
mail traffic to HAProxy on port 1143 and HAProxy then passes it to the actual 
IMAP servers?

So what would the dovecot setup look like on the director servers, and what on 
the dovecot IMAP mail servers?

My setup consists of two machines for each role: load balancers (with HAProxy 
and dovecot-director) and dovecot IMAP servers.

Last but not least, I found documentation for „dovemon“, which, if I got that 
correctly, should replace the external tool „poolmon“: 
https://doc.dovecot.org/configuration_manual/dovemon/ - However, I could not 
find out where the configuration YAML file (/etc/dovecot/dovemon.config.yml) 
should get included in the main dovecot configuration. By default this file 
would not be included at all and would therefore have no effect…

thanks.
Steven



My setup:

LOADBALANCERS:
Currently one only running on: 10.0.2.26

haproxy: exactly as in https://doc.dovecot.org/configuration_manual/haproxy/, 
where the backend server lines have been replaced with the mail servers.


dovecot -n
# 2.2.36 (1f10bfa63): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24 (124e06aa)
# OS: Linux 3.10.0-1127.19.1.el7.x86_64 x86_64 CentOS Linux release 7.8.2003 
(Core)
# Hostname: lb01.example.com
auth_verbose = yes
director_mail_servers = 10.0.2.30 10.0.2.29
director_servers = 10.0.2.26
disable_plaintext_auth = no
first_valid_uid = 1000
haproxy_trusted_networks = 10.0.2.0/24
mail_location = maildir:~/Maildir
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date index ihave duplicate 
mime foreverypart extracttext
mbox_write_locks = fcntl
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
}
passdb {
  args = proxy=y ssl=any-cert nopassword=y
  driver = static
}
plugin {
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
protocols = imap pop3 lmtp sieve
service director {
  fifo_listener login/proxy-notify {
mode = 0666
  }
  inet_listener {
port = 9090
  }
  unix_listener director-userdb {
mode = 0600
  }
  unix_listener login/director {
mode = 0666
  }
}
service imap-login {
  executable = imap-login director
  inet_listener imap {
port = 143
  }
  inet_listener imap_haproxy {
haproxy = yes
port = 10143
  }
}
service managesieve-login {
  inet_listener sieve {
port = 4190
  }
}
service pop3-login {
  executable = pop3-login director
}
ssl_cert = http://www.tech-island.xyz/over_quota_help for instructions on how to fix this.
  quota_rule2 = INBOX.Trash:storage=+100M
  quota_status_nouser = DUNNO
  quota_status_overquota = 552 5.2.2 Mailbox is full / Mailbox ist voll
  quota_status_success = DUNNO
  quota_warning = storage=90%% quota-warning 90 %u
  quota_warning2 = -storage=90%% quota-warning below %u
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
postmaster_address = postmas...@tech-island.xyz
protocols = imap pop3 lmtp sieve
replication_dsync_parameters = -d -l 30 -U
service aggregator {
  fifo_listener replication-notify-fifo {
user = vmail
  }
  unix_listener replication-notify {
user = vmail
  }
}
service auth {
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0660
user