Re: Extract body preview in lua notify script

2023-10-20 Thread Teemu Huovila via dovecot
Hello
On 16.10.2023 19.19, Filip Hanes via dovecot wrote:
 Hi,
 I am trying to extract a short text preview from the mail body in the
 dovecot_lua_notify_end_txn function.
 I want to send a notification about new mail with a short preview (perex) included.
 How can this be done? Is it possible?
 thanks
There is a preview available in the "snippet" field of e.g.
dovecot_lua_notify_event_message_new(). Please see https://doc.dovecot.org/
configuration_manual/push_notification/#lua-lua and https://doc.dovecot.org/
admin_manual/imap_preview/ . The whole message body is not available via the
Lua API, but if you absolutely need it, you could call doveadm inside your
script to fetch it. That would make the push notification very heavy to execute,
though, and introduce delays in your LMTP delivery.
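
For illustration, a minimal sketch of how the snippet could be picked up in the
Lua script (function and field names as documented for the Lua push-notification
driver; verify them against your Dovecot version):

-- collect the preview text from each new message and send everything
-- when the transaction ends; the delivery step is only a placeholder
function dovecot_lua_notify_begin_txn(user)
  return {user = user, messages = {}}
end

function dovecot_lua_notify_event_message_new(ctx, event)
  table.insert(ctx.messages, {
    mailbox = event.mailbox,
    from = event.from,
    subject = event.subject,
    preview = event.snippet  -- the short body preview ("perex")
  })
end

function dovecot_lua_notify_end_txn(ctx, success)
  for _, msg in ipairs(ctx.messages) do
    -- deliver msg to your notification service here,
    -- e.g. via dovecot.http.client or an external command
  end
end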
br,
Teemu
 -- Filip Hanes


Re: dovecot 2.4 issue with openssl >= 3.0.8

2023-08-17 Thread Teemu Huovila via dovecot


On 16.8.2023 1.09, Michal Hlavinka wrote:

Hi,

I've started testing the main git branch and found an issue with OpenSSL 3.

Test suite fails:
test_load_v1_key ... : ok
test-crypto.c:445: Assert failed: ret == TRUE
test-crypto.c:446: Assert failed: error == NULL
Panic: file dcrypt-openssl3.c: line 2917 (dcrypt_openssl_public_key_type): 
assertion failed: (key != NULL && key->key != NULL)
Error: Raw backtrace: test-crypto(+0x5af01) [0x55951b667f01] -> 
test-crypto(backtrace_get+0x2f) [0x55951b66877f] -> test-crypto(+0x31291) 
[0x55951b63e291] -> test-crypto(+0x312d5) [0x55951b63e2d5] -> test-crypto(+0x16025) 
[0x55951b623025] -> libdcrypt_openssl.so(+0x70ff) [0x7f5a995b60ff] -> 
test-crypto(+0x2a17b) [0x55951b63717b] -> test-crypto(+0x3055d) [0x55951b63d55d] -> 
test-crypto(test_run+0x6c) [0x55951b63d61c] -> test-crypto(main+0x4e) 
[0x55951b62b66e] -> libc.so.6(+0x27b4a) [0x7f5a995eeb4a] -> 
libc.so.6(__libc_start_main+0x8b) [0x7f5a995eec0b] -> test-crypto(_start+0x25) 
[0x55951b62b7e5]


The issue is caused by a change in OpenSSL >= 3.0.8.

In lib-dcrypt/dcrypt-openssl3.c, the function ec_key_get_pub_point_hex(...):

The old version (for OpenSSL 1.x) uses the compressed format:

EC_POINT_point2hex(g, p, POINT_CONVERSION_COMPRESSED, NULL);

The OpenSSL 3 version uses:

EVP_PKEY_get_octet_string_param(pkey, OSSL_PKEY_PARAM_PUB_KEY, buf, sizeof(buf), &len);


But the key is created in dcrypt_evp_pkey_from_point(...)
with OSSL_PKEY_PARAM_EC_POINT_CONVERSION_FORMAT set to "uncompressed".

This worked in OpenSSL before 3.0.8, because the conversion format was ignored when
dumping the data and the compressed format was always used.

From https://www.openssl.org/docs/man3.0/man7/EVP_KEYMGMT-EC.html

""" Before OpenSSL 3.0.8, the implementation of providers included with OpenSSL 
always opted for an encoding in compressed format, unconditionally. Since OpenSSL 
3.0.8, the implementation has been changed to honor the 
OSSL_PKEY_PARAM_EC_POINT_CONVERSION_FORMAT parameter, if set, or to default to 
uncompressed format."""


I don't see a simple way to force the compressed format just for dumping the data
without affecting the key itself. A function that takes the conversion format as an
argument is EC_POINT_point2hex(), but it requires a fairly lengthy extraction of the
EC_POINT first. I've tried this for testing with:


static const char *ec_key_get_pub_point_hex(const EVP_PKEY *pkey)
{
	if (!EVP_PKEY_is_a(pkey, "EC"))
		return NULL;
	size_t len;

	/* look up the curve name and convert it to a NID */
	EVP_PKEY_get_utf8_string_param(pkey, OSSL_PKEY_PARAM_GROUP_NAME, NULL, 0, &len);
	buffer_t *parambuf = t_buffer_create(len + 1);
	EVP_PKEY_get_utf8_string_param(pkey, OSSL_PKEY_PARAM_GROUP_NAME,
				       (char *)parambuf->data, len + 1, &len);
	int nid = OBJ_txt2nid(parambuf->data);

	/* fetch the raw public key octets */
	EVP_PKEY_get_octet_string_param(pkey, OSSL_PKEY_PARAM_PUB_KEY, NULL, 0, &len);
	parambuf = t_buffer_create(len + 1);
	EVP_PKEY_get_octet_string_param(pkey, OSSL_PKEY_PARAM_PUB_KEY,
					(unsigned char *)parambuf->data, len + 1, &len);

	/* rebuild the EC_POINT so the conversion format can be chosen explicitly */
	EC_GROUP *ec_grp = EC_GROUP_new_by_curve_name(nid);
	if (ec_grp == NULL)
		return NULL;
	EC_POINT *point = EC_POINT_new(ec_grp);
	if (point == NULL) {
		EC_GROUP_free(ec_grp);
		return NULL;
	}

	if (!EC_POINT_oct2point(ec_grp, point, parambuf->data, len, NULL)) {
		EC_POINT_free(point);
		EC_GROUP_free(ec_grp);
		return NULL;
	}
	char *ret = EC_POINT_point2hex(ec_grp, point, POINT_CONVERSION_COMPRESSED, NULL);
	EC_POINT_free(point);
	EC_GROUP_free(ec_grp);
	return ret;
}

and it fixed the issue, but there may be an easier way to achieve this.

Thank you for the report! I have created a ticket and we are tracking this internally 
as issue DOV-6155.


br,
Teemu



Cheers
Michal Hlavinka




Re: Finding Users Over Quota

2022-03-17 Thread Teemu Huovila

Hello

On 16.3.2022 20.44, dove...@ptld.com wrote:

Using doveadm quota will let you see if a specific user is over quota.
Is there a way to have dovecot tell you which users are currently over quota?

Is the quota_clone plugin a way to save user quota to dict like how last_login 
saves to dict?


The documentation for quota_clone is MIA. (As of 3/16/22)
https://wiki2.dovecot.org/Plugins
  v
https://wiki2.dovecot.org/Plugins/QuotaClone
  v
https://doc.dovecot.org/plugin-settings/quota-clone-plugin/
  v
404 Not Found


The links from the old documentation to the new unfortunately broke when the 
content was restructured for the settings pages. You can find the documentation 
using the search function, e.g. quota_clone is at 
https://doc.dovecot.org/settings/plugin/quota-clone-plugin .
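
For the original question: quota_clone copies the user's current quota usage into
a dict on every change, so an external query against that dict can list users over
quota. A minimal sketch, assuming the documented setting names (the Redis dict is
just an example):

mail_plugins = $mail_plugins quota quota_clone

plugin {
  quota_clone_dict = redis:host=127.0.0.1:port=6379
}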


Teemu




Re: XOAUTH2 submission - "500 5.5.2 Line too long"

2022-03-08 Thread Teemu Huovila



On 8.3.2022 18.33, Patrick Nagel wrote:

Hi,

On Tuesday, 8 March 2022 16:12:48 CET Rosario Esposito wrote:

I setup a dovecot submission server, using xoauth2 authentication.
My roundcube webmail points to dovecot submission.
In roundcube smtp logs, I see:


[...]

[08-Mar-2022 15:47:16 +0100]:  Send: AUTH XOAUTH2
dXNlcj1yZXNwb3NpdAFhdXRoPWJlYXJlci<<<...*very long base64 string*>>>
[08-Mar-2022 15:47:16 +0100]:  Recv: 500 5.5.2 Line too long

Apparently, the problem came out after upgrading from dovecot 2.3.17 to
2.3.18

Interesting, the changelog 
(https://github.com/dovecot/core/releases/tag/2.3.18) says:

- submission-login: Authentication does not accept OAUTH2 token (or
   other very long credentials) because it considers the line to be too long.

But sorry, not much else I can contribute, I'm just a random passerby.
Unfortunately there were some issues with the initial fix for this. There were fixups 
after the 2.3.18 community release: 
https://github.com/dovecot/core/commit/667e206a017a48cf623f95f9837e79760be6309b (and 
the commits right before it).


Re: cumulative resource limit exceeded

2021-12-31 Thread Teemu Huovila



On 30.12.2021 21.20, Aki Tuomi wrote:

On 29/12/2021 22:09 Mats Mellstrand  wrote:

  
Hi


I’m running dovecot-2.3.15   and  dovecot-pigeonhole-0.5.15 on FreeBSD 13

To day i suddenly get errors in my personal logfile (.dovecot.sieve.log)

sieve: info: started log at Dec 29 20:26:42.
.dovecot: error: cumulative resource usage limit exceeded.

Is there anything I can do about this error?

/mm

This is a feature added in Pigeonhole to avoid executing computationally too 
expensive Sieve scripts. Prior to 2.3.17 this might get triggered unintentionally 
by IMAPSieve.

You can also disable this feature with

plugin {
   sieve_max_cpu_time = 0
}

We are working on documenting the applicable settings at https://doc.dovecot.org/ . 
Meanwhile you can reference 
https://github.com/dovecot/pigeonhole/blob/master/INSTALL#L322 for an explanation.


Teemu




Re: Support for MULTISEARCH

2020-05-08 Thread Teemu Huovila


On 6.5.2020 3.57, Daniel Miller wrote:
Does Dovecot presently support the MULTISEARCH command, or are there 
plans to do so?
If you mean RFC7377, that is not supported.  ref. 
https://www.imapwiki.org/Specs



I would suggest evaluating if searching a single virtual folder could 
work for your use case. ref. 
https://doc.dovecot.org/configuration_manual/virtual_plugin/
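
A rough sketch of such a virtual folder, assuming the layout described in the
virtual plugin documentation (paths are examples only):

mail_plugins = $mail_plugins virtual

namespace virtual {
  prefix = Virtual/
  separator = /
  location = virtual:~/Maildir/virtual
}

and a ~/Maildir/virtual/All/dovecot-virtual file listing the folders to combine
plus the search rule:

*
-Trash
-Trash/*
  all

A single SEARCH against Virtual/All then covers all the listed mailboxes.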


br,
Teemu


---
Daniel


Re: Ask for little change :)

2018-10-12 Thread Teemu Huovila



On 11.10.2018 14:53, Kamil Jońca wrote:
> 
> Is it possible, that dovecot-lmtp, has in inserted "Received:" header
> something about its version ie.
> instead:
> --8<---cut here---start->8---
> Received: from alfa.kjonca by alfa.kjonca with LMTP id
>n1O7D5Q3v1toSQAApvcrCQ (envelope-from )
>for ; Thu, 11 Oct 2018 13:44:20 +0200
> --8<---cut here---end--->8---
> 
> would be:
> --8<---cut here---start->8---
> Received: from alfa.kjonca (Dovecot version) by alfa.kjonca with LMTP
>id n1O7D5Q3v1toSQAApvcrCQ (envelope-from )
>for ; Thu, 11 Oct 2018 13:44:20 +0200
> --8<---cut here---end--->8---
Hello

Even the mention of Dovecot in the Received header was intentionally removed in 
v2.2.31, with the following changelog comment:
---
v2.2.31 2017-06-26  Timo Sirainen 

* LMTP: Removed "(Dovecot)" from added Received headers. Some
  installations want to hide it, and there's not really any good reason
  for anyone to have it.
---

In case you have a good reason for this request, we are eager to hear it and we 
will consider it.
br,
Teemu


Re: Status of SMTPUTF8?

2018-09-14 Thread Teemu Huovila



On 06.09.2018 22:25, kada...@gmail.com wrote:
> I necro-bump this thread as I have the same problem since I switched to LMTP 
> from LDA (as the wiki recommends).
> Any news on making Dovecot LMTP Postfix-compliant?
> 
> 
> Le 08/11/2016 à 17:13, Noah Tilton a écrit :
>>
>> I was wondering whether there is a roadmap for adding SMTPUTF8 support to 
>> Dovecot?
>>
>> My delivery pattern is Postfix -> Dovecot LMTP and it is choking on utf8 
>> messages.
>>
>> I might be able to volunteer some of my time as a developer.
>>
>> Another thread about this seemed to go unanswered:
>> http://dovecot.org/list/dovecot/2016-September/105474.html
>>
>> http://unix.stackexchange.com/questions/320091/configure-postfix-and-dovecot-lmtp-to-receive-mail-via-smtputf8
>>
>> -Noah
> 
This is being considered and we have some pretty good plans already, but it is 
not in the short-term plans yet.

br,
Teemu


Re: dovecot fts hangs on search

2018-05-04 Thread Teemu Huovila
Hello

Could you please:
1. Send the full output of doveconf -n.
2. Check the Solr logs for any errors.
3. Describe your Dovecot architecture, i.e. whether you are running a single backend 
or a more complex configuration.
4. Provide a backtrace of the core dump using the instructions on 
https://dovecot.org/bugreport.html

br,
Teemu

On 04.05.2018 01:13, André Rodier wrote:
> On 02/05/18 22:17, André Rodier wrote:
>> On 02/05/18 11:45, André Rodier wrote:
>>> On 2018-05-01 21:29, André Rodier wrote:
 On 2018-05-01 07:22, André Rodier wrote:
> Hello,
>
> I am trying to use Dovecot FTS with Solr and the script provided.
>
> To rebuild the index, I use the command:
> doveadm -D index -u mirina 'inbox'
>
> To rescan, I use: doveadm -D fts rescan -u mirina
>
> But when I do a search, with doveadm, the program hangs:
>
> doveadm -D search -u mirina text Madagascar
>
>> Debug: Loading modules from directory: /usr/lib/dovecot/modules
>> Debug: Module loaded: /usr/lib/dovecot/modules/lib10_quota_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/modules/lib20_fts_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/modules/lib21_fts_solr_plugin.so
>> Debug: Loading modules from directory: /usr/lib/dovecot/modules/doveadm
>> Debug: Skipping module doveadm_acl_plugin, because dlopen() failed: 
>> /usr/lib/dovecot/modules/doveadm/lib10_doveadm_acl_plugin.so: undefined 
>> symbol: acl_lookup_dict_iterate_visible_next (this is usually 
>> intentional, so just ignore this message)
>> Debug: Skipping module doveadm_expire_plugin, because dlopen() failed: 
>> /usr/lib/dovecot/modules/doveadm/lib10_doveadm_expire_plugin.so: 
>> undefined symbol: expire_set_deinit (this is usually intentional, so 
>> just ignore this message)
>> Debug: Module loaded: 
>> /usr/lib/dovecot/modules/doveadm/lib10_doveadm_quota_plugin.so
>> Debug: Module loaded: 
>> /usr/lib/dovecot/modules/doveadm/lib10_doveadm_sieve_plugin.so
>> Debug: Skipping module doveadm_fts_lucene_plugin, because dlopen() 
>> failed: 
>> /usr/lib/dovecot/modules/doveadm/lib20_doveadm_fts_lucene_plugin.so: 
>> undefined symbol: lucene_index_iter_deinit (this is usually intentional, 
>> so just ignore this message)
>> Debug: Module loaded: 
>> /usr/lib/dovecot/modules/doveadm/lib20_doveadm_fts_plugin.so
>> Debug: Skipping module doveadm_mail_crypt_plugin, because dlopen() 
>> failed: 
>> /usr/lib/dovecot/modules/doveadm/libdoveadm_mail_crypt_plugin.so: 
>> undefined symbol: mail_crypt_box_get_pvt_digests (this is usually 
>> intentional, so just ignore this message)
>> doveadm(mirina): Debug: auth USER input: mirina home=/home/users/mirina 
>> uid=1002 gid=1001 mail=mirina@homebox.space
>> doveadm(mirina): Debug: Added userdb setting: mail=mirina@homebox.space 
>> doveadm(mirina): Debug: Effective uid=1002, gid=1001, 
>> home=/home/users/mirina
>> doveadm(mirina): Debug: Quota root: name=User quota backend=maildir args=
>> doveadm(mirina): Debug: Quota rule: root=User quota mailbox=* 
>> bytes=2147483648 messages=0
>> doveadm(mirina): Debug: Quota grace: root=User quota bytes=214748364 
>> (10%)
>> doveadm(mirina): Debug: Namespace inbox: type=private, prefix=, sep=/, 
>> inbox=yes, hidden=no, list=yes, subscriptions=yes 
>> location=maildir:~/mails/maildir:INDEX=~/mails/indexes/
>> doveadm(mirina): Debug: maildir++: 
>> root=/home/users/mirina/mails/maildir, 
>> index=/home/users/mirina/mails/indexes, indexpvt=, control=, 
>> inbox=/home/users/mirina/mails/maildir, alt=
>> doveadm(mirina): Debug: quota: quota_over_flag check: STORAGE ret=1 
>> value=134 limit=2097152
>> doveadm(mirina): Debug: quota: quota_over_flag check: MESSAGE ret=0 
>> value=3 limit=0
>> doveadm(mirina): Debug: quota: quota_over_flag=0((null)) vs currently 
>> overquota=0
>> doveadm(mirina): Debug: Namespace : Using permissions from 
>> /home/users/mirina/mails/maildir: mode=0700 gid=default
>> doveadm(mirina): Debug: http-client: host localhost: Host created
>> doveadm(mirina): Debug: http-client: host localhost: DNS lookup 
>> successful; got 2 IPs
>> doveadm(mirina): Debug: http-client: peer [::1]:8080: Peer created
>> doveadm(mirina): Debug: http-client: queue http://localhost:8080: 
>> Setting up connection to [::1]:8080 (1 requests pending)
>> doveadm(mirina): Debug: http-client: peer [::1]:8080: Linked queue 
>> http://localhost:8080 (1 queues linked)
>> doveadm(mirina): Debug: http-client: queue http://localhost:8080: 
>> Started new connection to [::1]:8080
>> doveadm(mirina): Debug: http-client: request [Req1: GET 
>> 

Re: Plugin charset_alias

2018-03-06 Thread Teemu Huovila
Hello

On 05.03.2018 23:46, MRob wrote:
> On 2018-03-02 09:57, Teemu Huovila wrote:
>> On 02.03.2018 09:38, MRob wrote:
>>> On 2018-03-01 22:59, John Woods wrote:
>>>> Hey Everyone,
>>>>
>>>>     We are getting a compile error for Dovecot 2.2.34 on Solaris 11.3
>>>> x86, using Solaris Studio 12.6 compiler, and it doesn't occur with
>>>> Dovecot 2.2.33.
>>>>
>>>>> Making all in charset-alias
>>>
>>> Can someone easily explain what the usage of this plugin is? Maybe example 
>>> when it is helpful?
>> There is a short explanation at 
>> https://wiki2.dovecot.org/Plugins/CharsetAlias
>>
>> It is intended for mapping character sets, to work around e.g. some
>> Windows-specific letters being lost when Dovecot converts mail to UTF-8
>> using iconv.
> 
> I read that page so I wanted more real life example so I can learn should I 
> install this plugin or is it for special use scenario cuz I'm not charset 
> expert sorry. Your explain adds little bit more info but not enough to know 
> do I need it or not. BTW not your fault -- I didn't ask my question well enough
> 
> If Dovecot has trouble to decode some windows charsets and the plugin fixes 
> this problem then why is it a plugin and not built in as a fix?
As mail can in practice contain almost any type of text, either correctly or 
incorrectly encoded, taking every possible error condition into account in the 
"built in" core code is not feasible.

This plugin can be used to work around some issues, but it is not useful for 
everybody. For your specific case, it is difficult to judge without knowing 
your userbase languages and mail clients in depth. I would say however, that if 
you do not know of any issues with mail content encoding, you probably should 
not enable this plugin.

br,
Teemu


Re: Plugin charset_alias

2018-03-02 Thread Teemu Huovila


On 02.03.2018 09:38, MRob wrote:
> On 2018-03-01 22:59, John Woods wrote:
>> Hey Everyone,
>>
>>     We are getting a compile error for Dovecot 2.2.34 on Solaris 11.3
>> x86, using Solaris Studio 12.6 compiler, and it doesn't occur with
>> Dovecot 2.2.33.
>>
>>> Making all in charset-alias
> 
> Can someone easily explain what the usage of this plugin is? Maybe example 
> when it is helpful?
There is a short explanation at https://wiki2.dovecot.org/Plugins/CharsetAlias

It is intended for mapping character sets, to work around e.g. some Windows-specific 
letters being lost when Dovecot converts mail to UTF-8 using iconv.
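
For illustration, enabling it looks roughly like this; the alias list is the
example from the wiki page (Japanese Windows charset variants) and should be
adapted to whatever charsets actually cause problems for your users:

mail_plugins = $mail_plugins charset_alias

plugin {
  charset_aliases = shift_jis=sjis-win euc-jp=eucjp-win iso-2022-jp=iso-2022-jp-3
}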

br,
Teemu


Re: XDOVECOT capability?

2017-11-28 Thread Teemu Huovila


On 28.11.2017 20:04, Hogne Vevle wrote:
> Hi!
> 
> Here and there, I'm seeing mentions of a "XDOVECOT" capability - e.g. on 
> https://documentation.open-xchange.com/7.8.2/middleware/components/search/crossfolder_fts_in_mail.html
>  .
> 
> However, I can't seem to find any documentation on what this actually does. 
> 
> We need to add this capability to our servers in order for certain 
> functionality of Open Xchange App Suite to work properly (as seen in the link 
> above), but we don't want to blindly update our entire Dovecot cluster just 
> because their docs tell us to :) 
> 
> Can someone, please, shed some light on what other effects we can expect to 
> see after enabling this capability, if any? Or is it simply a way of telling 
> clients that "Hey, I'm a Dovecot server" - nothing else?
This last statement is correct. It is only used to assure the client that the 
functionality is there. The various features are enabled by their respective 
settings.

br,
Teemu

> 
> Cheers,
> 
> - Hogne
> 


Re: dovecot-2.3 (-git) Warning and Fatal Compile Error

2017-10-30 Thread Teemu Huovila


On 30.10.2017 09:10, Aki Tuomi wrote:
> 
> 
> On 30.10.2017 00:23, Reuben Farrelly wrote:
>> Hi Aki,
>>
>> On 30/10/2017 12:43 AM, Aki Tuomi wrote:
 On October 29, 2017 at 1:55 PM Reuben Farrelly
  wrote:


 Hi again,

 Chasing down one last problem which seems to have been missed from my
 last email:

 On 20/10/2017 9:22 PM, Stephan Bosch wrote:
>
> Op 20-10-2017 om 4:23 schreef Reuben Farrelly:
>> On 18/10/2017 11:40 PM, Timo Sirainen wrote:
>>> On 18 Oct 2017, at 6.34, Reuben Farrelly 
>>> wrote:
 This problem below is still present in 2.3 -git, as of version
 2.3.devel
 (6fc40674e)

>>> Secondly, this ssl_dh messages is always printed from doveconf:
>>>
>>> doveconf: Warning: please set ssl_dh=>> doveconf: Warning: You can generate it with: dd
>>> if=/var/lib/dovecot/ssl-parameters.dat bs=1 skip=88 | openssl dh
>>> -inform der > /etc/dovecot/dh.pem
>>>
>>> Yet the file is there:
>>>
>>> thunderstorm conf.d # ls -la /etc/dovecot/dh.pem
>>> -rw-r--r-- 1 root root 769 Oct 19 21:55 /etc/dovecot/dh.pem
>>>
>>> And the config is there as well:
>>>
>>> thunderstorm dovecot # doveconf -P | grep ssl_dh
>>> ssl_dh = >> doveconf: Warning: please set ssl_dh=>> doveconf: Warning: You can generate it with: dd
>>> if=/var/lib/dovecot/ssl-parameters.dat bs=1 skip=88 | openssl dh
>>> -inform der > /etc/dovecot/dh.pem
>>>    ssl_dh = -BEGIN DH PARAMETERS-
>>>    ssl_dh = -BEGIN DH PARAMETERS-
>>>    ssl_dh = -BEGIN DH PARAMETERS-
>>>    ssl_dh = -BEGIN DH PARAMETERS-
>>>    ssl_dh = -BEGIN DH PARAMETERS-
>>>    ssl_dh = -BEGIN DH PARAMETERS-
>>>    ssl_dh = -BEGIN DH PARAMETERS-
>>>    ssl_dh = -BEGIN DH PARAMETERS-
>>> thunderstorm dovecot #
>>>
>>> It appears that this warning is being triggered by the presence of
>>> the ssl-parameters.dat file because when I remove it the warning
>>> goes away. Perhaps the warning could be made a bit more specific
>>> about this file being removed if it is not required because at the
>>> moment the warning message is not related to the trigger.
>>>
>>> Thanks,
>>> Reuben
 Thanks,
 Reuben
>>> It is triggered when there is ssl-parameters.dat file *AND* there is
>>> no ssl_dh=< explicitly set in config file.
>>>
>>> Aki
>>
>> I have this already in my 10-ssl.conf file:
>>
>> lightning dovecot # /etc/init.d/dovecot reload
>> doveconf: Warning: please set ssl_dh=> doveconf: Warning: You can generate it with: dd
>> if=/var/lib/dovecot/ssl-parameters.dat bs=1 skip=88 | openssl dh
>> -inform der > /etc/dovecot/dh.pem
>>  * Reloading dovecot configs and restarting auth/login processes
>> ...  [ ok ]
>> lightning dovecot #
>>
>> However:
>>
>> lightning dovecot # grep ssl_dh conf.d/10-ssl.conf
>> # gives on startup when ssl_dh is unset.
>> ssl_dh=> lightning dovecot #
>>
>> and the file is there:
>>
>> lightning dovecot # ls -la /etc/dovecot/dh.pem
>> -rw-r--r-- 1 root root 769 Oct 19 19:06 /etc/dovecot/dh.pem
>> lightning dovecot #
>>
>> So it is actually configured and yet the warning still is present.
>>
>> Reuben
> 
> Hi!
> 
> I gave this a try, and I was not able to repeat this issue. Perhaps you
> are still missing ssl_dh somewhere?
> 
> Aki
> 
Hello

Just a guess, but at this point I would recommend reviewing the output of 
"doveconf -n" to make sure the appropriate settings are present.

br,
Teemu


Re: weakforced

2017-08-17 Thread Teemu Huovila
Below is an answer from the current weakforced main developer. It overlaps partly 
with Sami's answer.

---snip---
 > Do you have any hints/tips/guidelines for things like sizing, both in a
> per-server sense (memory, mostly) and in a cluster-sense (logins per sec ::
> node ratio)? I'm curious too how large is quite large. Not looking for
> details but just a ballpark figure. My largest install would have about 4
> million mailboxes to handle, which I'm guessing falls well below 'quite
> large'. Looking at stats, our peak would be around 2000 logins/sec.
>

So in terms of overall requests per second, on a 4 CPU server, latencies start 
to rise pretty quickly once you get to around 18K requests per second. Now, 
bearing in mind that each login from Dovecot could generate 2 allow and 1 
report requests, this leads to roughly 6K logins per second on a 4 CPU server.

In terms of memory usage, the more the better obviously, but it depends on your 
policy and how many time windows you have. Most of our customers have 24GB+.

> I'm also curious if -- assuming they're well north of 2000 logins/sec --
> the replication protocol begins to overwhelm the daemon at very high
> concurrency.
>
Eventually it will, but in tests it consumes a pretty tiny fraction of the 
overall CPU load compared to requests so it must be a pretty high limit. Also, 
if you don’t update the time windows DB in the allow function, then that 
doesn’t cause any replication. We’ve tested with three servers, each handling 
around 5-6000 logins/sec (i.e. 15-18K requests each) and the overall query rate 
was maintained.

> Any rules of thumb on things like "For each additional 1000 logins/sec, add
> another # to setNumSiblingThreads and another # to setNumWorkerThreads"
> would be super appreciated too.
>

Actually the rule of thumb is more like:

- WorkerThreads - Set to number of CPUs. Set number of LuaContexts to 
WorkerThreads + 2
- SiblingThreads - Leave at 2 unless you see issues.
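
As a sketch, in wforce.conf (Lua) that rule of thumb for a 4-CPU node might look
like this; the function names are taken from the weakforced documentation, so
verify them against your version:

setNumWorkerThreads(4)    -- one worker thread per CPU
setNumLuaStates(6)        -- worker threads + 2 Lua contexts
setNumSiblingThreads(2)   -- leave at 2 unless replication becomes a bottleneck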

> Thanks! And again, feel free to point me elsewhere if there's a better
> place to ask. 
Feel free to ask questions using the weakforced issues on GitHub.

> For a young project, the docs are actually quite good.

Thanks, that’s appreciated - we try to keep them up to date and comprehensive. 

Neil


Re: Auth Policy Server/wforce/weakforced

2017-08-08 Thread Teemu Huovila


On 04.08.2017 23:10, Daniel Miller wrote:
> On 8/4/2017 12:48 PM, Daniel Miller wrote:
>> On 8/3/2017 6:11 AM, Teemu Huovila wrote:
>>>
>>> On 02.08.2017 23:35, Daniel Miller wrote:
>>>> Is there explicit documentation available for the (probably trivial) 
>>>> configuration needed for Dovecot and Wforce?  I'm probably missing 
>>>> something that should be perfectly obvious...
>>>>
>>>> Wforce appears to start without errors.  I added a file to dovecot's 
>>>> conf.d:
>>>>
>>>> 95-policy.conf:
>>>> auth_policy_server_url = http://localhost:8084/
>>>> auth_policy_hash_nonce = this_is_my_super_secret_something
>>>>
>>>> Looking at the Wforce console I see:
>>>>
>>>> WforceWebserver: HTTP Request "/" from 127.0.0.1:45108: Web Authentication 
>>>> failed
>>>>
>>>> In wforce.conf I have the (default):
>>>>
>>>> webserver("0.0.0.0:8084", "--WEBPWD")
>>>>
>>>> Do I need to change the "--WEBPWD"?  Do I need to specify something in the 
>>>> Dovecot config?
>>> You could try putting an actual password, in plain text, where --WEBPWD is. 
>>> Then add that base64 encoded to dovecot setting 
>>> auth_policy_server_api_header.
>>>
>> I knew it would be something like that.  I've made some changes but I'm 
>> still not there.  I presently have:
>>
>> webserver("0.0.0.0:8084", "--WEBPWD ultra-secret-secure-safe")
>> in wforce.conf (and I've tried with and without the --WEBPWD)
>>
>> and
>>
>> auth_policy_server_api_header = Authorization: Basic 
>> dWx0cmEtc2VjcmV0LXNlY3VyZS1zYWZl
>> in 95-policy.conf for dovecot
>>
>> Obviously I'm still formatting something wrong.
>>
> I think I've got something working a little better.  I'm using:
> webserver("0.0.0.0:8084", "ultra-secret-secure-safe")
> (so I remove the --WEBPWD - that's a placeholder, not a argument declaration)
> 
> and for dovecot, the base64 encoding needs to be "wforce:password" instead of 
> just the password.
> 
> Now I have to see what else needs to be tweaked.
> 
> Daniel
Glad you got it working. Lua comments, prefixed with "--", can indeed be a bit 
misleading. My sloppy answer, omitting what goes into the HTTP Basic auth value, did 
not help either.
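
To summarize the working combination as a sketch (the password and its base64
encoding below are just the example values from this thread):

wforce.conf (Lua), where the second argument is the literal web password:

    webserver("0.0.0.0:8084", "ultra-secret-secure-safe")

and Dovecot's 95-policy.conf, where the header value is the base64 of
"wforce:<password>" (e.g. echo -n 'wforce:ultra-secret-secure-safe' | base64):

    auth_policy_server_url = http://localhost:8084/
    auth_policy_hash_nonce = this_is_my_super_secret_something
    auth_policy_server_api_header = Authorization: Basic d2ZvcmNlOnVsdHJhLXNlY3JldC1zZWN1cmUtc2FmZQ==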

br,
Teemu


Re: Virtual mailboxes, index update issues

2017-08-08 Thread Teemu Huovila


On 07.08.2017 20:28, Stefan Hagen wrote:
> Hello,
> 
> I noticed a strange behavor, where I would like to ask for help.
> 
> I have set up a few virtual mailboxes using the Virtual plugin.
> These mailboxes are:
> - Unread (all undread in all mailboxes)
> - LastDay (last 24h of all mailboxes)
> - LastWeek (last 7 days of all mailboxes)
> ...
> 
> The virtual mailboxes in general are working great. However, there is one 
> annoying behavior I would like to fix.
> 
> If I define my virtual mailboxes like this:
> 
> namespace inbox {
>  inbox = yes
>  prefix = "Virtual/"
>  separator = /
>  location = "virtual:~/.emails_virtual:LAYOUT=fs"
>  list = yes
>  subscriptions = yes
> }
> 
> All programs can work with this mailbox. However, the index are not kept in 
> sync. Let's say I mark some emails as "read" in the "Unread" mailbox, then 
> they will still be in "unread"-state in the LastDay and LastWeek mailboxes.
> 
> This is annoying. And there is a fix for it...
> 
>  location = "virtual:~/.emails_virtual:LAYOUT=fs:INDEX=MEMORY"
> 
> Once I set INDEX=MEMORY, the mailbox refresh happens instantly. Marking an 
> email as "read" in the "Unread" mailbox will instantly set it as "read" in 
> the other mailboxes as well.
> 
> However, many mail programs (mainly clients on OSX and iOS) have trouble and 
> freak out with a refresh loop once I open these mailboxes. One client 
> specifically complained about UID validity issues, which leads me to believe 
> that UIDs are generated on the fly and are not stable when INDEX is set to 
> MEMORY?
> 
> How can I keep my virtual mailboxes in sync? Shouldn't they stay in sync even 
> when INDEX=MEMORY is *not* set? I tried to set INDEX to a directory - it 
> didn't help.
> 
> Best Regards,
> Stefan
> 
> Files:
> - 15-mailboxes.conf: https://gist.github.com/e54d458ece16ad6f29b536fa840e99ec
> - dovecot version: 2.2.31 (65cde28)
> - OS: FreeBSD 11.0-RELEASE-p11
> 
There are a few known issues in the virtual plugin in Dovecot 2.2.31. Some of 
these will be fixed in 2.2.32. Your issue looks like a case of 
https://github.com/dovecot/core/commit/bc7d7e41fe00f76c38d1a5194c130c983487911b

br,
Teemu


Re: [master-2.2] 4118e86

2017-08-08 Thread Teemu Huovila


On 08.08.2017 10:17, Armin Tüting wrote:
> quota-status.c: In function ‘client_handle_request’:
> quota-status.c:98:10: warning: passing argument 4 of
> ‘message_detail_address_parse’ from incompatible pointer type [enabled
> by default]
>   );
>   ^
> In file included from quota-status.c:14:0:
> ../../../src/lib-mail/message-address.h:38:6: note: expected ‘const
> char **’ but argument is of type ‘char *’
>  void message_detail_address_parse(const char *delimiter_string,
>   ^
> quota-status.c:98:10: error: too many arguments to function
> ‘message_detail_address_parse’
>   );
>   ^
> In file included from quota-status.c:14:0:
> ../../../src/lib-mail/message-address.h:38:6: note: declared here
>  void message_detail_address_parse(const char *delimiter_string,
> 
Thank you for noticing. We had a slight mishap while picking commits. Please 
try now.

br,
Teemu


Re: Auth Policy Server/wforce/weakforced

2017-08-03 Thread Teemu Huovila


On 02.08.2017 23:35, Daniel Miller wrote:
> Is there explicit documentation available for the (probably trivial) 
> configuration needed for Dovecot and Wforce?  I'm probably missing something 
> that should be perfectly obvious...
> 
> Wforce appears to start without errors.  I added a file to dovecot's conf.d:
> 
> 95-policy.conf:
> auth_policy_server_url = http://localhost:8084/
> auth_policy_hash_nonce = this_is_my_super_secret_something
> 
> Looking at the Wforce console I see:
> 
> WforceWebserver: HTTP Request "/" from 127.0.0.1:45108: Web Authentication 
> failed
> 
> In wforce.conf I have the (default):
> 
> webserver("0.0.0.0:8084", "--WEBPWD")
> 
> Do I need to change the "--WEBPWD"?  Do I need to specify something in the 
> Dovecot config? 
You could try putting an actual password, in plain text, where --WEBPWD is. 
Then add that, base64 encoded, to the Dovecot setting auth_policy_server_api_header.

hope this helps,
Teemu


Re: Need Help to analyze the error or is it a bug?

2017-06-16 Thread Teemu Huovila


On 15.06.2017 01:45, Dipl.-Ing. Harald E. Langner wrote:
> After done an update to dovecot-2.2.30.2
> 
> my connection is broken since days.
> 
> all what I try every time the same error:
> 
> Jun 15 00:02:18 auth: Error: auth: environment corrupt; missing value for 
> DOVECOT_
> Jun 15 00:02:18 auth: Fatal: unsetenv(RESTRICT_SETUID) failed: Bad address
> Jun 15 00:02:18 master: Error: service(auth): command startup failed, 
> throttling for 2 secs
Could you post the output of "doveconf -n"? Please also describe which version you 
upgraded to v2.2.30.2 from and how you did the upgrade. Are you 
compiling Dovecot yourself? What are the configuration and compilation options, 
etc.?

br,
Teemu

> 
> 
> I try this:
> 
> # doveadm -Dv auth test -x service=imap theusername mypassword
> 
> output:
> 
> Debug: Loading modules from directory: /usr/local/lib/dovecot
> Debug: Module loaded: /usr/local/lib/dovecot/lib20_virtual_plugin.so
> Debug: Loading modules from directory: /usr/local/lib/dovecot/doveadm
> Debug: Skipping module doveadm_acl_plugin, because dlopen() failed: 
> /usr/local/lib/dovecot/doveadm/lib10_doveadm_acl_plugin.so: Undefined symbol 
> "acl_user_module" (this is usually intentional, so just ignore this message)
> Debug: Skipping module doveadm_expire_plugin, because dlopen() failed: 
> /usr/local/lib/dovecot/doveadm/lib10_doveadm_expire_plugin.so: Undefined 
> symbol "expire_set_lookup" (this is usually intentional, so just ignore this 
> message)
> Debug: Skipping module doveadm_quota_plugin, because dlopen() failed: 
> /usr/local/lib/dovecot/doveadm/lib10_doveadm_quota_plugin.so: Undefined 
> symbol "quota_user_module" (this is usually intentional, so just ignore this 
> message)
> Debug: Skipping module doveadm_fts_plugin, because dlopen() failed: 
> /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so: Undefined symbol 
> "fts_filter_filter" (this is usually intentional, so just ignore this message)
> Debug: Skipping module doveadm_mail_crypt_plugin, because dlopen() failed: 
> /usr/local/lib/dovecot/doveadm/libdoveadm_mail_crypt_plugin.so: Undefined 
> symbol "mail_crypt_user_get_public_key" (this is usually intentional, so just 
> ignore this message)
> Error: Timeout waiting for handshake from auth server. my pid=63521, input 
> bytes=0
> Fatal: Couldn't connect to auth socket
> 
> 
> dlopen() failed, Undefined symbols ... Is this a bug?
> 
> How do I check what is going wrong with this auth socket error?
> 
> 
> 2nd) I'm using
> 
> dovecot.conf
>   protocols = imap lmtp
>   !include conf.d/*.conf
> 
> 10-auth.conf ,
> 
> 10-master.conf,
> 
> auth-passwdfile.conf.ext,
> 
> 10-ssl.conf
> 
> and 10-logging.conf (all logs are on)
> 
> All the others, of the 29 configuration files, I haven't touched. It worked 
> before I did the update. What has changed? What should I be looking for in the 
> *.conf files?
> 
> 
> Thanks a lot.


Re: Retrieving mail from read-only mdbox

2017-06-05 Thread Teemu Huovila


On 01.06.2017 00:30, Mark Moseley wrote:
> This is a 'has anyone run into this and solved it' post. And yes, I've been
> reading and re-reading TFM but without luck. The background is that I'm
> working on tooling before we start a mass maildir->mdbox conversion. One of
> those tools is recovering mail from backups (easy as pie with maildir).
> 
> We've got all of our email on Netapp file servers. They have nice
> snapshotting but the snapshots are, of course, readonly.
> 
> My question: is there a doveadm command that will allow for email to be
> retrieved from a readonly mdbox, either directly (like manipulating the
> mdbox files directly) or by doveadm talking to the dovecot processes?
What would this tooling do exactly? Is it for restoring the user's existing 
(writable) account from a read-only backup? In that case I would recommend 
looking into using "doveadm backup" or "doveadm sync". They do provide some 
crude selection of messages on e.g. the folder level, as sketched below.
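
For example, something along these lines could pull a folder from a read-only
snapshot back into the live account; -R reverses the direction so the given
location is the source, and since "backup" makes the destination match the
source, "doveadm sync" may be the safer choice when the live mailbox also has
newer mail (see doveadm-sync(1); paths are examples only):

doveadm backup -R -u user@example.com -m INBOX \
    mdbox:/netapp/.snapshot/nightly.0/mail/user/mdbox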

In case you are aiming to provide users online access to their own backups, 
maybe add them as a read-only mailbox using ACLs. Though other mails in this 
thread indicate there still might be problems.

br,
Teemu

> 
> Ideally, there'd be something like doveadm dump, but that could dump
> selected message contents.
> 
> I've tried using IMAP with mail_location pointed at the snapshot, but,
> though I can get a listing of emails in the mailbox, the fetch fails when
> dovecot can't write-lock dovecot.index.log.
> 
> If anyone has gotten something similar to work, I'd love to hear about it.
> A working IMAP setup would be the ideal, since it's more easily automatible
> (but I'll take whatever I can get).
> 
> Any and all hints are most welcome!
> 


Re: 2nd try: Thunderbird "Empty Trash" causes inconsistent IMAP session state?

2017-06-05 Thread Teemu Huovila


On 05.06.2017 11:02, awl1 wrote:
> Resending - any ideas why I might get "IMAP session state is inconsistent" 
> whenever emtyping the trash in Thunderbird?
> 
> Thanks,
> Andreas
> 
> 
> Am 31.05.2017 um 00:02 schrieb awl1:
>> All,
>>
>> having successfully compiled and set up Dovecot 2.2.29.1 on my Thecus NAS as 
>> a newbie without any further hassle, and already imported an external mail 
>> archive of ~15 GB into a lz4-compressed mdbox (with impressive performance 
>> on the old Intel Atom CPU!), I stumbled into a minor, but reproducible issue 
>> that might well be already known, but I haven't managed to find any pointers 
>> through Google search:
>>
>> When I *manually empty the Trash folder from Thunderbird* (52.1.1, 
>> regardless of Windows or Linux client) by right-clicking on a non-empty 
>> Trash folder and then selecting "Empty trash", Dovecot produces the 
>> following log message, and I need to restart Thunderbird client in order to 
>> properly refresh its folder status:
>>
>> May 30 23:42:04 imap(x...@xxx.org): Info: *IMAP session state is 
>> inconsistent, please relogin.* in=405 out=3114
>>
>> Is this behaviour and message intended, or did I indeed run into a bug here?
>>
>> Many thanks & best regards
>> Andreas
Hello

Can you please provide the output of doveconf -n? The version of your lz4 
library could also be useful, if it is not the distribution's default. ref. 
https://dovecot.org/bugreport.html

br,
Teemu


Re: user-defined special-use folders

2017-05-29 Thread Teemu Huovila


On 29.05.2017 12:31, Fabian Schmidt wrote:
> 
> I plan to define SPECIAL-USE mailboxes and think about defining per user 
> special-use folders for those who don't use the default folder names. Is this 
> possible in dovecot?
> 
> What I try:
> $ doveadm exec imap
> * PREAUTH [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE 
> IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS THREAD=ORDEREDSUBJECT 
> MULTIAPPEND URL-PARTIAL CATENATE UNSELECT CHILDREN NAMESPACE UIDPLUS 
> LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN 
> CONTEXT=SEARCH LIST-STATUS BINARY MOVE SEARCH=FUZZY METADATA SPECIAL-USE] 
> Logged in as fschmidt
> a1 SETMETADATA "sent-mail" (/private/specialuse "\\Sent")
> a1 NO [CANNOT] The /private/specialuse attribute cannot be changed (0.008 + 
> 0.000 + 0.008 secs).
Dovecot does not currently support setting SPECIAL-USE metadata. There are some 
plans to change this, but no firm timeline.
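
For completeness, the supported way is to assign SPECIAL-USE flags statically in
the configuration (globally, not per user), e.g. in 15-mailboxes.conf:

namespace inbox {
  mailbox sent-mail {
    special_use = \Sent
  }
}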

br,
Teemu

> 
> using dovecot 2.2.29.1, following the example in RFC 6154, sect. 5.4
> 
> (How is MAIL_ATTRIBUTE_INTERNAL_RANK_AUTHORITY defined or set?)
> 
> Fabian.


Re: lmtp segfault after upgrade

2017-05-18 Thread Teemu Huovila


On 18.05.2017 11:56, Tom Sommer wrote:
> 
> 
> ---
> Tom
> 
> On 2017-05-18 10:05, Teemu Huovila wrote:
>> On 18.05.2017 10:55, Tom Sommer wrote:
>>> On 2017-05-18 09:36, Teemu Huovila wrote:
>>>> Hello Tom
>>>>
>>>> On 02.05.2017 11:19, Timo Sirainen wrote:
>>>>> On 2 May 2017, at 10.41, Tom Sommer <m...@tomsommer.dk> wrote:
>>>>>>
>>>>>> (gdb) bt full
>>>>>> #0  i_stream_seek (stream=0x21, v_offset=0) at istream.c:298
>>>>>>_stream = 
>>>>>> #1  0x7fe98391ff32 in i_stream_concat_read_next (stream=0x1efe6c0) 
>>>>>> at istream-concat.c:77
>>>>>>prev_input = 0x1ef1560
>>>>>>data = 0x0
>>>>>>data_size = 
>>>>>>size = 
>>>>>> #2  i_stream_concat_read (stream=0x1efe6c0) at istream-concat.c:175
>>>>>
>>>>> This isn't very obvious.. There hasn't been any changes to istream-concat 
>>>>> code in 2.2.29 and I can't really think of any other changes either that 
>>>>> could be causing these crashes. Do these crashes happen to all mail 
>>>>> deliveries or only some (any idea of percentage)? Maybe only for 
>>>>> deliveries that have multiple recipients (in different backends)? We'll 
>>>>> try to reproduce, but I'd think someone else would have already 
>>>>> noticed/complained if it was that badly broken..
>>>>>
>>>>> What's your doveconf -n? Also can you try running via valgrind to see 
>>>>> what it logs before the crash? :
>>>>>
>>>>> service lmtp {
>>>>>   executable = /usr/bin/valgrind --vgdb=no -q /usr/libexec/dovecot/lmtp # 
>>>>> or whatever the lmtp path really is
>>>>> }
>>>>>
>>>> As this is not easily reproducible with a common lmtp proxying
>>>> configuration, we would be interested in the doveconf -n output from
>>>> all involved nodes (proxy, director, backend).
>>>>
>>>> Did you have a chance to try the valgrind wrapper advised by Timo?
>>>
>>> Timo already fixed this? I think?
>>>
>>> https://github.com/dovecot/core/commit/167dbb662c2ddedeb7b34383c18bdcf0537c0c84
>> The commit in question fixes an assert failure. The issue you reported
>> is an invalid memory access. The commit was not intended to fix your
>> report. Has the crash stopped happening in your environment?
> 
> Well, I downgraded to 2.2.26 and haven't looked at 2.2.29 since. So I guess 
> it's not fixed.
> 
> I'll give it another look with valgrind.
Please, also send the doveconf -n output for proxy, director and backend.

br,
Teemu

> 
> // Tom


Re: lmtp segfault after upgrade

2017-05-18 Thread Teemu Huovila


On 18.05.2017 10:55, Tom Sommer wrote:
> On 2017-05-18 09:36, Teemu Huovila wrote:
>> Hello Tom
>>
>> On 02.05.2017 11:19, Timo Sirainen wrote:
>>> On 2 May 2017, at 10.41, Tom Sommer <m...@tomsommer.dk> wrote:
>>>>
>>>> (gdb) bt full
>>>> #0  i_stream_seek (stream=0x21, v_offset=0) at istream.c:298
>>>>_stream = 
>>>> #1  0x7fe98391ff32 in i_stream_concat_read_next (stream=0x1efe6c0) at 
>>>> istream-concat.c:77
>>>>prev_input = 0x1ef1560
>>>>data = 0x0
>>>>data_size = 
>>>>size = 
>>>> #2  i_stream_concat_read (stream=0x1efe6c0) at istream-concat.c:175
>>>
>>> This isn't very obvious.. There hasn't been any changes to istream-concat 
>>> code in 2.2.29 and I can't really think of any other changes either that 
>>> could be causing these crashes. Do these crashes happen to all mail 
>>> deliveries or only some (any idea of percentage)? Maybe only for deliveries 
>>> that have multiple recipients (in different backends)? We'll try to 
>>> reproduce, but I'd think someone else would have already noticed/complained 
>>> if it was that badly broken..
>>>
>>> What's your doveconf -n? Also can you try running via valgrind to see what 
>>> it logs before the crash? :
>>>
>>> service lmtp {
>>>   executable = /usr/bin/valgrind --vgdb=no -q /usr/libexec/dovecot/lmtp # 
>>> or whatever the lmtp path really is
>>> }
>>>
>> As this is not easily reproducible with a common lmtp proxying
>> configuration, we would be interested in the doveconf -n output from
>> all involved nodes (proxy, director, backend).
>>
>> Did you have a chance to try the valgrind wrapper advised by Timo?
> 
> Timo already fixed this? I think?
> 
> https://github.com/dovecot/core/commit/167dbb662c2ddedeb7b34383c18bdcf0537c0c84
The commit in question fixes an assert failure. The issue you reported is an 
invalid memory access. The commit was not intended to fix your report. Has the 
crash stopped happening in your environment?

br,
Teemu

> 
> ---
> Tom


Re: lmtp segfault after upgrade

2017-05-18 Thread Teemu Huovila
Hello Tom

On 02.05.2017 11:19, Timo Sirainen wrote:
> On 2 May 2017, at 10.41, Tom Sommer  wrote:
>>
>> (gdb) bt full
>> #0  i_stream_seek (stream=0x21, v_offset=0) at istream.c:298
>>_stream = 
>> #1  0x7fe98391ff32 in i_stream_concat_read_next (stream=0x1efe6c0) at 
>> istream-concat.c:77
>>prev_input = 0x1ef1560
>>data = 0x0
>>data_size = 
>>size = 
>> #2  i_stream_concat_read (stream=0x1efe6c0) at istream-concat.c:175
> 
> This isn't very obvious.. There hasn't been any changes to istream-concat 
> code in 2.2.29 and I can't really think of any other changes either that 
> could be causing these crashes. Do these crashes happen to all mail 
> deliveries or only some (any idea of percentage)? Maybe only for deliveries 
> that have multiple recipients (in different backends)? We'll try to 
> reproduce, but I'd think someone else would have already noticed/complained 
> if it was that badly broken..
> 
> What's your doveconf -n? Also can you try running via valgrind to see what it 
> logs before the crash? :
> 
> service lmtp {
>   executable = /usr/bin/valgrind --vgdb=no -q /usr/libexec/dovecot/lmtp # or 
> whatever the lmtp path really is
> }
> 
As this is not easily reproducible with a common lmtp proxying configuration, 
we would be interested in the doveconf -n output from all involved nodes 
(proxy, director, backend).

Did you have a chance to try the valgrind wrapper advised by Timo?

br,
Teemu


Re: Adding secure POP3?

2017-04-13 Thread Teemu Huovila


On 13.04.2017 15:33, @lbutlr wrote:
> On 2017-04-13 (05:27 MDT), Aki Tuomi  wrote:
>>
>> 4) you can use autoexpunge here, i guess.
> 
> Are messages marked in anyway once they’ve been fetched with pop3 (like 
> marked read?). If so, I could auto-archive them.
Yes, they are marked read. See "Flag changes" on 
https://wiki2.dovecot.org/POP3Server

br,
Teemu
> 
> (I don’t mind storing old mail as much as I mind storing it in the inbox)
> 


Re: Meaning of "protocol !indexer-worker"

2017-04-03 Thread Teemu Huovila


On 17.03.2017 13:11, Angel L. Mateo wrote:
> Hello,
> 
> I'm configuring dovecot 2.2.28. Comparing with previous versions I have 
> found now in 10-mail.conf the config:
> 
> protocol !indexer-worker {
>   # If folder vsize calculation requires opening more than this many mails 
> from
>   # disk (i.e. mail sizes aren't in cache already), return failure and finish
>   # the calculation via indexer process. Disabled by default. This setting 
> must
>   # be 0 for indexer-worker processes.
>   #mail_vsize_bg_after_count = 0
> }
> 
> I can see that indexer-worker is the index service in dovecot.
> 
> But I don't know what the '!' means in front of the service name. Can anyone 
> explain it to me?
> 
The '!' means the setting applies to all other services except indexer-worker. The 
indexer-worker is the service that needs to complete the background job, so 
limiting it might not be wise.
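
For example, the setting from the default configuration could be enabled like
this, and it would then apply to imap, pop3, lmtp, doveadm and so on, but not to
the indexer-worker service (the value 100 is only an example):

protocol !indexer-worker {
  mail_vsize_bg_after_count = 100
}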

br,
Teemu


Re: Mailbox size in log file

2017-03-02 Thread Teemu Huovila


On 02.03.2017 18:21, Sergio Bastian Rodríguez wrote:
> Hello Dovecot list.
> 
> I need that Dovecot log writes mailbox size in all POP / IMAP connections, 
> but I do not know if Dovecot can do that.
> I have been searching about that with not successful.
> 
> For example, this is the log of our last email platform, different than 
> Dovecot:
> 
> 06:48:14 025BEE83 POP3 LOGIN user 'x...@xxx.com' MailboxSize = 61708 Capacity 
> = 2%
> ..
> 06:49:19 025BEE83 POP3 LOGOUT user 'x...@xxx.com' MailboxSize = 14472 
> Capacity = 0%
> 
> In this example we can know the mailbox size before and after the connection, 
> and it shows that user has removed or downloaded all messages from server.
We have a very similar feature on our roadmap. I expect there will be time to 
complete it in the latter half of 2017.

Teemu
> 
> Now in Dovecot we have no information about that, and I cannot find any 
> plugin which gives this us functionality.
> 
> Is it possible to have this feature in Dovecot?
> Thanks for your help.
> 


Re: post-delivery virus scan

2016-11-10 Thread Teemu Huovila


On 09.11.2016 23:36, Brad Koehn wrote:
> I have discovered that many times the virus definitions I use for scanning 
> messages (ClamAV, with the unofficial signatures 
> http://sanesecurity.com/usage/linux-scripts/) are updated some time after my 
> server has received an infected email. It seems the virus creators are trying 
> to race the virus definition creators to see who can deliver first; more than 
> half of the infected messages are found after they’ve been delivered. Great. 
> 
> To help detect and remove the infected messages after they’ve been delivered 
> to users’ mailboxes, I created a small script that iterates the INBOX and 
> Junk mailbox directories, scans recent messages for viruses, and deletes them 
> if found. The source of my script (run via cron) is here: 
> https://gitlab.koehn.com/snippets/9
> 
> Unfortunately Dovecot doesn’t like it if messages are deleted (dbox) out from 
> under it. I tried a doveadm force-resync on the folder containing the 
> messages, but it seems Dovecot is still unhappy. At least on the new version 
> (2.2.26.0) it doesn’t crash; 2.2.25 would panic and coredump when it 
> discovered messages had been deleted. 
> 
> I’m wondering if there’s a better way to scan recent messages and eradicate 
> them so the Dovecot isn’t upset when it happens. Maybe using doveadm search? 
> Looking for suggestions. 
The removal should, if possible, be done with the doveadm CLI tool or the 
doveadm HTTP API.
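
A hedged sketch of that kind of flow with doveadm (search keys and fetch fields as
in the doveadm man pages; the user, mailbox, time window and UID are placeholders):

# list messages saved in the last day
doveadm search -u user@example.com mailbox INBOX savedsince 1d

# scan one of them without touching the dbox files directly
doveadm fetch -u user@example.com text mailbox INBOX uid 4321 | clamscan -

# expunge it through Dovecot so the indexes stay consistent
doveadm expunge -u user@example.com mailbox INBOX uid 4321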

br,
Teemu Huovila
> 
> 
> 
> 
> ---
> Brad 
> 


Re: Increased errors "Broken MIME parts" in log file

2016-06-10 Thread Teemu Huovila


On 10.06.2016 10:09, Urban Loesch wrote:
> Hi,
> 
> same here on my installation. Version: Enterprise Edition: 2:2.2.24.1-2
Any chance of getting some example input that triggers this? Perhaps using one of the 
obfuscation scripts in http://dovecot.org/tools/

br,
Teemu Huovila

> 
> Some logs:
> 
> ...
> Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 
> session:): Error: Corrupted index cache file 
> /home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache: 
> Broken MIME parts for mail UID 11678 in mailbox INBOX: Cached MIME parts 
> don't match message during parsing: Cached header size mismatch 
> (parts=41009f0dd90dd52f023102004800e40d5c005e004800580e5c005f00a52ec52f2001)
> Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 
> session:): Error: Corrupted index cache file 
> /home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache: 
> Broken MIME parts for mail UID 11694 in mailbox INBOX: Cached MIME parts 
> don't match message during parsing: Cached header size mismatch 
> (parts=4100bb0df50de4f2a9f80200480e5c005e004800740e5c005f00b4f16cf7b805)
> 
> Got also this errors:
> Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 
> session:): Error: 
> unlink(/home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache)
>  failed: No such file or directory (in mail-cache.c:28)
> Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 
> session:): Error: Corrupted index cache file 
> /home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache: 
> Broken MIME parts for mail UID 11742 in mailbox INBOX: Cached MIME parts 
> don't match message during parsing: Cached header size mismatch (parts=)
> Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 
> session:): Error: 
> unlink(/home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache)
>  failed: No such file or directory (in mail-cache.c:28)
> Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 
> session:): Error: Corrupted index cache file 
> /home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache: 
> Broken MIME parts for mail UID 11752 in mailbox INBOX: Cached MIME parts 
> don't match message during parsing: Cached header size mismatch (parts=)
> Jun  5 07:40:02 dovecot-server dovecot: imap(u...@domain.com pid:27937 
> session:): Error: 
> unlink(/home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache)
>  failed: No such file or directory (in mail-cache.c:28)
> 
> ...
> 
> Thanks
> Urban
> 
> 
> Am 09.06.2016 um 16:06 schrieb Dave:
>> On 02/06/2016 22:58, Timo Sirainen wrote:
>>> On 01 Jun 2016, at 16:48, Alessio Cecchi <ales...@skye.it> wrote:
>>>>
>>>> Hi,
>>>>
>>>> after the last upgrade to Dovecot 2.2.24.2 (d066a24) I see an increased 
>>>> number of errors "Broken MIME parts" for users in dovecot log file, here an
>>>> example:
>>>>
>>>> Jun 01 15:25:29 Error: imap(alessio.cec...@skye.it): Corrupted index cache 
>>>> file /home/domains/skye.it/alessio.cecchi/Maildir/dovecot.index.cache:
>>>> Broken MIME parts for mail UID 34 in mailbox INBOX: Cached MIME parts 
>>>> don't match message during parsing: Cached header size mismatch
>>>> (parts=41005b077b07fc0a400b030048008007600064002000210001004000260827002900ea00f00044005d091e002000b308e0082d00010041007b09b208de08)
>>>>
>>> ..
>>>
>>>> but the error reappears always (for the same UID) when I do "search" from 
>>>> webmail. All works fine for the users but I don't think is good to have
>>>> these errors in log file.
>>>
>>> If it's reproducible for a specific email, can you send me the email?
>>
>> I'm replying to this again for a couple of reasons:
>>
>> 1. I've not heard any further discussion and I accidentally replied 

Re: push-notification plugin and imap-metadata permissions

2016-04-25 Thread Teemu Huovila


On 23.04.2016 10:01, giova...@giovannisfois.net wrote:
> 
> 
> On 2016-04-22 09:07 PM, Timo Sirainen wrote:
>> On 22 Apr 2016, at 15:17, Giovanni S. Fois <giova...@giovannisfois.net> 
>> wrote:
>>> In order to tell if a mailbox is enabled to send out the notification, the 
>>> plugin
>>> looks out for the following mailbox metadata key:
>>> /private/vendor/vendor.dovecot/pvt/server/vendor/vendor.dovecot/http-notify
>>>
>>> If the key is set then everything is OK and the notification is sent, 
>>> otherwise the
>>> action is skipped.
>>>
>>> If I try to setup the metadata key by hand (telnet as the user over the 
>>> imap port):
>>> setmetadata INBOX 
>>> (/private/vendor/vendor.dovecot/pvt/server/vendor/vendor.dovecot/http-notify
>>>  "user=myu...@mydomain.com")
>>>
>>> I get the error message: "Internal mailbox attributes cannot be accessed"
>> Server metadata is set with:
>>
>> a SETMETADATA "" (/private/vendor/vendor.dovecot/http-notify 
>> "user=myu...@mydomain.com")
>>
>> Which should internally map into the INBOX's 
>> /private/vendor/vendor.dovecot/pvt/server/vendor/vendor.dovecot/http-notify.
>>
> 
> I am sorry, but this is not working.
> As you suggested I have launched the imap commands:
> 
> a setmetadata ""  (/private/vendor/vendor.dovecot/http-notify 
> "user=myu...@mydomain.com")
> b getmetadata "" "/private/vendor/vendor.dovecot/http-notify"
> c getmetadata "INBOX" 
> "/private/vendor/vendor.dovecot/pvt/server/vendor/vendor.dovecot/http-notify"
> 
> the 'b' command returns a sort of json with the correct result
> the 'c' command returns NIL
After command "a", I think you should have gotten the push notification. The 
getmetadata imap command is not supposed to be able to access the path with 
/pvr/server/ in it. It is used for internal mapping only. Since command "b" 
returned the data, I would say the metadata is correctly set.

br,
Teemu Huovila

> 
> By the way,  hardcoding the key as 
> "/private/vendor/vendor.dovecot/http-notify" and recompiling the plugin
> has the effect to bring the system on the expected course.
> 
> Thank you again for your time and kind support.
> 
> Have a good weekend,
> Giovanni


Re: push-notification plugin and imap-metadata permissions

2016-04-22 Thread Teemu Huovila


On 22.04.2016 15:17, Giovanni S. Fois wrote:
> Ultra short version:
> 
> Why cant I set the following mailbox metadata key?
> /private/vendor/vendor.dovecot/pvt/server/vendor/vendor.dovecot/http-notify
Why do you want to set it there? Have you tried setting it on some mailbox path?

> 
> Let me explain the context:
> 
> I'm using the Dovecot version 2.23.1, but the same happens for the 2.2.22
> 
> The push-notification plugin is supposed to send out a notification whenever
> a mailbox get a new email message.
> 
> In order to tell if a mailbox is enabled to send out the notification, the 
> plugin
> looks out for the following mailbox metadata key:
> /private/vendor/vendor.dovecot/pvt/server/vendor/vendor.dovecot/http-notify
> 
> If the key is set then everything is OK and the notification is sent, 
> otherwise the
> action is skipped.
> 
> If I try to setup the metadata key by hand (telnet as the user over the imap 
> port):
> setmetadata INBOX 
> (/private/vendor/vendor.dovecot/pvt/server/vendor/vendor.dovecot/http-notify 
> "user=myu...@mydomain.com")
> 
> I get the error message: "Internal mailbox attributes cannot be accessed"
> 
> Digging in the Dovecot 2.2.22 sources i found that:
> 
> This string is from lib-imap-storage/imap-metadata.c - line 36 - Dovecot 
> 2.2.22
> The message is triggered by the following condition - same file - line 125  - 
> Dovecot 2.2.22
> 
> if (strncmp(*key_r, MAILBOX_ATTRIBUTE_PREFIX_DOVECOT_PVT,
> strlen(MAILBOX_ATTRIBUTE_PREFIX_DOVECOT_PVT)) == 0) {
> 
> So the path pvt/server appears to be forbidden.
> 
> But, in the file lib-storage/mailbox-attribute.h we can read the following 
> comment:
> 
> /* User can get/set all non-pvt/ attributes and also pvt/server/
>(but not pvt/server/pvt/) attributes. */
> 
> And, after said comment there is the definition of the macro 
> MAILBOX_ATTRIBUTE_KEY_IS_USER_ACCESSIBLE(key)
> which has the same basic function of the condition in imap-metadata.c , but 
> in this case
> the same imap key is seen as accessible.
> 
> Now my questions:
> 
> Can we use a negated version of MAILBOX_ATTRIBUTE_KEY_IS_USER_ACCESSIBLE(key) 
> in imap-metadata?
> How can the push-notification plugin work out-of-the-box without changes and 
> recompilation?
> 
> Thank you for your valuable time and forgive me if I'm posing a dumb question.
Please see the instructions at 
http://oxpedia.org/wiki/index.php?title=AppSuite:OX_Mail#Setup_of_the_Dovecot_Push
If the problem is not resolved, please attach your doveconf -n output to your 
next mail.

br,
Teemu

> 
> Best wishes,
> Giovanni S. Fois


Re: stats: Error: FIFO input error: CONNECT: Duplicate session ID

2016-04-18 Thread Teemu Huovila


On 18.04.2016 10:12, Urban Loesch wrote:
> Hi,
> 
> yesterday I updatet to Dovecot EE version 2:2.2.23.1-1.
> Now sometimes I see this errors in my logs:
> 
> ...
> Apr 18 09:02:19 dovecot1 dovecot: stats: Error: FIFO input error: CONNECT: 
> Duplicate session ID NjcCDoSAFFd/KQAAFMUCeg for user u...@domain1.com service 
> lmtp
> Apr 18 09:04:05 dovecot1 dovecot: stats: Error: FIFO input error: CONNECT: 
> Duplicate session ID rjV1HtCGFFcoogAAFMUCeg for user u...@domain2.com service 
> lmtp
> Apr 18 09:04:30 dovecot1 dovecot: stats: Error: FIFO input error: CONNECT: 
> Duplicate session ID Sqi0IMWAFFeRNQAAFMUCeg for user u...@domain3.com service 
> lmtp
> ...
> 
> The error only appears when a mail is sent to 2 ore more recipients 
> concurrently.
> It's not ciritcal for me, all mails are getting delivered correctly.
This is fixed in commit 
https://github.com/dovecot/core/commit/aeea3dbd1f4031634f7b318614adf51dcfc79f42

br,
Teemu Huovila
> 
> Thanks and regards
> Urban Loesch


Re: v2.2.23 released

2016-04-04 Thread Teemu Huovila


On 31.03.2016 16:18, Hauke Fath wrote:
> On 03/30/16 14:48, Timo Sirainen wrote:
>> http://dovecot.org/releases/2.2/dovecot-2.2.23.tar.gz
>> http://dovecot.org/releases/2.2/dovecot-2.2.23.tar.gz.sig
>>
>> This is a bugfix-only release with various important fixes on top of v2.2.22.
> 
> ... the build breaks on NetBSD with
The build should work with 
https://github.com/dovecot/core/commit/4adefdb40c7ffcac3d8f8279cdf52d9f72d39636.
Please report back if it does not.


> 
> [...]
> libtool: compile:  gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib 
> -I../../../src/lib-test -I../../../src/lib-master -I../../../src/lib-dict 
> -I../../../src/lib-index -I../../../src/lib-mail -I../../../src/lib-storage 
> -I../../../src/lib-storage/index -I../../../src/lib-storage/index/maildir 
> -I../../../src/doveadm -std=gnu99 -O2 -Wall -W -Wmissing-prototypes 
> -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 
> -Wbad-function-cast -fno-builtin-strftime -Wstrict-aliasing=2 -MT quota-fs.lo 
> -MD -MP -MF .deps/quota-fs.Tpo -c quota-fs.c  -fPIC -DPIC -o .libs/quota-fs.o
> libtool: compile:  gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib 
> -I../../../src/lib-test -I../../../src/lib-master -I../../../src/lib-dict 
> -I../../../src/lib-index -I../../../src/lib-mail -I../../../src/lib-storage 
> -I../../../src/lib-storage/index -I../../../src/lib-storage/index/maildir 
> -I../../../src/doveadm -std=gnu99 -O2 -Wall -W -Wmissing-prototypes 
> -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 
> -Wbad-function-cast -fno-builtin-strftime -Wstrict-aliasing=2 -MT 
> rquota_xdr.lo -MD -MP -MF .deps/rquota_xdr.Tpo -c rquota_xdr.c  -fPIC -DPIC 
> -o .libs/rquota_xdr.o
> libtool: link: ar cru .libs/lib10_doveadm_quota_plugin.a  doveadm-quota.o
> libtool: link: ranlib .libs/lib10_doveadm_quota_plugin.a
> libtool: link: ( cd ".libs" && rm -f "lib10_doveadm_quota_plugin.la" && ln -s 
> "../lib10_doveadm_quota_plugin.la" "lib10_doveadm_quota_plugin.la" )
> quota-fs.c: In function 'fs_quota_get_netbsd':
> quota-fs.c:695:7: error: 'i' undeclared (first use in this function)
> quota-fs.c:695:7: note: each undeclared identifier is reported only once for 
> each function it appears in
> Makefile:726: recipe for target 'quota-fs.lo' failed
> gmake[4]: *** [quota-fs.lo] Error 1
> gmake[4]: *** Waiting for unfinished jobs
> libtool: compile:  gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib 
> -I../../../src/lib-test -I../../../src/lib-master -I../../../src/lib-dict 
> -I../../../src/lib-index -I../../../src/lib-mail -I../../../src/lib-storage 
> -I../../../src/lib-storage/index -I../../../src/lib-storage/index/maildir 
> -I../../../src/doveadm -std=gnu99 -O2 -Wall -W -Wmissing-prototypes 
> -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 
> -Wbad-function-cast -fno-builtin-strftime -Wstrict-aliasing=2 -MT 
> rquota_xdr.lo -MD -MP -MF .deps/rquota_xdr.Tpo -c rquota_xdr.c -o 
> rquota_xdr.o >/dev/null 2>&1
> mv -f .deps/quota-storage.Tpo .deps/quota-storage.Plo
> mv -f .deps/rquota_xdr.Tpo .deps/rquota_xdr.Plo
> gmake[4]: Leaving directory 
> '/var/obj/pkgsrc/mail/dovecot2/work/dovecot-2.2.23/src/plugins/quota'
> Makefile:456: recipe for target 'all-recursive' failed
> gmake[3]: *** [all-recursive] Error 1
> 
> 
> Cheerio,
> hauke
> 


Re: Dovecot 2.2.21 change imap logout format (and broke my log parsing)

2016-03-23 Thread Teemu Huovila


On 22.03.2016 17:59, Alessio Cecchi wrote:
> Hi,
> 
> after upgrade to dovecot-2.2.21 the log of "imap logout" format changed
> 
> from:
> Mar  1 03:40:44 pop01 dovecot: imap(i...@domain.com): Connection closed 
> in=111 out=1522 session=
> 
> to:
> Mar  3 03:48:11 pop01 dovecot: imap(i...@domain.com): Connection closed (IDLE 
> running for 0.001 + waiting input for 2088.878 secs, 2 B in + 10+0 B out, 
> state=wait-input) in=224 out=2834 session=<6XTzihst3uUFqB6m>
> 
> Can "(IDLE running for 0.001 + waiting input for 2088.878 secs, 2 B in + 10+0 
> B out, state=wait-input)" removed from the log?
> 
> My imap_logout_format is:
> 
> imap_logout_format = in=%i out=%o session=<%{session}>
I think this should only happen when a client disconnects without issuing the 
LOGOUT command, so it can be viewed as an error condition. The extra output was 
added to support debugging such situations. It could be argued that wanting to 
know the cause is the more common use case, but that is a matter of opinion. 
Maybe a setting to disable it could be considered.

For reference, these are the commits that break your parsing:
https://github.com/dovecot/core/commit/266d72b0b32d5b105de96aac0c050d5a4c0ed3a8
https://github.com/dovecot/core/commit/fa5c3e6ebdcebde921ddbbe43219774ceaf081f0

br,
Teemu Huovila


> 
> Thanks


Re: Replication issues master <-> master nfs backend

2016-03-23 Thread Teemu Huovila


On 22.03.2016 21:30, William L. Thomson Jr. wrote:
> I keep having some replication issues and not sure what can be done to 
> resolve or correct. It 
> does not seem to happen all the time, though for the last ~30 or so minutes 
> and many 
> messages seems to be happening consistent for me.
> 
> I have 2 mail servers, basically clones, and thus master master replication. 
> Most of the time 
> things work fine. But many times an email or several will arrive on one, and 
> never replicate 
> to the other. I am not as concerned on the never replicating, as I am that 
> the user never gets 
> notified.
> 
> Mail arrives on say server 1, users are checking mail on server 2, and they 
> never see the email 
> on server 2. This is not always the case, but its happening enough daily. I 
> then log into one 
> and run sync manually. Which usually syncs the mail on both servers, and then 
> it arrives in 
> the inbox.
> 
> Here is an example, mail is on mail2, but not mail1. I am checking email on 
> mail1 so I am not 
> seeing the 1 email.
> 
> Mail1
> /home/wlt-ml/.maildir/new:
> total 0
> 
> Mail2
> /home/wlt-ml/.maildir/new:
> total 12
> -rw--- 1 wlt-ml site1 8502 Mar 22 14:57 1458673024.7643.mail2
> 
> Then I manually log into mail2 and run this command, though usually I can run 
> it from either 
> side, and just change the name to the other server.
> 
> doveadm sync -u "*" remote:mail1
> 
> And then I end up with the missing email on mail1, and it arrives in my email 
> client shortly 
> there after
> 
> Mail1
> /home/wlt-ml/.maildir/new:
> total 12
> -rw--- 1 wlt-ml site1 8502 Mar 22 14:57 
> 1458673051.M838843P26735.mail1,S=8502,W=8678:2,T
> 
> I have no idea why it does this. It seems to happen when when a full sync has 
> taken place 
> per doveadm replicator status wlt-ml. There does not seem to be any settings 
> to force a full 
> vs fast sync more often. No clue if this is even a full vs fast issue or 
> other.
> 
> I think it tends to happen more when people stay connected to the imap 
> server. I had a 
> theory that closing the email client and opening it again will get dovecot to 
> sync. I believe 
> this is still the case, but not able to confirm 100%. Also users are 
> reporting closing 
> Thunderbird. I can see them logging out and back in in the logs, but email 
> does not replicate 
> or show till I run doveadm sync manually.
> 
> Tempted to have cron invoke that on the regular, but seems very hackish and 
> likely will have 
> its own issues doing that. Since its not the right way or how things were 
> designed. Not sure 
> if this is a bug or what. Hopefully miss-configuration on my end.
You should still include your doveconf -n output. Also, any errors and warnings 
logged by Dovecot could be useful.

br,
Teemu Huovila

 
> Open to any feedback, advice, etc. I can provide replicator configuration but 
> its pretty 
> straight forward and mostly copy/paste from the replication page. Replication 
> works, just 
> seems it is not triggered to replicate at times or something.
> 
> dovecot --version 
> 2.2.22 (fe789d2)
> 
> 


Re: Upgrade Dovecot from 2.1.17 to 2.2.13 lmtp child killed with signal 6

2016-03-22 Thread Teemu Huovila


On 22.03.2016 11:43, Ivan Jurišić wrote:
> After upgrade Debian (Wheezy to Jessie) Dovecot version 2.1.17 is
> upgraded to 2.2.13.
> I have random crash of lmtp-a and I got lot message in queue. Any
> solution for this problem?
This looks like it is fixed by 
https://github.com/dovecot/core/commit/98449946caeaf8a3b413a0d93128315b158cbffb
Please upgrade, if possible.

br,
Teemu Huovila

> 
> -- Postqueue --
> 
> 7A5B77F72B  1160457 Tue Mar 22 10:10:15  i...@jurisic.org
> (delivery temporarily suspended: lost connection with
> mail.jurisic.org[private/dovecot-lmtp] while sending end of data --
> message may be sent more than once)
>  ante.starce...@gmail.com
> 
> -- Log file --
> 
> Mar 22 10:10:15 lmtp(23497, i...@jurisic.org): Panic: file fs-api.c:
> line 615 (fs_copy): assertion failed: (src->fs == dest->fs)
> 
> Mar 22 10:10:15 lmtp(23497, i...@jurisic.org): Error: Raw backtrace:
> /usr/lib/dovecot/libdovecot.so.0(+0x6b6fe) [0x7f7647a8b6fe] ->
> /usr/lib/dovecot/libdovecot.so.0(+0x6b7ec) [0x7f7647a8b7ec] ->
> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f7647a428fb] ->
> /usr/lib/dovecot/libdovecot.so.0(fs_copy+0x90) [0x7f7647a4c4a0] ->
> /usr/lib/dovecot/libdovecot-storage.so.0(sdbox_copy+0x4e0)
> [0x7f7647d3ec10] ->
> /usr/lib/dovecot/modules/lib10_quota_plugin.so(+0xbaab) [0x7f764726aaab]
> -> /usr/lib/dovecot/libdovecot-storage.so.0(mailbox_copy+0x7d)
> [0x7f7647d7b01d] ->
> /usr/lib/dovecot/libdovecot-lda.so.0(mail_deliver_save+0x196)
> [0x7f76480229d6] ->
> /usr/lib/dovecot/libdovecot-lda.so.0(mail_deliver+0xf3) [0x7f7648022e13]
> -> dovecot/lmtp(+0x6171) [0x7f7648452171] ->
> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x3f) [0x7f7647a9cd0f]
> -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xf9)
> [0x7f7647a9dd09] ->
> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9)
> [0x7f7647a9cd79] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38)
> [0x7f7647a9cdf8] ->
> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13)
> [0x7f7647a47dc3] -> dovecot/lmtp(main+0x165) [0x7f76484509b5] ->
> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f7647696b45]
> -> dovecot/lmtp(+0x4a95) [0x7f7648450a95]
> 
> Mar 22 10:10:15 lmtp(23497, i...@jurisic.org): Fatal: master:
> service(lmtp): child 23497 killed with signal 6 (core dumps disabled)
> 
> -- Dovecot configuration  --
> 
> # 2.2.13: /etc/dovecot/dovecot.conf
> # OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.3 ext4
> auth_mechanisms = plain login
> debug_log_path = /var/log/dovecot.debug
> default_client_limit = 1
> default_process_limit = 1000
> default_vsz_limit = 512 M
> dict {
>   quota = pgsql:/etc/dovecot/dovecot-dict-sql.conf.ext
> }
> hostname = mail.jurisic.org
> info_log_path = /var/log/dovecot.info
> lda_mailbox_autocreate = yes
> lda_mailbox_autosubscribe = yes
> listen = *
> log_path = /var/log/dovecot.log
> mail_attachment_dir = /var/mail/vhosts/%d/attachment
> mail_home = /var/mail/vhosts/%d/mail/%n
> mail_location = sdbox:/var/mail/vhosts/%d/mail/%n
> mail_plugins = " quota"
> mail_privileged_group = vmail
> managesieve_notify_capability = mailto
> managesieve_sieve_capability = fileinto reject envelope
> encoded-character vacation subaddress comparator-i;ascii-numeric
> relational regex imap4flags copy include variables body enotify
> environment mailbox date ihave
> namespace inbox {
>   inbox = yes
>   location =
>   mailbox Drafts {
> auto = subscribe
> special_use = \Drafts
>   }
>   mailbox Junk {
> auto = subscribe
> special_use = \Junk
>   }
>   mailbox Sent {
> auto = subscribe
> special_use = \Sent
>   }
>   mailbox "Sent Messages" {
> special_use = \Sent
>   }
>   mailbox Trash {
> auto = subscribe
> special_use = \Trash
>   }
>   prefix =
> }
> passdb {
>   args = /etc/dovecot/dovecot-sql.conf.ext
>   driver = sql
> }
> plugin {
>   autocreate = Sent
>   autocreate2 = Drafts
>   autocreate3 = Junk
>   autocreate4 = Trash
>   autosubscribe = Sent
>   autosubscribe2 = Drafts
>   autosubscribe3 = Junk
>   autosubscribe4 = Trash
>   expire = Trash
>   expire2 = Trash/*
>   expire3 = Spam
>   expire_dict = proxy::expire
>   quota = dict:user::proxy::quota
>   quota_rule = *:storage=102400
>   quota_warning = storage=75%% quota-warning 75 %u
>   quota_warning2 = storage=90%% quota-warning 90 %u
>   sieve = ~/.dovecot.sieve
>   sieve_dir = ~/sieve
> }
> postmaster_address = postmaster@%d
> protocols = " imap l

Re: overview zlib efficiency?

2016-03-16 Thread Teemu Huovila


On 15.03.2016 21:45, Sven Hartge wrote:
> Robert L Mathews  wrote:
> 
>> Also keep in mind that even if it does increase CPU usage, it reduces
>> disk usage. This is probably an excellent tradeoff for most people,
>> since most servers are limited by disk throughput/latency more than
>> CPU power.
> 
> IOPS are harder to scale (meaning: cost more to scale) than CPU power.
> 
> And gzip (or lz4 of implemented someday) (or even blosc:
liblz4 has been supported since 2.2.11; see http://wiki2.dovecot.org/Plugins/Zlib 
(a configuration sketch follows at the end of this message).

> http://www.blosc.org/. They claim "Designed to transmit data to the
> processor cache faster than a memcpy() OS call.") is effectively free
> with todays CPUs.
> 
> Regards,
> Sven.
> 
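As a rough illustration of the setting referred to above (a sketch only, not a 
tested configuration; see the wiki page for the authoritative syntax), enabling 
lz4 compression for newly saved mails with the zlib plugin looks roughly like:

mail_plugins = $mail_plugins zlib
plugin {
  zlib_save = lz4
}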


Re: lmtp timeout, locks and crashes

2016-03-15 Thread Teemu Huovila

On 15.03.2016 12:29, Aki Tuomi wrote:
> 
> 
> On 15.03.2016 12:28, Tom Sommer wrote:
>>
>> On 2016-03-15 10:53, Tom Sommer wrote:
>>> I'm seeing some problems on accounts which get a lot of spam (like, a lot).
>>>
>>> I get these errors:
>>
>> When I do a process-list I see a lot of stuck lmtp processes on the same 
>> account:
>>
>> 16180 ?D  0:00  \_ dovecot/lmtp [DATA 172.17.165.14 xxx@xxx]
>> 16181 ?D  0:00  \_ dovecot/lmtp [DATA 172.17.165.14 xxx@xxx]
>>
>> x 600
>>
>> // Tom
> And you are sure this is not related to your NFS?
As a workaround, you could also try different low settings for 
lmtp_user_concurrency_limit and see whether that removes the lock contention 
while keeping LMTP performance bearable.
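For example (a sketch only; 0 is the default and means unlimited, so the value 
below is just an illustration to start tuning from):

lmtp_user_concurrency_limit = 5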

You do not have any external programs touching the maildir at the same time, 
right?

br,
Teemu Huovila


Re: search problem dovecot 2.2.21 + fts - Solr

2016-02-25 Thread Teemu Huovila


On 24.02.2016 21:14, Anderson Barbosa wrote:
> Hello,
> 
> Realized update dovecot on my server. Now the search is returning
> differently from the previous version bringing reference information of
> other messages .
> For example when doing a search for anderson.joao this new version of the
> dovecot dovecot 2.2.21 + fts - Solr response will be all email that has the
> word anderson and joao, instead of returning only items with the word
> anderson.joao.
> 
> Before used version 2.2.18 + dovecot fts - Solr and the problem did not
> occur .
> For example practical test :
> 
> Dovecot 2.2.18
> 
> 
> # telnet SERVER 143
> Trying SERVER...
> Connected to SERVER.
> Escape character is '^]'.
> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE
> AUTH=PLAIN] Zimbra IMAP4.
> a login co...@conta.com.br 1223456
> a OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE
> SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS THREAD=ORDEREDSUBJECT
> MULTIAPPEND URL-PARTIAn
> a select  INBOX
> * FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
> * OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)] Flags
> permitted.
> * 14 EXISTS
> * 0 RECENT
> * OK [UIDVALIDITY 1452548222] UIDs valid
> * OK [UIDNEXT 25] Predicted next UID
> * OK [HIGHESTMODSEQ 52] Highest
> a OK [READ-WRITE] Select completed (0.001 secs).
> a SEARCH text "anderson"
> * SEARCH 11 12 (2 found emails)
> a OK Search completed (0.265 secs).
> a SEARCH text "joao"
> * SEARCH 13 14 (2 found emails)
> a OK Search completed (0.003 secs).
> a SEARCH text "anderson.joao"
> * SEARCH (0 found emails)
> a OK Search completed (0.004 secs).
> a logout
> * BYE Logging out
> a OK Logout completed.
> Connection closed by foreign host.
> 
> 
> Dovecot 2.2.21
> 
> # telnet SERVER 143
> Trying SERVER...
> Connected to SERVER.
> Escape character is '^]'.
> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE
> AUTH=PLAIN] Zimbra IMAP4.
> a login co...@conta.com.br 1223456
> a OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE
> SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS THREAD=ORDEREDSUBJECT
> MULTIAPPEND URL-PARTIAn
> a select INBOX
> * FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
> * OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)] Flags
> permitted.
> * 14 EXISTS
> * 0 RECENT
> * OK [UIDVALIDITY 1452548222] UIDs valid
> * OK [UIDNEXT 25] Predicted next UID
> * OK [HIGHESTMODSEQ 52] Highest
> a OK [READ-WRITE] Select completed (0.000 + 0.000 secs).
> a SEARCH text "anderson"
> * SEARCH 11 12 (2 found emails)
> a OK Search completed (0.004 + 0.000 secs).
> a SEARCH text "joao" (2 found emails)
> * SEARCH 13 14
> a OK Search completed (0.005 + 0.000 secs).
> a SEARCH text "anderson.joao"
> * SEARCH 11 12 13 14 *(4 found emails)*
> a OK Search completed (0.005 + 0.000 secs).
> a logout
> * BYE Logging out
> a OK Logout completed.
> Connection closed by foreign host.
> 
> Even using characters Special "" \ scape, ' ' for an answer will always be
> all emails with the word anderson and joao.
> Checking the Changelog dovecot noticed que NAS versions Previous v2.2.20
> and v2.2.19 certain changes with respect to fts .
> 
> There Have Another way to Make Search for Exact Word In this new version to
> loft?

This is most likely fixed by 
https://github.com/dovecot/core/commit/f3b0efdcbd0bd9059574c8f86d6cb43e16c8e521
The fix will be included in 2.2.22, which will hopefully be released some time 
in mid-March.
If you can, please test with a build from the current git master tip and let us 
know if the problem persists.

br,
Teemu Huovila


Re: [BUG] 2.2.21 Panic: file imap-client.c: line 841 (client_check_command_hangs): assertion failed: (!have_wait_unfinished || unfinished_count > 0)

2016-01-04 Thread Teemu Huovila


On 04.01.2016 12:28, Florian Pritz wrote:
> Hi,
> 
> I'm seeing the following in my logs. Twice so far, no idea what caused
> it. I do however have the core dump if that is of any use.
> 
>> Jan  4 11:14:11 karif dovecot[20876]: imap(username): Panic: file 
>> imap-client.c: line 841 (client_check_command_hangs): assertion failed: 
>> (!have_wait_unfinished || unfinished_count > 0)
>> Jan  4 11:14:11 karif dovecot[20876]: imap(username): Error: Raw backtrace: 
>> /usr/lib/dovecot/libdovecot.so.0(+0xa64ea) [0x7f6f99fa64ea] -> 
>> /usr/lib/dovecot/libdovecot.so.0(+0xa7a18) [0x7f6f99fa7a18] -> 
>> /usr/lib/dovecot/libdovecot.so.0(i_
>> fatal+0) [0x7f6f99fa686d] -> dovecot/imap() [0x41dde6] -> 
>> dovecot/imap(client_continue_pending_input+0xd6) [0x41df50] -> 
>> dovecot/imap() [0x4122a9] -> 
>> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0xcd) [0x7f6f99fc3b01] -> 
>> /usr/lib/dovec
>> ot/libdovecot.so.0(io_loop_handler_run_internal+0x209) [0x7f6f99fc606e] -> 
>> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x24) [0x7f6f99fc3caa] 
>> -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0xaf) [0x7f6f99fc3bf6] -> 
>> /usr/lib/dovec
>> ot/libdovecot.so.0(master_service_run+0x2e) [0x7f6f99f317af] -> 
>> dovecot/imap(main+0x2da) [0x430da2] -> 
>> /usr/lib/libc.so.6(__libc_start_main+0xf0) [0x7f6f99b7c610] -> 
>> dovecot/imap(_start+0x29) [0x40c639]
>> Jan  4 11:14:11 karif dovecot[20876]: imap(username): Fatal: master: 
>> service(imap): child 19463 killed with signal 6 (core dumped)
> 
> In case it is easier to read, here's a gdb backtrace:
> 
>> #0  0x7f6f99b8f5f8 in raise () from /usr/lib/libc.so.6
>> #1  0x7f6f99b90a7a in abort () from /usr/lib/libc.so.6
>> #2  0x7f6f99fa6539 in default_fatal_finish (type=LOG_TYPE_PANIC, 
>> status=0) at failures.c:201
>> #3  0x7f6f99fa7a18 in i_internal_fatal_handler (ctx=0x7ffe660d4700, 
>> format=0x438a60 "file %s: line %d (%s): assertion failed: (%s)", 
>> args=0x7ffe660d4720) at failures.c:670
>> #4  0x7f6f99fa686d in i_panic (format=0x438a60 "file %s: line %d (%s): 
>> assertion failed: (%s)") at failures.c:275
>> #5  0x0041dde6 in client_check_command_hangs (client=0x2363450) at 
>> imap-client.c:841
>> #6  0x0041df50 in client_continue_pending_input (client=0x2363450) 
>> at imap-client.c:884
>> #7  0x004122a9 in idle_client_input (ctx=0x23642a8) at cmd-idle.c:111
>> #8  0x7f6f99fc3b01 in io_loop_call_io (io=0x2374600) at ioloop.c:559
>> #9  0x7f6f99fc606e in io_loop_handler_run_internal (ioloop=0x232c740) at 
>> ioloop-epoll.c:220
>> #10 0x7f6f99fc3caa in io_loop_handler_run (ioloop=0x232c740) at 
>> ioloop.c:607
>> #11 0x7f6f99fc3bf6 in io_loop_run (ioloop=0x232c740) at ioloop.c:583
>> #12 0x7f6f99f317af in master_service_run (service=0x232c5e0, 
>> callback=0x430a35 ) at master-service.c:640
>> #13 0x00430da2 in main (argc=1, argv=0x232c390) at main.c:442
Thank you for the report. Could you execute "bt full" in gdb, please? Also, the 
output of doveconf -n would be useful.

br,
Teemu Huovila


Re: Replication - hostname issue

2016-01-04 Thread Teemu Huovila


On 03.01.2016 20:53, Petter Gunnerud wrote:
> I have postfix/dovecot setup on a virtual gentoo server. It's been in service 
> for almost two years without any issues.
> Now I want to set up a spare server to replicate mails from the one running. 
> I copied the vm files to the second host server, changed the ip, hostname and 
> hosts file for the copy and followed the dovecot replication doc.
> The last point in the doc tells to rundovecot --hostdomainto make sure the 
> hostnames differ... To my surprise the command returned "localhost" on both 
> servers.
> How do I set the hostname for dovecot service?(The hostname command returns 
> the servers hostnames.)
You can define what hostname Dovecot uses by setting the environment variable 
DOVECOT_HOSTDOMAIN.
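A sketch only (the hostname and the exact mechanism below are examples, not taken 
from your setup): export the variable in the environment that starts Dovecot, e.g. 
in the init script or service unit,

export DOVECOT_HOSTDOMAIN=mail1.example.com

and, depending on your setup, it may also need to be listed in dovecot.conf so it 
is passed down to the child processes:

import_environment = $import_environment DOVECOT_HOSTDOMAIN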

It would also be interesting to see what "getent hosts" returns on your 
servers. Using the "hostname" command is not quite the same thing.

> Will a change in dovecot hostname make all clients redownload all their mail?
I cannot speak for all clients, but I do not see how a change in the IMAP 
server name would make clients redownload email.


> A second question. When configuring dovecot replication. Should the settings 
> be done on the master server or the stand by server, or both? (Doc doesn't 
> say anything regarding this)
> PG
This depends on how you want the syncing to be done. If it is two-way 
master-master, then you need to configure it on both servers. I think that is 
probably what you want anyway.

br,
Teemu Huovila


Re: Dovecot 2.2.20 autoexpunge

2015-12-22 Thread Teemu Huovila


On 22.12.2015 19:24, Robert Blayzor wrote:
>> On Dec 22, 2015, at 9:49 AM, Dominik Breu <domi...@dominikbreu.de> wrote:
>> the autoexpunge feature only removes mails wich have the  \Delete Flag so no 
>> deletion of mails wich doesn't have this Flag(see 
>> https://tools.ietf.org/html/rfc4315#section-2.1 or 
>> http://wiki2.dovecot.org/Tools/Doveadm/Expunge)
>>
>> you could run a cron job with a doveadm move comando to move the mail to an 
>> othe mailbox (see http://wiki2.dovecot.org/Tools/Doveadm/Move)
> 
> 
> Ok, but that’s not how the documentation for expire or this features reads. 
> According to the docs it’s based off the “saved timestamp”, not a deleted 
> flag? So it’s based off “saved timestamp” *and* deleted flag?
The autoexpunge feature does not check the \Deleted flag.
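For illustration, a minimal sketch of the mailbox-level setting (the folder name 
and age are examples only): mails whose save timestamp is older than the given 
age are expunged regardless of the \Deleted flag.

namespace inbox {
  mailbox Trash {
    autoexpunge = 30d
  }
}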

Are any errors logged in "doveadm log errors"? Could you post the complete 
output of doveconf -n, please?

br,
Teemu Huovila


Re: EVP_PKEY_get1_EC_KEY:expecting a ec key

2015-12-07 Thread Teemu Huovila


On 07.12.2015 12:23, Oliver Riesen-Mallmann wrote:
> Hi,
> 
> since my last update from the Dovecot Prebuilt Binary for Debian I get a
> lot of messages like this in mail.log:
> 
> dovecot: imap-login: Error: SSL: Stacked error: error:0608308E:digital
> envelope routines:EVP_PKEY_get1_EC_KEY:expecting a ec key
> 
> Nevertheless Dovecot seems to work normally. Email client doesn't
> mention any error.
> 
> This was my last update:
> 
> Start-Date: 2015-12-04  14:00:31
> Commandline: apt-get upgrade
> Upgrade: dovecot-core:amd64 (2.2.19-1~auto+98, 2.2.20~rc1-1~auto+3),
> dovecot-managesieved:amd64 (2.2.19-1~auto+98, 2.2.20~rc1-1~auto+3),
> dovecot-sieve:amd64 (2.2.19-1~auto+98, 2.2.20~rc1-1~auto+3),
> dovecot-imapd:amd64 (2.2.19-1~auto+98, 2.2.20~rc1-1~auto+3),
> dovecot-pop3d:amd64 (2.2.19-1~auto+98, 2.2.20~rc1-1~auto+3),
> openssl:amd64 (1.0.1e-2+deb7u17, 1.0.1e-2+deb7u18),
> libssl1.0.0:amd64 (1.0.1e-2+deb7u17, 1.0.1e-2+deb7u18)
> End-Date: 2015-12-04  14:00:53
> 
> My current Dovecot version: 2.2.20.rc1 (0b81127e53da)
> on Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u1 x86_64
> 
> Is it a bug in Dovecot or in openssl/libssl?
Could you post your doveconf -n output?

br,
Teemu Huovila


Re: Patch for Dovecot 2.2.19 compilation failure

2015-11-06 Thread Teemu Huovila
Contributions of all sizes are appreciated. Thank you.

br,
Teemu Huovila

On 06.11.2015 00:21, Hammer, Adam wrote:
> I’m almost embarrassed to submit the following patch.
> 
>  cut here 
> 
> --- dovecot-2.2.19/src/lib/bits.h.orig  Thu Mar 19 08:42:32 2015
> +++ dovecot-2.2.19/src/lib/bits.h   Thu Nov  5 15:10:35 2015
> @@ -10,7 +10,9 @@
>  
>  #include 
>  #include 
> -#include 
> +#ifdef HAVE_STDINT_H
> +#  include 
> +#endif
>  
>  #define UINT64_SUM_OVERFLOWS(a, b) \
> (a > (uint64_t)-1 - b)
> 
> 


Re: updating and wsitching repo to yum.dovecot.fi - Unknown protocol: sieve

2015-10-30 Thread Teemu Huovila


On 30.10.2015 12:18, Götz Reinicke - IT Koordinator wrote:
> Hi,
> 
> winter is coming and so I start to clean up some left overs of the year.
> 
> One thing is to use the yum.dovecot.fi repository.
> 
> After installing the current availabel dovecot and dovecot-ee-pigeonhole
> package and restarting dovecot I do get the error:
> 
> 
> doveconf: Fatal: Error in configuration file /etc/dovecot/dovecot.conf:
> protocols: Unknown protocol: sieve
Could you please reply with the output of doveconf -n?


> 
> 
> Is the sieve protocol an extra package? I thought in the 2.2. tree I
> dont have to do bigger config changes.
> 
> We run already 2.2.something from city-fan.org and the switch was also
> the idea of going to the most recent release.
> 
> 
>   Thanks for hints and feedback . Götz
> 


Re: updating and wsitching repo to yum.dovecot.fi - Unknown protocol: sieve

2015-10-30 Thread Teemu Huovila


On 30.10.2015 15:35, Götz Reinicke - IT Koordinator wrote:
> On 30.10.15 at 11:49, Teemu Huovila wrote:
>>
>>
>> On 30.10.2015 12:18, Götz Reinicke - IT Koordinator wrote:
>>> Hi,
>>>
>>> winter is coming and so I start to clean up some left overs of the year.
>>>
>>> One thing is to use the yum.dovecot.fi repository.
>>>
>>> After installing the current availabel dovecot and dovecot-ee-pigeonhole
>>> package and restarting dovecot I do get the error:
>>>
>>>
>>> doveconf: Fatal: Error in configuration file /etc/dovecot/dovecot.conf:
>>> protocols: Unknown protocol: sieve
>> Could you please reply with the output of doveconf -n
> 
> 
> my guess: in the currently used rpms the "managesieve" libs are
> included; for the official dovecot repo I do have to install the
> dovecot-ee-managesieve.rpm too...
Yes, if you have "protocols = sieve .." then you need the managesieve package 
too. Also, to use sieve filtering, you need to load the sieve plugin in 
mail_plugins for lmtp or lda. Please refer to 
http://wiki2.dovecot.org/Pigeonhole/Sieve/Configuration
http://wiki2.dovecot.org/Pigeonhole/ManageSieve/Configuration
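A minimal sketch of the relevant pieces (package names vary by distribution, and 
the rest of your configuration stays as it is); note that "sieve" only needs to 
be listed once in protocols:

protocols = imap pop3 lmtp sieve
protocol lmtp {
  mail_plugins = $mail_plugins sieve
}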

Teemu
 
> 
> # 2.2.18.2 (866bffbafde7): /etc/dovecot/dovecot.conf
> # OS: Linux 2.6.18-371.6.1.el5xen x86_64 CentOS release 5.11 (Final)
> auth_debug = yes
> auth_master_user_separator = *
> auth_mechanisms = plain login
> auth_verbose = yes
> default_client_limit = 4000
> default_process_limit = 4000
> disable_plaintext_auth = no
> log_path = /var/log/dovecot.log
> login_trusted_networks = 193.196.129.21
> mail_debug = yes
> mail_location = maildir:~/Maildir
> mail_plugins = mail_log notify quota acl
> mail_privileged_group = mail
> mdbox_rotate_size = 10 M
> namespace {
>   list = children
>   location = maildir:%%h/Maildir:INDEX=%h/shared/%%u:CONTROL=%h/shared/%%u
>   prefix = shared/%%u/
>   separator = /
>   subscriptions = yes
>   type = shared
> }
> namespace inbox {
>   inbox = yes
>   location =
>   mailbox Drafts {
> special_use = \Drafts
>   }
>   mailbox Junk {
> special_use = \Junk
>   }
>   mailbox Sent {
> special_use = \Sent
>   }
>   mailbox "Sent Messages" {
> special_use = \Sent
>   }
>   mailbox Trash {
> special_use = \Trash
>   }
>   prefix =
>   separator = /
> }
> passdb {
>   args = /etc/dovecot/master-users
>   driver = passwd-file
>   master = yes
> }
> passdb {
>   args = /etc/dovecot/dovecot-ldap.conf.ext
>   driver = ldap
> }
> plugin {
>   acl = vfile
>   acl_shared_dict = file:/var/lib/dovecot/db/shared-mailboxes
>   quota = dict:User quota::noenforcing:file:%h/dovecot-quota
>   quota_rule = *:storage=5G
>   quota_rule2 = Trash:storage=+100M
>   quota_warning = storage=95%% quota-warning 95 %u
>   quota_warning2 = storage=80%% quota-warning 80 %u
>   sieve = ~/.dovecot.sieve
>   sieve_dir = ~/sieve
> }
> postmaster_address = postmas...@filmakademie.de
> protocols = imap pop3 lmtp sieve sieve
> quota_full_tempfail = yes
> service auth {
>   unix_listener /var/spool/postfix/private/auth {
> mode = 0666
>   }
>   unix_listener auth-userdb {
> group = vmail
> user = vmail
>   }
>   user = root
> }
> service imap-login {
>   process_limit = 1024
>   process_min_avail = 16
>   service_count = 0
> }
> service imap {
>   process_limit = 1024
> }
> service lmtp {
>   inet_listener lmtp {
> address = 127.0.0.1
> port = 24
>   }
> }
> service managesieve-login {
>   inet_listener sieve {
> port = 4190
>   }
>   service_count = 1
> }
> service managesieve {
>   process_limit = 1024
> }
> service pop3-login {
>   process_limit = 1024
>   process_min_avail = 16
>   service_count = 0
> }
> service pop3 {
>   process_limit = 1024
> }
> service quota-warning {
>   executable = script /usr/local/bin/quota-warning.sh
>   unix_listener quota-warning {
> user = vmail
>   }
>   user = dovecot
> }
> ssl_ca =  ssl_cert =  ssl_cipher_list =
> DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ALL:!LOW:!SSLv2:!EXP:!aNULL
> ssl_key =  ssl_prefer_server_ciphers = yes
> userdb {
>   args = /etc/dovecot/dovecot-ldap.conf.ext
>   driver = ldap
> }
> verbose_proctitle = yes
> doveconf: Error: protocols: Unknown protocol: sieve
> protocol lmtp {
>   info_log_path = /var/log/dovecot-lmtp.log
>   log_path = /var/log/dovecot-lmtp-errors.log
>   mail_plugins = mail_log notify quota acl sieve
> }
> protocol imap {
>   mail_max_userip_connections = 20
>   mail_plugins = mail_log notify quota acl imap_zlib imap_quota imap_acl
> }
> protocol pop3 {
>   mail_max_userip_connections = 20
> }
> doveconf: Fatal: Error in configuration file /etc/dovecot/dovecot.conf:
> protocols: Unknown protocol: sieve
> 
> 


Re: events

2015-10-29 Thread Teemu Huovila


On 28.10.2015 19:11, Alessio Cecchi wrote:
> Il 26.10.2015 12:04 Teemu Huovila ha scritto:
>> On 26.10.2015 12:44, Frederik Bosch | Genkgo wrote:
>>> Teemu,
>>>
>>> If just need the http request, I will need something like the following 
>>> configuration, right? So no meta data plugin, but with notify and 
>>> push_notification?
>>>
>>> protocol lmtp {
>>>   mail_plugins = $mail_plugins notify push_notification
>>> }
>>>
>>> plugin {
>>>  push_notification_driver = ox:url=http://myurl/ 
>>> <http://login:p...@node1.domain.tld:8009/preliminary/http-notify/v1/notify>
>>> }
>> You could test that, but my understanding of the ox push driver code
>> is that it completely depends on metadata and will not do anything
>> useful, if no metadata is set. Perhaps Michael can correct me, if Im
>> wrong.
>>
>> If you want some subset of the ox driver functionality, you could try
>> implementing your own driver, based on the existing code.
> 
> Hi, I'm interested to testing push_notification with ox driver (I need only a 
> GET when a new message arrived) but I don't understand how to insert METADATA 
> information via IMAP for an user.
The notification is done with an HTTP PUT. The IMAP METADATA is set with the 
SETMETADATA command (https://tools.ietf.org/html/rfc5464#section-4.3).

As to how to register for the notifications, the best documentation is probably 
the source. You can see it in either 
http://hg.dovecot.org/dovecot-2.2/file/9654ab4c337c/src/plugins/push-notification/push-notification-driver-ox.c
or, maybe more easily, in the OX source code file 
backend/com.openexchange.push.dovecot/src/com/openexchange/push/dovecot/commands/RegistrationCommand.java.
You can get the backend source with git clone 
https://code.open-xchange.com/git/wd/backend

An example would be something like: SETMETADATA "" 
(/private/vendor/vendor.dovecot/http-notify "user=myusername")

br,
Teemu Huovila


Re: events

2015-10-26 Thread Teemu Huovila


On 26.10.2015 09:45, Frederik Bosch | Genkgo wrote:
> Ah fantastic. Now I guess I can use notify plugin without push_notification 
> metadata plugins, right?
I'm not sure I understand the question correctly. I understood from the thread 
that you would be writing a driver for the push-notification plugin, so you 
need to load that plugin. In case you mean the imap_metadata = yes setting, you 
do not need that if your driver does not use metadata.
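If your driver did use metadata, enabling it would look roughly like this (a 
sketch only; the dict path is an example):

imap_metadata = yes
mail_attribute_dict = file:%h/dovecot-attributes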

Teemu

> 
> On 26-10-15 08:36, Teemu Huovila wrote:
>>
>> On 26.10.2015 08:59, Frederik Bosch | Genkgo wrote:
>>> Thanks again. Final question: how do I configure this plugin?
>> As the only existing driver at the moment is the OX one, the plugin is 
>> documented in OX wiki at
>> http://oxpedia.org/wiki/index.php?title=AppSuite:OX_Mail#Setup_of_the_Dovecot_Push
>>
>> br,
>> Teemu Huovila
>>
>>>
>>>
>>> On 23-10-15 16:12, Michael M Slusarz wrote:
>>>> On 10/22/2015 12:46 AM, Frederik Bosch | Genkgo wrote:
>>>>
>>>>> Thanks a lot! After looking at the source, I guess the ox driver will
>>>>> do. Maybe, when other people find this thread, you could tell what dlog
>>>>> is. Because I do not know it, and googling came up with little results.
>>>> "dlog" is nothing more than a push-notification backend that will log 
>>>> various information and hook triggers (at a DEBUG level) to the Dovecot 
>>>> log.  It's meant for debugging and development purposes.
>>>>
>>>> "dlog" stands for either "Dovecot LOGging" or "Debug LOGging", whichever 
>>>> you prefer.
>>>>
>>>> michael
>>>>
>>>>
>>>>> On 21-10-15 23:33, Michael M Slusarz wrote:
>>>>>> On 10/21/2015 9:07 AM, Frederik Bosch | Genkgo wrote:
>>>>>>
>>>>>>> We want to trigger a script after certain actions by the user (event).
>>>>>>> This script inserts the action into message queue (e.g. Rabbit MQ)
>>>>>>> accompanied with some data. Then one or more workers picks up the action
>>>>>>> from the message queue and do something with it. The question is: how
>>>>>>> can I trigger the script from dovecot?
>>>>>> This is precisely what the new push-notification plugin is for
>>>>>> (2.2.19).  (You will need to write a driver to interact with your
>>>>>> notification handler, similar to the "dlog" or "ox" drivers.)
>>>>>>
>>>>>> michael


Re: events

2015-10-26 Thread Teemu Huovila


On 26.10.2015 12:44, Frederik Bosch | Genkgo wrote:
> Teemu,
> 
> If just need the http request, I will need something like the following 
> configuration, right? So no meta data plugin, but with notify and 
> push_notification?
> 
> protocol lmtp {
>   mail_plugins = $mail_plugins notify push_notification
> }
> 
> plugin {
>  push_notification_driver = ox:url=http://myurl/ 
> <http://login:p...@node1.domain.tld:8009/preliminary/http-notify/v1/notify>
> }
You could test that, but my understanding of the ox push driver code is that it 
completely depends on metadata and will not do anything useful if no metadata 
is set. Perhaps Michael can correct me if I'm wrong.

If you want some subset of the ox driver functionality, you could try 
implementing your own driver based on the existing code.

br,
Teemu

> 
> 
> Regards,
> Frederik
> 
> 
> 
> On 26-10-15 11:35, Teemu Huovila wrote:
>>
>> On 26.10.2015 09:45, Frederik Bosch | Genkgo wrote:
>>> Ah fantastic. Now I guess I can use notify plugin without push_notification 
>>> metadata plugins, right?
>> Im not sure I understand the question correctly. I understood from the 
>> thread that you would be writing a driver for the push-notification plugin, 
>> so you need to load that plugin. In case you mean the imap_metadata = yes 
>> setting, you do not need that, if your driver does not use metadata.
>>
>> Teemu
>>
>>> On 26-10-15 08:36, Teemu Huovila wrote:
>>>> On 26.10.2015 08:59, Frederik Bosch | Genkgo wrote:
>>>>> Thanks again. Final question: how do I configure this plugin?
>>>> As the only existing driver at the moment is the OX one, the plugin is 
>>>> documented in OX wiki at
>>>> http://oxpedia.org/wiki/index.php?title=AppSuite:OX_Mail#Setup_of_the_Dovecot_Push
>>>>
>>>> br,
>>>> Teemu Huovila
>>>>
>>>>>
>>>>> On 23-10-15 16:12, Michael M Slusarz wrote:
>>>>>> On 10/22/2015 12:46 AM, Frederik Bosch | Genkgo wrote:
>>>>>>
>>>>>>> Thanks a lot! After looking at the source, I guess the ox driver will
>>>>>>> do. Maybe, when other people find this thread, you could tell what dlog
>>>>>>> is. Because I do not know it, and googling came up with little results.
>>>>>> "dlog" is nothing more than a push-notification backend that will log 
>>>>>> various information and hook triggers (at a DEBUG level) to the Dovecot 
>>>>>> log.  It's meant for debugging and development purposes.
>>>>>>
>>>>>> "dlog" stands for either "Dovecot LOGging" or "Debug LOGging", whichever 
>>>>>> you prefer.
>>>>>>
>>>>>> michael
>>>>>>
>>>>>>
>>>>>>> On 21-10-15 23:33, Michael M Slusarz wrote:
>>>>>>>> On 10/21/2015 9:07 AM, Frederik Bosch | Genkgo wrote:
>>>>>>>>
>>>>>>>>> We want to trigger a script after certain actions by the user (event).
>>>>>>>>> This script inserts the action into message queue (e.g. Rabbit MQ)
>>>>>>>>> accompanied with some data. Then one or more workers picks up the 
>>>>>>>>> action
>>>>>>>>> from the message queue and do something with it. The question is: how
>>>>>>>>> can I trigger the script from dovecot?
>>>>>>>> This is precisely what the new push-notification plugin is for
>>>>>>>> (2.2.19).  (You will need to write a driver to interact with your
>>>>>>>> notification handler, similar to the "dlog" or "ox" drivers.)
>>>>>>>>
>>>>>>>> michael
> 


Re: events

2015-10-26 Thread Teemu Huovila


On 26.10.2015 08:59, Frederik Bosch | Genkgo wrote:
> Thanks again. Final question: how do I configure this plugin?
As the only existing driver at the moment is the OX one, the plugin is 
documented in the OX wiki at
http://oxpedia.org/wiki/index.php?title=AppSuite:OX_Mail#Setup_of_the_Dovecot_Push

br,
Teemu Huovila

> 
> 
> 
> On 23-10-15 16:12, Michael M Slusarz wrote:
>> On 10/22/2015 12:46 AM, Frederik Bosch | Genkgo wrote:
>>
>>> Thanks a lot! After looking at the source, I guess the ox driver will
>>> do. Maybe, when other people find this thread, you could tell what dlog
>>> is. Because I do not know it, and googling came up with little results.
>>
>> "dlog" is nothing more than a push-notification backend that will log 
>> various information and hook triggers (at a DEBUG level) to the Dovecot log. 
>>  It's meant for debugging and development purposes.
>>
>> "dlog" stands for either "Dovecot LOGging" or "Debug LOGging", whichever you 
>> prefer.
>>
>> michael
>>
>>
>>> On 21-10-15 23:33, Michael M Slusarz wrote:
>>>> On 10/21/2015 9:07 AM, Frederik Bosch | Genkgo wrote:
>>>>
>>>>> We want to trigger a script after certain actions by the user (event).
>>>>> This script inserts the action into message queue (e.g. Rabbit MQ)
>>>>> accompanied with some data. Then one or more workers picks up the action
>>>>> from the message queue and do something with it. The question is: how
>>>>> can I trigger the script from dovecot?
>>>>
>>>> This is precisely what the new push-notification plugin is for
>>>> (2.2.19).  (You will need to write a driver to interact with your
>>>> notification handler, similar to the "dlog" or "ox" drivers.)
>>>>
>>>> michael


Re: IMAP hibernate feature committed

2015-08-28 Thread Teemu Huovila
On 08/27/2015 07:39 PM, Thomas Leuxner wrote:
 * Teemu Huovila teemu.huov...@dovecot.fi 2015.08.27 13:58:
 
 Did you specify a value other than zero for 'imap_hibernate_timeout'?
 
 Yes I did:
 
 $ doveconf imap_hibernate_timeout
 imap_hibernate_timeout = 1 mins
 
 I sometimes see one imap-hibernate process (only one), but several imap 
 processes active which should be idling... 
Does "should be" mean you know or suspect that the clients issued the IMAP IDLE 
command more than one minute ago?
If yes, and you don't see any errors in the Dovecot logs, I do not know why that is.

Teemu


Re: question on autch cache parameters

2015-08-27 Thread Teemu Huovila
Hello

Thank you for your report. We really appreciate it, especially when you can 
pinpoint a commit.

However, I am unable to reproduce this. Could you post your doveconf -n, please? 
I'm especially interested in your passdb and userdb configurations and 
auth-cache settings.

br,
Teemu Huovila


On 08/06/2015 01:07 PM, matthias lay wrote:
 hi timo,
 
 I checked out the commit causing this.
 
 its this one:
 
 http://hg.dovecot.org/dovecot-2.2/diff/5e445c659f89/src/auth/auth-request.c#l1.32
 
 
 if I move this block back as it was. everything is fine
 
 
 diff -r a46620d6e0ff -r 5e445c659f89 src/auth/auth-request.c
 --- a/src/auth/auth-request.c Tue May 05 13:35:52 2015 +0300
 +++ b/src/auth/auth-request.c Tue May 05 14:16:31 2015 +0300
 @@ -618,30 +627,28 @@
  auth_request_want_skip_passdb(request, next_passdb))
   next_passdb = next_passdb-next;
 
 + if (*result == PASSDB_RESULT_OK) {
 + /* this passdb lookup succeeded, preserve its extra fields */
 + auth_fields_snapshot(request-extra_fields);
 + request-snapshot_have_userdb_prefetch_set =
 + request-userdb_prefetch_set;
 + if (request-userdb_reply != NULL)
 + auth_fields_snapshot(request-userdb_reply);
 + } else {
 + /* this passdb lookup failed, remove any extra fields it set */
 + auth_fields_rollback(request-extra_fields);
 + if (request-userdb_reply != NULL) {
 + auth_fields_rollback(request-userdb_reply);
 + request-userdb_prefetch_set =
 + request-snapshot_have_userdb_prefetch_set;
 + }
 + }
 +
   if (passdb_continue  next_passdb != NULL) {
   /* try next passdb. */
  request-passdb = next_passdb;
   request-passdb_password = NULL;
 
 - if (*result == PASSDB_RESULT_OK) {
 - /* this passdb lookup succeeded, preserve its extra
 -fields */
 - auth_fields_snapshot(request-extra_fields);
 - request-snapshot_have_userdb_prefetch_set =
 - request-userdb_prefetch_set;
 - if (request-userdb_reply != NULL)
 - auth_fields_snapshot(request-userdb_reply);
 - } else {
 - /* this passdb lookup failed, remove any extra fields
 -it set */
 - auth_fields_rollback(request-extra_fields);
 - if (request-userdb_reply != NULL) {
 - auth_fields_rollback(request-userdb_reply);
 - request-userdb_prefetch_set =
 - 
 request-snapshot_have_userdb_prefetch_set;
 - }
 - }
 -
   if (*result == PASSDB_RESULT_USER_UNKNOWN) {
   /* remember that we did at least one successful
  passdb lookup */
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 On 08/05/2015 05:33 PM, matthias lay wrote:
 just tested against dovecot 2.2.15

 everythings works fine. so might be a bug introduced between 2.2.16 and
 2.2.18





 On 08/05/2015 04:30 PM, matthias lay wrote:
 Hi list,

 I have a question on auth caching in 2.2.18.

 I am using acl_groups for a master user, appended in a static userdb file

 # snip ###
 master@uma:{SHA}=::userdb_acl_groups=umareadmaster
 allow_nets=127.0.0.1
 # snap ###

 and use this group in a global ACL file.
 I discovered this only works on first NOT-cached login



 environment in imap-postlogin script on first login:


 AUTH_TOKEN=e96b5a32ceb2cafc4460c210ad2e92e3d7ab388c
 MASTER_USER=master@uma
 SPUSER=private/pdf
 LOCAL_IP=127.0.0.1
 USER=pdf
 AUTH_USER=master@uma
 PWD=/var/run/dovecot
 USERDB_KEYS=ACL_GROUPS HOME SPUSER MASTER_USER AUTH_TOKEN AUTH_USER
 SHLVL=1
 HOME=/var/data/vmail/private/pdf
 ACL_GROUPS=umareadmaster
 IP=127.0.0.1
 _=/usr/bin/env


 on the second cached login it looks like this


 AUTH_TOKEN=12703b11932f233520f6d4b33559c33aeb1cfc7f
 MASTER_USER=master@uma
 SPUSER=private/pdf
 LOCAL_IP=127.0.0.1
 USER=pdf
 AUTH_USER=master@uma
 PWD=/var/run/dovecot
 USERDB_KEYS=HOME SPUSER MASTER_USER AUTH_TOKEN AUTH_USER
 SHLVL=1
 HOME=/var/data/vmail/private/pdf
 IP=127.0.0.1
 _=/usr/bin/env

 so the ACL_GROUPS is gone.

 is this intended to be like that.
 so groups not included in cache and I have to find another approach?

 anybody else encountered similar problems with some auth Variables and
 caching?


 Greetz Matze




Re: IMAP hibernate feature committed

2015-08-27 Thread Teemu Huovila
On 08/26/2015 01:33 PM, Thomas Leuxner wrote:
 * Timo Sirainen t...@iki.fi 2015.08.25 22:21:
 
 There's no good default setting here. It depends on your userdb settings 
 and/or mail_uid setting. So for example if your imap processes are running 
 as vmail user, you should set service imap-hibernate { unix_listener 
 imap-hibernate { user = vmail } }. Then again if you are using system users 
 (or otherwise multiple UIDs) it gets more difficult to implement this 
 securely (mode=0666 works always, but security isn't too good). This same 
 problem exists for various other parts of Dovecot, for example 
 indexer-worker and dict services.
 
 I have it working (I guess) with these user settings (virtual users using 
 'vmail'):
 
 service imap-hibernate {
   unix_listener imap-hibernate {
 user = vmail
   }
 }
 
 I had to assign the imap-master socket the user the imap-hibernate process is 
 using to avoid messages like this:
 
 Aug 25 23:16:02 nihlus dovecot: imap-hibernate(t...@leuxner.net): Error: 
 net_connect_unix(/var/run/dovecot/imap-master) failed: Permission denied
 Aug 25 23:16:02 nihlus dovecot: imap-hibernate(t...@leuxner.net): Failed to 
 connect to master socket in=126 out=944 hdr=0 body=0 del=0 exp=0 trash=0
 
 service imap {
   unix_listener imap-master {
 user = dovecot
   }
 }
 
 With this I see messages like this in the logs:
 
 Aug 26 09:48:06 nihlus dovecot: imap-hibernate(t...@leuxner.net): Connection 
 closed in=189 out=4252 hdr=0 body=0 del=0 exp=0 trash=0
 Aug 26 12:20:29 nihlus dovecot: imap-hibernate(t...@leuxner.net): Connection 
 closed in=109 out=4714 hdr=0 body=0 del=0 exp=0 trash=0
 
 I'm a bit puzzled as to when hibernate actually kicks in because most of the 
 time I see normal imap processes running without them being hibernated:
Did you specify a value other than zero for 'imap_hibernate_timeout'?
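For example (a sketch only; the value is an illustration), any non-zero timeout 
enables moving idling IMAP connections to the imap-hibernate process:

imap_hibernate_timeout = 30s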

br,
Teemu

 $ ps aux | grep dovecot/imap
 dovenull  6791  0.0  0.0  18196  4772 ?S06:39   0:00 
 dovecot/imap-login
 dovenull  7107  0.0  0.0  18196  4736 ?S08:00   0:00 
 dovecot/imap-login
 dovenull  7112  0.0  0.0  18332  4492 ?S08:00   0:00 
 dovecot/imap-login
 dovenull  7333  0.0  0.0  18332  4772 ?S08:45   0:00 
 dovecot/imap-login
 dovenull  7675  0.0  0.0  18196  4628 ?S10:13   0:00 
 dovecot/imap-login
 dovenull  7677  0.0  0.0  18332  4532 ?S10:14   0:00 
 dovecot/imap-login
 dovenull  7821  0.0  0.0  18196  4532 ?S10:44   0:00 
 dovecot/imap-login
 dovenull  8156  0.0  0.0  18196  4756 ?S12:01   0:00 
 dovecot/imap-login
 vmail 8157  0.0  0.0  45624  9608 ?S12:01   0:00 dovecot/imap
 dovenull  8158  0.0  0.0  18332  4628 ?S12:01   0:00 
 dovecot/imap-login
 vmail 8159  0.0  0.0  44772  9256 ?S12:01   0:00 dovecot/imap
 dovenull  8160  0.0  0.0  18196  4652 ?S12:01   0:00 
 dovecot/imap-login
 vmail 8161  0.0  0.0  46072  9760 ?S12:01   0:00 dovecot/imap
 dovenull  8162  0.0  0.0  18196  4548 ?S12:01   0:00 
 dovecot/imap-login
 dovenull  8279  0.0  0.0  18332  4736 ?S12:22   0:00 
 dovecot/imap-login
 vmail 8280  0.0  0.0  40712  5164 ?S12:22   0:00 dovecot/imap
 dovenull  8341  0.0  0.0  18196  4740 ?S12:25   0:00 
 dovecot/imap-login
 vmail 8344  0.0  0.0  46312 10568 ?S12:25   0:00 dovecot/imap
 


Re: dsync selectively

2015-06-18 Thread Teemu Huovila
On 06/17/2015 06:07 PM, lejeczek wrote:
 On 16/06/15 14:27, lejeczek wrote:
 On 16/06/15 14:16, lejeczek wrote:
 On 16/06/15 13:14, B wrote:
 P,

 On Tue, Jun 16, 2015 at 01:07:52PM +0100, lejeczek wrote:

 I've barely started reading on dsync and I wonder..
 would you know if it is possible to sync/replicate only specific
 domain(users)? or it's always the whole lot?
 See
 http://blog.dovecot.org/2012/02/dovecot-clustering-with-dsync-based.html

 basically set 'mail_replica' to 'remote:server3' in your userdb


 B

 thanks B,
 userdb as appose to plugin?
 it's quite unclear what to put there, to a beginner.

 also if I put mail_replica (having the rest, pretty much take form wiki in 
 repl.conf) into userdb
 I get:

 line 24: Unknown setting: mail_replica

 this userdb uses ldap driver in case it may matter, I guess it should not.

 gee, I cannot figure it out, and I'd guess it must be sort of typical 
 situation,
 where one would want to avoid replication os local/system users and only sync 
 a virtual domain(s), no?
 Can it be done by means of config files?
What the original answer meant was that you should put it in your userdb 
backend, in this case LDAP. So add a field in LDAP which, for users you want to 
replicate, points to the replication destination and for other users is blank. 
Then add it via an LDAP attribute template, e.g.

user_attrs = \
   =mail_replica=%{ldap:nameOfFieldContainingReplica}
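
For illustration, a hypothetical LDAP entry fragment that such a template could 
point at (the attribute name "mailReplica", the dn and the destination are all 
made-up examples; with it the template above would read 
=mail_replica=%{ldap:mailReplica}, and the attribute is simply omitted for users 
that must not be replicated):

dn: uid=jdoe,ou=users,dc=example,dc=com
mailReplica: tcp:server3.example.com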

Make sure (with auth_debug=yes and mail_debug=yes in your config) that 
mail_replica is empty for users you do not want to replicate.

Please read http://wiki2.dovecot.org/AuthDatabase/LDAP/Userdb 
http://wiki2.dovecot.org/Replication?highlight=%28mail_replica%29
and http://wiki2.dovecot.org/Tools/Doveadm/Sync?highlight=%28mail_replica%29 
carefully.

br,
Teemu Huovila


Re: Additional userdb variables in passwd [was Re: Dovecot Replication - Architecture Endianness?]

2015-05-08 Thread Teemu Huovila
On 05/07/2015 02:32 PM, Reuben Farrelly wrote:
 On 7/05/2015 7:49 AM, Timo Sirainen wrote:
 On 06 May 2015, at 13:52, Reuben Farrelly reuben-dove...@reub.net wrote:

 On 4/05/2015 11:06 PM, Teemu Huovila wrote:
 Also is there a way to restrict replication users aside from a crude hack 
 around system first and last UIDs?
 You can set the userdb to return an empty mail_replica variable for users 
 you want to exclude from replication.
 http://hg.dovecot.org/dovecot-2.2/rev/c1c67bdc8752

 br,
 Teemu Huovila

 One last question.  Is it possible to achieve this with system users and 
 PAM or do I need to basically create a new static
 userdb for system users?

 You can create a new userdb passwd-file that adds extra fields. So something 
 like:

 userdb {
driver = passwd
result_success = continue-ok
 }

 userdb {
driver = passwd-file
args = /etc/dovecot/passwd.extra
skip = notfound
 }
 
 This doesn't seem to work for me and my config has that exact config. My 
 password.extra file has just one line for the one
 account I am testing with at the moment:
 
 user1:::userdb_mail_replica=tcps:lightning.reub.net:4813,userdb_mail_replica=tcp:pi.x.y:4814
 
 This breaks access for other system users such as my own account which do not 
 have entries:
 
 ay  7 21:19:06 tornado.reub.net dovecot: imap-login: Internal login failure 
 (pid=22573 id=1) (internal failure, 1 successful
 auths): user=reuben, auth-method=PLAIN, remote=2001:44b8:31d4:1311::50, 
 local=2001:44b8:31d4:1310::20, TLS
 
 which then starts soon spitting this out 10s of times per second in the mail 
 log:
 
 May  7 21:19:32 tornado.reub.net dovecot: auth-worker(23738): Error: Auth 
 worker sees different passdbs/userdbs than auth
 server. Maybe config just changed and this goes away automatically?
 
 This is with -hg latest as of now.
 
 This system uses PAM for local users.  Do I need to replicate all of the 
 system users including those who do not need any extra
 settings, in the passwd.extra file too?
 
 Is my syntax above for two mail_replica servers correct?
I am a bit unsure about the config syntax, so I cannot advise on that, but there 
were some bugs in auth yesterday. Maybe you could retest with f2a8e1793718 or 
newer. Make sure the configs on both sides are in sync.

Thank you for your continued testing,
Teemu Huovila


Re: Bug#776094: dovecot-imapd: corrupts mailbox after trying to retrieve it (fwd)

2015-05-06 Thread Teemu Huovila
On 05/05/2015 05:26 PM, Santiago Vila wrote:
 I have just verified with IMAP commands. This is the procedure:
 
 telnet localhost 143
 
 and then type this:
 
 A0001 CAPABILITY
 A0002 LOGIN bluser bluser
 A0003 SELECT inbox-b
 A0004 EXPUNGE
 A0005 FETCH 1:12 RFC822.SIZE
 A0006 FETCH 1 RFC822.HEADER
 A0007 FETCH 1 BODY.PEEK[TEXT]
 A0008 STORE 1 +FLAGS (\Seen \Deleted)
 A0009 EXPUNGE
 A0010 FETCH 1 RFC822.HEADER
 A0011 FETCH 1 BODY.PEEK[TEXT]
 A0012 STORE 1 +FLAGS (\Seen \Deleted)
 A0013 EXPUNGE
 A0014 FETCH 1 RFC822.HEADER
 A0015 FETCH 1 BODY.PEEK[TEXT]
 A0016 STORE 1 +FLAGS (\Seen \Deleted)
 A0017 EXPUNGE
 A0018 FETCH 1 RFC822.HEADER
 A0019 FETCH 1 BODY.PEEK[TEXT]
 A0020 STORE 1 +FLAGS (\Seen \Deleted)
 A0021 EXPUNGE
 A0022 FETCH 1 RFC822.HEADER
 A0023 FETCH 1 BODY.PEEK[TEXT]
 A0024 STORE 1 +FLAGS (\Seen \Deleted)
 A0025 EXPUNGE
 A0026 FETCH 1 RFC822.HEADER
 A0027 FETCH 1 BODY.PEEK[TEXT]
 A0028 STORE 1 +FLAGS (\Seen \Deleted)
 A0029 EXPUNGE
 A0030 FETCH 1 RFC822.HEADER
 A0031 FETCH 1 BODY.PEEK[TEXT]
 A0032 STORE 1 +FLAGS (\Seen \Deleted)
 A0033 EXPUNGE
 A0034 FETCH 1 RFC822.HEADER
 A0035 FETCH 1 BODY.PEEK[TEXT]
 A0036 STORE 1 +FLAGS (\Seen \Deleted)
 A0037 EXPUNGE
 A0038 FETCH 1 RFC822.HEADER
 A0039 FETCH 1 BODY.PEEK[TEXT]
 A0040 STORE 1 +FLAGS (\Seen \Deleted)
 A0041 EXPUNGE
 A0042 FETCH 1 RFC822.HEADER
 A0043 FETCH 1 BODY.PEEK[TEXT]
 A0044 STORE 1 +FLAGS (\Seen \Deleted)
 A0045 EXPUNGE
 A0046 FETCH 1 RFC822.HEADER
 A0047 FETCH 1 BODY.PEEK[TEXT]
 A0048 STORE 1 +FLAGS (\Seen \Deleted)
 A0049 EXPUNGE
 A0050 FETCH 1 RFC822.HEADER
 A0051 LOGOUT
 
 After this, mbox folder inbox-b is corrupted, as the line saying
 
 From: abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvw...@example.com
 
 becomes
 
 rstuvwxyzabcdefghijklmnopqrstuvw...@example.com
 
 
 So: Could we please stop blaming fetchmail for this?
 It's just the messenger.
Could you also provide your dovecot -n output and any warnings and errors in 
the dovecot logs?

br,
Teemu Huovila


Re: Bug#776094: dovecot-imapd: corrupts mailbox after trying to retrieve it (fwd)

2015-05-06 Thread Teemu Huovila
On 05/06/2015 09:57 AM, Teemu Huovila wrote:
 On 05/05/2015 05:26 PM, Santiago Vila wrote:
 I have just verified with IMAP commands. This is the procedure:

 telnet localhost 143

 and then type this:

 A0001 CAPABILITY
 A0002 LOGIN bluser bluser
 A0003 SELECT inbox-b
 A0004 EXPUNGE
 A0005 FETCH 1:12 RFC822.SIZE
 A0006 FETCH 1 RFC822.HEADER
 A0007 FETCH 1 BODY.PEEK[TEXT]
 A0008 STORE 1 +FLAGS (\Seen \Deleted)
 A0009 EXPUNGE
 A0010 FETCH 1 RFC822.HEADER
 A0011 FETCH 1 BODY.PEEK[TEXT]
 A0012 STORE 1 +FLAGS (\Seen \Deleted)
 A0013 EXPUNGE
 A0014 FETCH 1 RFC822.HEADER
 A0015 FETCH 1 BODY.PEEK[TEXT]
 A0016 STORE 1 +FLAGS (\Seen \Deleted)
 A0017 EXPUNGE
 A0018 FETCH 1 RFC822.HEADER
 A0019 FETCH 1 BODY.PEEK[TEXT]
 A0020 STORE 1 +FLAGS (\Seen \Deleted)
 A0021 EXPUNGE
 A0022 FETCH 1 RFC822.HEADER
 A0023 FETCH 1 BODY.PEEK[TEXT]
 A0024 STORE 1 +FLAGS (\Seen \Deleted)
 A0025 EXPUNGE
 A0026 FETCH 1 RFC822.HEADER
 A0027 FETCH 1 BODY.PEEK[TEXT]
 A0028 STORE 1 +FLAGS (\Seen \Deleted)
 A0029 EXPUNGE
 A0030 FETCH 1 RFC822.HEADER
 A0031 FETCH 1 BODY.PEEK[TEXT]
 A0032 STORE 1 +FLAGS (\Seen \Deleted)
 A0033 EXPUNGE
 A0034 FETCH 1 RFC822.HEADER
 A0035 FETCH 1 BODY.PEEK[TEXT]
 A0036 STORE 1 +FLAGS (\Seen \Deleted)
 A0037 EXPUNGE
 A0038 FETCH 1 RFC822.HEADER
 A0039 FETCH 1 BODY.PEEK[TEXT]
 A0040 STORE 1 +FLAGS (\Seen \Deleted)
 A0041 EXPUNGE
 A0042 FETCH 1 RFC822.HEADER
 A0043 FETCH 1 BODY.PEEK[TEXT]
 A0044 STORE 1 +FLAGS (\Seen \Deleted)
 A0045 EXPUNGE
 A0046 FETCH 1 RFC822.HEADER
 A0047 FETCH 1 BODY.PEEK[TEXT]
 A0048 STORE 1 +FLAGS (\Seen \Deleted)
 A0049 EXPUNGE
 A0050 FETCH 1 RFC822.HEADER
 A0051 LOGOUT

 After this, mbox folder inbox-b is corrupted, as the line saying

 From: abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvw...@example.com

 becomes

 rstuvwxyzabcdefghijklmnopqrstuvw...@example.com


 So: Could we please stop blaming fetchmail for this?
 It's just the messenger.
 Could you also provide your dovecot -n output and any warnings and errors 
 in the dovecot logs?
Ah, found the dovecot -n earlier in the thread, but the logs would still be 
relevant.

Teemu


Re: Dovecot Replication - Architecture Endianness?

2015-05-04 Thread Teemu Huovila
On 05/03/2015 01:48 PM, Reuben Farrelly wrote:
 Hi all,
 
 I've had an interesting use case come up which - to cut the story short - one 
 way to solve the problem I am looking at may be to
 replicate a small number of mailboxes to a third remote server.
 
 I've currently had replication running between my main dovecot machine and 
 another remote VM for some time and working well (so
 I'm not new to replication and I've got a good working config), but I've a 
 need to add a third to the mix for a select number of
 mailboxes.  The arch on both of those is Gentoo x86_64 and with latest 2.1.16 
 -hg.
 
 I have attempted this so far by rsync'ing the initial Maildirs and then once 
 the bulk of the data has been transferred rely on
 dovecot's replication to keep things in sync.  I figure that this should in 
 theory mean that the subsequent updates in both
 directions are incremental and the bulk of the data gets moved while the 
 device is here on my desk using rsync.
 
 I've attempted to do this using a Raspberry Pi as a remote device, but when I 
 set it up the dovecot replication process seems to
 need to start the replication over from scratch even after the rsync is done. 
  I know this is happening as the disk utilisation
 on the Pi skyrockets once the replication starts and I end up with thousands 
 of double ups of all the mails ...  which defeats
 the entire point of the process.
 
 If I do an identical configuration but on a third Gentoo x86_64 VM locally it 
 all works as expected.  No double ups of mails and
 the catchup between the two devices is practically instant.  Same 
 filesystem even.  The only difference appears to be the
 system architecture.
 
 So main my question is this.  Is there a known architecture/endian limitation 
 on replication?   I guess cross-arch replication
 is not something many people try but is it supposed to work anyway?
I think you are bumping up against Dovecot index endianness restrictions. I don't 
think cross-arch dsync can currently work very
efficiently.
http://wiki2.dovecot.org/Design/Indexes/MainIndex?highlight=%28endian%29


 Also is there a way to restrict replication users aside from a crude hack 
 around system first and last UIDs?
You can set the userdb to return an empty mail_replica variable for users you 
want to exclude from replication.
http://hg.dovecot.org/dovecot-2.2/rev/c1c67bdc8752

br,
Teemu Huovila


Re: Crashes in dovecot -hg (86f535375750)

2015-05-04 Thread Teemu Huovila
On 04/28/2015 01:42 PM, Reuben Farrelly wrote:
 Seems there is some breakage with -hg latest - 2.2.16 (86f535375750+). I've 
 just had 4 core files created in short succession on
 both servers in the replication set.  Here's the first...
Does it work with 1081d57b524e or later?

br,
Teemu Huovila

 tornado reuben # gdb /usr/libexec/dovecot/imap core
 GNU gdb (Gentoo 7.9 vanilla) 7.9
 Copyright (C) 2015 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.  Type show copying
 and show warranty for details.
 This GDB was configured as x86_64-pc-linux-gnu.
 Type show configuration for configuration details.
 For bug reporting instructions, please see:
 http://bugs.gentoo.org/.
 Find the GDB manual and other documentation resources online at:
 http://www.gnu.org/software/gdb/documentation/.
 For help, type help.
 Type apropos word to search for commands related to word...
 Reading symbols from /usr/libexec/dovecot/imap...done.
 [New LWP 20929]
 
 warning: Could not load shared library symbols for linux-vdso.so.1.
 Do you need set solib-search-path or set sysroot?
 [Thread debugging using libthread_db enabled]
 Using host libthread_db library /lib64/libthread_db.so.1.
 Core was generated by `dovecot/imap'.
 Program terminated with signal SIGSEGV, Segmentation fault.
 #0  0x7f186087693a in fts_user_free (fuser=0x0) at fts-user.c:187
 187 fts-user.c: No such file or directory.
 (gdb) bt full
 #0  0x7f186087693a in fts_user_free (fuser=0x0) at fts-user.c:187
 user_langp = 0x30008
 #1  0x7f1860876ac2 in fts_mail_user_deinit (user=0x20a3eb0)
 at fts-user.c:215
 fuser = 0x0
 #2  0x7f185d7890f8 in fts_lucene_mail_user_deinit (user=0x20a3eb0)
 at fts-lucene-plugin.c:99
 fuser = 0x20a5550
 #3  0x7f185d994e0c in replication_user_deinit (user=0x20a3eb0)
 at replication-plugin.c:310
 ruser = 0x20a5500
 #4  0x7f18615b565a in mail_user_unref (_user=0x20abc28) at mail-user.c:168
 user = 0x20a3eb0
 __FUNCTION__ = mail_user_unref
 #5  0x0041afef in client_default_destroy (client=0x20abbb0, 
 reason=0x0)
 at imap-client.c:284
 cmd = 0x7ffde3a18960
 __FUNCTION__ = client_default_destroy
 #6  0x0041ada0 in client_destroy (client=0x20abbb0, reason=0x0)
 at imap-client.c:236
 No locals.
 #7  0x0041ccf4 in client_input (client=0x20abbb0) at imap-client.c:967
 cmd = 0x7ffde3a189a0
 output = 0x0
 bytes = 12
 __FUNCTION__ = client_input
 #8  0x7f18612fc992 in io_loop_call_io (io=0x20c8610) at ioloop.c:501
 ioloop = 0x2076740
 t_id = 2
 __FUNCTION__ = io_loop_call_io
 #9  0x7f18612fec40 in io_loop_handler_run_internal (ioloop=0x2076740)
 at ioloop-epoll.c:220
 ctx = 0x2077460
 events = 0x2078290
 event = 0x2078290
 list = 0x2078e80
 io = 0x20c8610
 tv = {tv_sec = 4, tv_usec = 999387}
 events_count = 5
 msecs = 5000
 ret = 1
 i = 0
 j = 0
 call = true
 __FUNCTION__ = io_loop_handler_run_internal
 #10 0x7f18612fcb2f in io_loop_handler_run (ioloop=0x2076740)
 
 Reuben


Re: FYI: dovecot (008632bdfd2c) compilation woes, and minor glitch regarding update-version.sh

2015-05-04 Thread Teemu Huovila
On 04/24/2015 10:00 PM, Michael Grimm wrote:
 Hi —
 
 1) I'm trying to compile a recent hg dovecot version (008632bdfd2c) at a 
 FBSD10-STABLE system without success:
 
 libtool: compile:  cc -DHAVE_CONFIG_H -I. -I../.. -I../../src/lib 
 -I../../src/lib-test -I/usr/local/include -DUDHRDIR=\../../src/lib-fts\ 
 -DDATADIR=\/usr/local/share/dovecot\ 
 -DTEST_STOPWORDS_DIR=\../../src/lib-fts\ -I/usr/local/include -std=gnu99 
 -O2 -Wall -W -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith 
 -Wchar-subscripts -Wformat=2 -Wbad-function-cast 
 -Wno-duplicate-decl-specifier -Wstrict-aliasing=2 -I/usr/local/include -MT 
 fts-tokenizer-generic.lo -MD -MP -MF .deps/fts-tokenizer-generic.Tpo -c 
 fts-tokenizer-generic.c  -fPIC -DPIC -o .libs/fts-tokenizer-generic.o
 fts-tokenizer-generic.c:111:18: error: use of undeclared identifier 
 'White_Space'
 if (uint32_find(White_Space, N_ELEMENTS(White_Space), c, idx))
 ^
 fts-tokenizer-generic.c:113:18: error: use of undeclared identifier 'Dash'
 if (uint32_find(Dash, N_ELEMENTS(Dash), c, idx))
 ^
 […]
 
 fts-tokenizer-generic.c:212:18: error: use of undeclared identifier 
 'MidLetter'
 if (uint32_find(MidLetter, N_ELEMENTS(MidLetter), c, idx))
 ^
 fts-tokenizer-generic.c:214:18: error: use of undeclared identifier 'MidNum'
 if (uint32_find(MidNum, N_ELEMENTS(MidNum), c, idx))
 ^
 fatal error: too many errors emitted, stopping now [-ferror-limit=]
 20 errors generated.
 Makefile:591: recipe for target 'fts-tokenizer-generic.lo' failed
 gmake[4]: *** [fts-tokenizer-generic.lo] Error 1
 gmake[4]: Leaving directory 
 '/usr/local/etc/dovecot/SOURCE/dovecot-2.2/src/lib-fts'
 
 
 2) I don't have a python binary installed, only a python2 link to the 
 python27 binary (FBSD, and python27 from ports). 
Thus, update-version.sh will fail to evaluate hg's changeset. As a quick 
 fix I needed to create a link: python -> python2
Both of these are only run if you compile the source from hg, as you did. Official 
release tarballs should not have this issue.
Still, it is not optimal and I'll definitely look into solving 1) when I have 
time available for that.

For temporarily solving 1), it is worth noting that the scripts word-break-data.sh 
and word-boundary-data.sh depend on /bin/bash.
You could either install bash or just try whether it works if you change that to 
/bin/sh, i.e. whatever shell FreeBSD has /bin/sh pointing to.
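
For example, a rough sketch of the latter approach (untested; it assumes the bash dependency is only the shebang line of those scripts, and it reuses the source path from your build log):

cd /usr/local/etc/dovecot/SOURCE/dovecot-2.2/src/lib-fts
sed -i '' 's|#!/bin/bash|#!/bin/sh|' word-break-data.sh word-boundary-data.sh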

br,
Teemu Huovila


Re: SQLite does not depend on zlib, was: Re: [PATCH] Split sql drivers from lib-sql to plugins

2015-05-04 Thread Teemu Huovila
On 04/22/2015 10:19 PM, Bernd Kuhls wrote:
 Bernd Kuhls bernd.ku...@t-online.de wrote in 
 news:xnsa3df68dcaef69berndkuhlspkbjnfx...@bernd-kuhls.de:
 
 Tomas Janousek tjano...@redhat.com wrote in news:20070413132731.GA8281
 @redhat.com:

 -   SQL_LIBS=$SQL_LIBS -lsqlite3 -lz
 +   SQLITE_LIBS=$SQLITE_LIBS -lsqlite3 -lz

 Hi,

 this patch fixes a build error during cross compilation to a system without 
 the libz target package:

 --- dovecot-2.2.15.org/configure.ac 2014-10-25 05:57:08.0 +0200
 +++ dovecot-2.2.15/configure.ac 2014-11-08 10:06:23.015570150 +0100
 @@ -2293,7 +2293,7 @@
  if test $want_sqlite != no; then
 AC_CHECK_LIB(sqlite3, sqlite3_open, [
 AC_CHECK_HEADER(sqlite3.h, [
 -   SQLITE_LIBS=$SQLITE_LIBS -lsqlite3 -lz
 +   SQLITE_LIBS=$SQLITE_LIBS -lsqlite3

 AC_DEFINE(HAVE_SQLITE,, Build with SQLite3 support)
 found_sql_drivers=$found_sql_drivers sqlite

 Regards, Bernd


 
 ping ;)
 
Thank you for the report http://hg.dovecot.org/dovecot-2.2/rev/e4ad83ed88c9

br,
Teemu Huovila


Re: [patch] TLS Handshake failures can crash imap-login

2015-04-27 Thread Teemu Huovila
On 04/26/2015 10:51 PM, Hanno Böck wrote:
 On Sun, 26 Apr 2015 21:51:25 +0300
 Teemu Huovila teemu.huov...@dovecot.fi wrote:
 
 Seems the issue might require a version of libopenssl, that does not
 have support for sslv3 compiled in. I have been made aware, that we
 have a fix for Dovecot in the works.
 
 No, that's not true. I have explicitly tried that.
 You just need to *disable* SSLv3, but that can be done within the
 config file.
Fair enough. So it needs to be a libopenssl with sslv3 removed somehow. 
Conversely, a workaround for this issue would be to
enable sslv3 at the library level.

Thank you again for your report and patch,
Teemu Huovila


Re: [patch] TLS Handshake failures can crash imap-login

2015-04-26 Thread Teemu Huovila
On 04/26/2015 01:39 PM, Hanno Böck wrote:
 On Sat, 25 Apr 2015 21:36:25 +0300
 Teemu Huovila teemu.huov...@dovecot.fi wrote:
 
 I was unable to reproduce this nor the first report. Could you
 describe your environment in more detail? What version of openssl do
 you have? What is the crash message you are seeing?
 
 both openssl and dovecot latest (1.0.2a, 2.2.16) on a Gentoo.
 
 Please note that it's not dovecot itself that's crashing but
 pop3-login/imap-login. You don't note these if you haven't some kind of
 segfault reporting.
Thank you for the information.

br,
Teemu Huovila


Re: [patch] TLS Handshake failures can crash imap-login

2015-04-26 Thread Teemu Huovila
On 04/26/2015 04:07 PM, Florian Pritz wrote:
 Since there are three people involved I kindly ask you to be more
 specific as to who should provide which (exact) information.
 
 Given you ask for it right after quoting my link all I can tell you is
 that I provide all the information you ask for (openssl version, crash
 message) in the link you quoted.
Sorry if I was not clear. I've read the link you provided and I have all the 
information I need for now.

 Where (openssl, distro, dovecot version) did you try reproducing it?
 I've asked a friend using debian or centos (don't know which) and he was
 unable to reproduce so as always they might be patching something, it
 might not affect old software or they don't link with openssl.
I tried Debian squeeze, CentOS 6 and Ubuntu 14.04.

It seems the issue might require a version of libopenssl that does not have 
support for sslv3 compiled in.
I have been made aware that we have a fix for Dovecot in the works.

br,
Teemu Huovila


Re: [patch] TLS Handshake failures can crash imap-login

2015-04-25 Thread Teemu Huovila
On 04/25/2015 11:55 AM, James wrote:
 On 24/04/2015 22:17, Hanno Böck wrote:
 
 Hello,
 
 I tracked down a tricky bug in dovecot that can cause the imap-login
 and pop3-login processes to crash on handshake failures.
 This can be tested by disabling SSLv3 in the dovecot config
 (ssl_protocols = !SSLv2 !SSLv3) and trying to connect with openssl and
 forced sslv3 (openssl s_client -ssl3 -connect localhost:995). This
 would cause a crash.
 
 Thank you for your work on this.
 
 
 I have seen that a bug that is probably rooted in this has been posted
 here before regarding ssl3-disabled configs:
 http://dovecot.org/pipermail/dovecot/2015-March/100188.html
 
 I made that earlier report.  Here is another similar report:
 
 http://dovecot.org/pipermail/dovecot/2015-April/100576.html
I was unable to reproduce either this or the first report. Could you describe your 
environment in more detail? What version of openssl
do you have? What is the crash message you are seeing?

br,
Teemu Huovila


Re: dec2str ...

2015-03-11 Thread Teemu Huovila
On 03/11/2015 03:46 PM, Hardy Flor wrote:
 a very stupid question: what is the reason, when producing output with printf, 
 for converting the numeric value to a string with dec2str
 instead of using the format identifier %d or %u?
The length of types such as pid_t or time_t can, at least in theory, vary from 
one operating system to another. Thus no single length
modifier in the format string can be correct on all systems.

br,
Teemu Huovila


Re: Crashes with tracebacks

2015-01-05 Thread Teemu Huovila
On 12/18/2014 02:23 PM, Timothe Litt wrote:
 Crashes, redux.  I hope I have provided all the information required for
 a solution.  Many thanks in advance for having a look.
 
 I have 71 core files for a user, that all happened in the space of about
 6 hours.  It appears that mail delivered to 'Junk E-mail' is being
 accessed.  I suspect they're all the same issue.  I saw the same syslog
 entry a while back; did a resync & enabled process dumps. Naturally, it
 went away -- until this cluster of crashes.
 
 File system is ext3.  It is NFS mounted by other machines, but only the
 local machine should be touching the mail directories.  The user does
 not have an interactive login - it's an e-mail only account.
 
 This user's IMAP client is AppleMail.  The delivery agent is procmail;
 Junk is detected by spamassassin; clamav is also present.
The patches mentioned in http://markmail.org/message/xqu3yr52c6hjxqk2 might fix 
your issue.

You could also consider switching over to LMTP or dovecot-lda as the mail 
delivery method.

br,
Teemu Huovila


Re: error: iostream-ssl.h: No such file or directory

2014-12-21 Thread Teemu Huovila
On 12/19/2014 06:17 PM, Tobi wrote:
 Hi list
 
 I'm trying to build dovecot 2.2.15 from source on a debian wheezy (64bit). As 
 I wanted to get starttls support for dovecot's
 lmtp I got the patched files from here: 
 http://hg.dovecot.org/dovecot-2.2/rev/297192cfbd37
Please note that this is not the complete lmtp starttls implementation. There have 
been fixes, at least in:
http://hg.dovecot.org/dovecot-2.2/rev/1d811ffd1832
and
http://hg.dovecot.org/dovecot-2.2/rev/ef8b7e44e96c
Perhaps you should try the hg tip. The current enterprise version also has the 
lmtp starttls support, including fixes.
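
For reference, a rough sketch of building the hg tip (assuming the usual autotools toolchain is installed; adjust the configure options to your needs):

hg clone http://hg.dovecot.org/dovecot-2.2
cd dovecot-2.2
./autogen.sh
./configure
make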

br,
Teemu Huovila


Re: Error: mremap_anon(###) failed: Cannot allocate memory

2014-12-12 Thread Teemu Huovila
On 12/11/2014 08:49 PM, Andy Dills wrote:
 Thanks for your suggestion. I checked the output of doveconf, and by default 
 it appears the vsz_limit is set to
 18446744073709551615B for each of the services, and 256M for 
 default_vsz_limit.
 
 I checked a user in question, and their index.cache was indeed large, 123M. 
 Seemingly needlessly so, as I deleted the dovecot
 files and reindexed, and now it's 6K.
 
 Thanks, I'll keep an eye on the users this affects and try to get their 
 index.cache in order.
Glad to hear that it is working now. In case the error reappears, please bear 
in mind that the 18446744073709551615 B
displayed in the config (I'm assuming doveconf without switches) is the 
empty value, which actually means the value is not
set and the default_vsz_limit is used. 
http://wiki2.dovecot.org/Services#Service_limits

br,
Teemu Huovila


Re: devoid mailbox status for mail reloaded from a tape backup

2014-12-10 Thread Teemu Huovila
On 12/10/2014 01:54 PM, Stephen Lidie wrote:
 The reason I did not see -o is because that option is NOT documented in the 
 man pages for my dovecot installation, for whatever reason!  Either of our 
 egreps would have found it if only it had been there :(
 
 [root]# man doveadm|egrep -i '\-v'
-v Enables verbosity, including progress counter.
 [root]# man doveadm|egrep -i '\-o'
 [root]# man doveadm|egrep -i -- -o
 [root]# man doveadm|egrep -i -- -v
-v Enables verbosity, including progress counter.
 [root]# 
 
 FWIW this is CentOS 7 with dovecot installed from an RPM:
 
 [root]# yum list dovecot
 Loaded plugins: fastestmirror, langpacks, versionlock
 Loading mirror speeds from cached hostfile
  * base: linux.cc.lehigh.edu
  * epel: mirror.umd.edu
  * extras: mirror.es.its.nyu.edu
  * updates: mirrors.advancedhosters.com
 Installed Packages
 dovecot.x86_641:2.2.10-4.el7_0.1   @updates
Documentation for -o was added recently; it is not even in the man pages of the 
2.2.15 release.

br,
Teemu Huovila


Re: Panic: file mail-index-sync-update.c: line 250 (sync_expunge_range): assertion failed: (count > 0)

2014-12-10 Thread Teemu Huovila
On 12/10/2014 03:20 PM, Ralf Hildebrandt wrote:
 We're seeing this:
 
 % doveadm force-resync -u USERNAME INBOX
 
 doveadm(USERNAME): Panic: file mail-index-sync-update.c: line 250 
 (sync_expunge_range): assertion failed: (count > 0)
This was probably fixed in http://hg.dovecot.org/dovecot-2.2/rev/1886e0616ab5 
(and cosmetically in
http://hg.dovecot.org/dovecot-2.2/rev/56dca338f46b). I can not say for sure 
though, since your report is lacking some details.
For future reference, please read http://www.dovecot.org/bugreport.html 
carefully.

br,
Teemu Huovila


Re: Error: mremap_anon(###) failed: Cannot allocate memory

2014-12-08 Thread Teemu Huovila
On 12/08/2014 02:54 AM, Andy Dills wrote:
 
 We're running dovecot 2.2.15 with pigeonhole 0.4.6, in a clustered 
 environment, nfs with proxy and backend on all servers.
 
 I've been seeing some odd errors from lmtp:
 
 Error: mremap_anon(127930368) failed: Cannot allocate memory
 
 It seems to affect specific users, but it doesn't seem to manifest in any 
 particular way; no user complaints. Just the occasional log message.
A config would always be useful, but I can venture a guess. Perhaps the 
affected users have a dovecot.index.cache file
somewhere, e.g. under INBOX, that is larger than the memory limit for the lmtp 
process. Try increasing default_vsz_limit or
the service lmtp { vsz_limit }. Removing the overly large index cache file 
should also, temporarily, help. In case you do not
get this error from the imap/pop3 processes, perhaps you have already set a 
higher vsz_limit for those?
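
For example, a sketch of the latter (the 512M value is only an illustration; pick something larger than the biggest dovecot.index.cache you expect to see):

service lmtp {
  vsz_limit = 512M
}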

br,
Teemu Huovila


Re: v2.2.15 - make check - Conditional jump or move depends on uninitialised value

2014-12-01 Thread Teemu Huovila
On 11/30/2014 05:53 AM, AMM wrote:
 __strspn_sse42 (in /lib64/libc-2.14.90.so)
Is it possible that you are encountering this issue? 
https://bugs.kde.org/show_bug.cgi?id=270925
Either way, the error seems to stem from your libc implementation (if it is not 
the valgrind bug).

If possible, upgrade your valgrind, libc etc.

br,
Teemu Huovila


Re: v2.2.15 - make check - Conditional jump or move depends on uninitialised value

2014-12-01 Thread Teemu Huovila
On 12/01/2014 12:41 PM, AMM wrote:
 
 On Monday 01 December 2014 03:41 PM, Teemu Huovila wrote:
 But Dovecot 2.2.10 (and earlier versions) were not throwing this error.
This test was added in Dovecot version 2.2.14. It is also the only reference to 
strspn() in the whole project.

 Can I can ignore it by NOT doing make check?
I would say you can safely ignore it, but I can give no guarantee. I have no 
access to a Fedora 16 system, so I can not verify
it, but I would say this is most likely a manifestation of the valgrind bug I 
linked in my first email.

You could try verification yourself, by using the Steps to Reproduce in the 
linked issue tracker.

br,
Teemu Huovila


Re: imap-login segfaults when using post-login

2014-11-26 Thread Teemu Huovila
I hope you already found the issue on your own, but here are some pointers, 
just in case.

On 11/12/2014 07:01 PM, Nico Rittner wrote:
 imap-login: Fatal: master: service(imap-login): child 574 killed with signal 
 11 (core dumps disabled)
 imap[5523]: segfault at 14 ip b7556276 sp bfc1c940 error 4 in 
 libdovecot.so.0.0.0[b7529000+d4000]   
 
 these are the relevant sections i added:
 
 service imap-login {
executable = imap post-login
 }
The service executing the post-login script should be imap, not imap-login. Please see 
http://wiki2.dovecot.org/PostLoginScripting

 service post-login {
executable = script-login /path/to/exec
 }
 
 i also used /bin/true as /path/to/exec to exclude
 the used exec itself as the reason. same result.
Testing with /bin/true will not have the expected results. As its last action, 
the post-login script needs to call exec on its
argv. In sh this would be done with 'exec "$@"'. Again, I refer you to the wiki 
for examples.
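
Putting the two points above together, a minimal sketch along the lines of the wiki example (the script path is hypothetical):

service imap {
  executable = imap post-login
}

service post-login {
  executable = script-login /usr/local/bin/post-login.sh
  user = $default_internal_user
  unix_listener post-login {
  }
}

The script itself must finish by exec'ing its arguments:

#!/bin/sh
# do whatever you need here, then hand control over to the real imap binary
exec "$@"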

br,
Teemu Huovila


Re: 2.2.15 Panic in mbox_sync_read_next_mail()

2014-11-04 Thread Teemu Huovila
On 11/04/2014 01:38 PM, Matthias Egger wrote:
 
 Has anyone of you found any kind of solution to this problem yet?
Could the people experiencing this please send at least a) the output of doveconf 
-n and b) anonymized mbox content for an affected mbox
(http://www.dovecot.org/tools/mbox-anonymize.pl). Other details cannot hurt 
either.

br,
Teemu Huovila


Re: Corrupted SSL parameters file in state_dir with HG 267bca7a62fb

2014-10-31 Thread Teemu Huovila
On 10/31/2014 12:13 PM, Thomas Leuxner wrote:
 Hi,
 
 with the latest HG 267bca7a62fb the following error started to appear in the 
 logs:
 
 Oct 31 09:39:07 nihlus dovecot: master: Dovecot v2.2.15 (267bca7a62fb) 
 starting up for imap, lmtp
 [...]
 Oct 31 10:10:52 nihlus dovecot: lmtp(20876): Error: Corrupted SSL parameters 
 file in state_dir: ssl-parameters.dat - disabling SSL 360
 Oct 31 10:10:52 nihlus dovecot: lmtp(20876): Error: Couldn't initialize SSL 
 parameters, disabling SSL
 Oct 31 10:10:52 nihlus dovecot: lmtp(20876): Connect from local
 
 This most likely has been introduced with a commit after the previous build 
 installed (aa5dde56424f). I did not find options to disable SSL for LMTP 
 either, as in my setup I'm using a UNIX socket.
There seems to be an issue with setting a non-default value, e.g. 2048, for 
ssl_dh_parameters_length. A workaround is to revert
to the default 1024.
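
That is, as a temporary workaround in the config:

ssl_dh_parameters_length = 1024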

Teemu


Re: Where can I find change logs/release notes for Dovecot EE releases?

2014-10-23 Thread Teemu Huovila
On 10/23/2014 12:35 AM, deoren wrote:
 I searched for them and haven't come across them yet. Could anyone point me in 
 the right direction? Specifically the Ubuntu 12.04
 package notes if they're split out.
On a Debian based system you should find them in 
/usr/share/doc/dovecot-ee-core/chagnelog.gz

On a Red Hat based system it is /usr/share/doc/dovecot-ee-version/ChangeLog

br,
Teemu Huovila


Re: Where can I find change logs/release notes for Dovecot EE releases?

2014-10-23 Thread Teemu Huovila
On 10/23/2014 10:34 AM, Teemu Huovila wrote:
 On 10/23/2014 12:35 AM, deoren wrote:
 I searched for them and haven't come across them yet. Could any point me in 
 the right direction? Specifically the Ubuntu 12.04
 package notes if they're split out.
 On a Debian based system you should find them in 
 /usr/share/doc/dovecot-ee-core/chagnelog.gz
/usr/share/doc/dovecot-ee-core/changelog.gz


Re: index problem with only 1 folder of 1 box

2014-10-10 Thread Teemu Huovila
Hello

On 10/10/2014 02:50 PM, Guillaume wrote:
 The biggest trouble for me is :
 Is it a solr problem or a dovecot problem?
 
 In my opinion, it's more a dovecot problem, because the first search after a 
 solr reindex gives the correct answer.
If at all possible, you should try a newer version of Dovecot. There have been 
quite a few changes to FTS and the SOLR backend
since version 2.2.9. See the attached HG log of the changes. Hope this helps.

br,
Teemu Huovila
2014-09-16 15:23 +0300  Timo Sirainen  t...@iki.fi  (8c2cb7d01a78)

	* src/plugins/fts-solr/fts-backend-solr.c, src/plugins/fts/fts-api-
	private.h, src/plugins/fts/fts-api.c:
	doveadm fts rescan: For virtual namespaces just mark the last
	indexed UID to 0.

2014-09-16 14:32 +0300  Timo Sirainen  t...@iki.fi  (e82ad7f1c58f)

	* src/plugins/fts/fts-expunge-log.c:
	fts: dovecot-expunges.log wasn't closed at deinit

2014-08-08 16:27 +0300  Timo Sirainen  t...@iki.fi  (1ea3da40ea8f)

	* src/plugins/fts/fts-storage.c:
	fts: fts_no_autofuzzy shouldn't disable fuzzying when FUZZY search
	parameter is set.

2014-08-08 16:20 +0300  Timo Sirainen  t...@iki.fi  (cdf4edcc6256)

	* src/plugins/fts-lucene/fts-backend-lucene.c, src/plugins/fts-lucene
	/lucene-wrapper.cc, src/plugins/fts-lucene/lucene-wrapper.h,
	src/plugins/fts-solr/fts-backend-solr-old.c, src/plugins/fts-solr
	/fts-backend-solr.c, src/plugins/fts-squat/fts-backend-squat.c,
	src/plugins/fts/fts-api-private.h, src/plugins/fts/fts-api.c,
	src/plugins/fts/fts-api.h, src/plugins/fts/fts-search.c,
	src/plugins/fts/fts-storage.c, src/plugins/fts/fts-storage.h:
	fts: Added fts_no_autofuzzy setting to require exact matches for
	found results. This is done by using the FTS search results as only
	filters on which the regular non-FTS search is done.

2014-07-03 14:37 +0300  Timo Sirainen  t...@iki.fi  (9c6643daae98)

	* src/plugins/fts/fts-expunge-log.c:
	fts: If we detect corrupted fts expunge log, unlink it. This avoids
	the same error repeating forever.

2014-06-30 17:25 +0300  Timo Sirainen  t...@iki.fi  (2c2b94840ff3)

	* src/plugins/fts/fts-parser-tika.c:
	fts-tika: Hiden Unsupported Media Type errors. Log HTTP status
	code on errors.

2014-06-30 16:41 +0300  Timo Sirainen  t...@iki.fi  (49dfc6da1786)

	* src/plugins/fts/fts-parser-tika.c:
	fts-tika: Fixed crash if Tika returned 200 reply without payload.

2014-06-16 15:35 +0300  Timo Sirainen  t...@iki.fi  (fc40b1a6e962)

	* src/plugins/fts/xml2text.c:
	xml2text: Check for read()/write() failures and exit if they fail.

2014-06-13 02:19 +0300  Timo Sirainen  t...@iki.fi  (b67c1c9bf1a5)

	* src/auth/mech-winbind.c, src/auth/userdb-passwd-file.c, src/config
	/config-parser.c, src/doveadm/doveadm-director.c, src/doveadm
	/doveadm-dump-dbox.c, src/doveadm/doveadm-log.c, src/doveadm
	/doveadm-penalty.c, src/doveadm/doveadm-replicator.c, src/doveadm
	/doveadm-stats.c, src/doveadm/doveadm-who.c, src/doveadm/doveadm-
	zlib.c, src/lib-compression/test-compression.c, src/lib-imap-urlauth
	/imap-urlauth-connection.c, src/lib-lda/smtp-client.c, src/lib-
	master/master-instance.c, src/lib-master/mountpoint-list.c, src/lib-
	settings/settings-parser.c, src/lib-settings/settings.c, src/lib-
	storage/index/cydir/cydir-mail.c, src/lib-storage/index/dbox-common
	/dbox-file.c, src/lib-storage/index/imapc/imapc-save.c, src/lib-
	storage/index/maildir/maildir-mail.c, src/lib-storage/index/raw/raw-
	storage.c, src/lib-storage/list/subscription-file.c, src/lib
	/iostream-temp.c, src/lib/istream-seekable.c, src/plugins/fts/fts-
	parser-script.c, src/plugins/zlib/zlib-plugin.c,
	src/replication/replicator/replicator-queue.c, src/ssl-
	params/main.c, src/util/rawlog.c:
	Use the new [io]_stream_create_fd_*autoclose() functions wherever
	possible.

2014-06-13 01:11 +0300  Timo Sirainen  t...@iki.fi  (54f1beb8d071)

	* src/plugins/fts/doveadm-dump-fts-expunge-log.c:
	fts: Improved doveadm fts dump for corrupted expunge log Although we
	may still be trying to allocate up to 2 GB of memory, but at least
	no more than that now. Found by Coverity

2014-06-13 00:46 +0300  Timo Sirainen  t...@iki.fi  (0fc86de05ccf)

	* src/plugins/fts/doveadm-dump-fts-expunge-log.c:
	fts: Minor code cleanup: Don't increment NULL pointer.

2014-05-27 21:17 +0300  Phil Carmody  p...@dovecot.fi  (ad028a950248)

	* src/plugins/fts/fts-parser-html.c:
	fts: parser-html - parser can fail on attributes='with values in
	single quotes' If that value were to contain an odd number of double
	quotes, then the HTML_STATE_TAG_(D)QUOTED state would be entered and
	not exited.

	The two quoting types behave basically the same, so just add two new
	cases and duplicate the state transition code.

2014-05-27 21:17 +0300  Phil Carmody  p...@dovecot.fi  (54e508b71dcd)

	* src/plugins/fts/fts-parser-html.c:
	fts: parser-html - parse_tag_name returns wrong value for comments
	This function returns 1 more than the number of additional
	characters to be swallowed up by the state transition.

2014-05-27 21:17 +0300  Phil

Re: Trouble getting listescape plugin to work with $ separator (as demonstrated in Wiki) in Dovecot 2.2.9

2014-10-03 Thread Teemu Huovila
On 10/02/2014 05:23 PM, Ben Johnson wrote:
 Now, the only problem I see is that when I attempt to create a new
 folder beneath the Inbox (whether it contains a . or not), the folder
 appears at the root-level of the IMAP account, at the same level as the
 Inbox itself. The folder name is INBOX.My Folder.
 
 Also, if I try to select the folder and view its contents, I receive the
 error, Mailbox doesn't exist: INBOX.My Folder. But this may simply be
 a product of a misconfiguration on my part.
Namespace configuration can be a bit difficult. I urge you to read the wiki 
page on namespaces carefully and test which
configuration works with your mail clients. It might be as easy as renaming 
the namespace you have now to inbox, e.g.:
namespace INBOX {
inbox = yes
location =
prefix =
separator = $
type = private
}

If that is not what you meant, or does not work for your clients, try with 
several namespaces and setting the prefix in them.

br,
Teemu Huovila


Re: Question wrt. dovecot replicator

2014-10-02 Thread Teemu Huovila
On 10/02/2014 02:40 AM, Remko Lodder wrote:
 and a mail_replica = tcp:host{a,b}:12346 configuration on each host so that 
 they are pointing to each other. This seems to work fine for most accounts, 
 for example: I never experienced issues with this. However, several other 
 accounts (with a large variety of clients) got duplicated emails. Looking 
 with doveadm I only noticed that the numbers of the messages are closely 
 related to each other but one number incremented. So they cannot be deleted 
 with the deduplicator function.
 
 The replication is provided over TCP only, the connection streams over an 
 OpenVPN tunnel so that the contents are protected, the machines are located 
 in different Datacenters but close to eachother.
 
 How can I determine why there are duplicated emails?
 What kind of messages should I specifically look for?
Look for any errors and warnings in the Dovecot log. You could also enable 
mail_debug (ref.
http://wiki2.dovecot.org/Logging#Logging_verbosity) for the accounts being 
synced. Also, please post your complete configuration.
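
For example, globally (mail_debug is a normal mail setting, so if your userdb supports extra fields it could also be returned per user instead):

mail_debug = yes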

 Can I set this up for a few selected accounts instead of all accounts, like it 
 is currently? Just to make sure I do not make things worse for others than needs 
 to be :-)
 The service had been disabled for the time being to prevent the other users 
 from getting duplicated emails.
I do not know what kind of userdb you are running, but there is a newish patch 
that enables per user replication via the
mail_replica setting. It is not yet included in the newest (2.2.13) release of 
Dovecot, but is available via the enterprise
version. There are no FreeBSD builds for that, though. ref: 
http://hg.dovecot.org/dovecot-2.2/rev/c1c67bdc8752
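
With that patch, the idea would roughly be to have the userdb return a mail_replica extra field only for the accounts you want replicated, e.g. (the host name is hypothetical):

mail_replica=tcp:backup.example.net

and an empty mail_replica for everyone else.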

br,
Teemu Huovila


Re: Trouble getting listescape plugin to work with $ separator (as demonstrated in Wiki) in Dovecot 2.2.9

2014-10-02 Thread Teemu Huovila
On 10/01/2014 09:43 PM, Ben Johnson wrote:
 Ultimately, I have two questions:
 
 1.) Is the nesting structure that I've employed correct? The Wiki 2 page
 is not clear with regard to the nesting; is it correct to put the
 namespace block inside the protocol imap block, as I demonstrated above?
I think it would be better if you put the namespace configuration at the top 
level. See
http://master.wiki2.dovecot.org/Namespaces, or 10-mail.conf and 
15-mailboxes.conf, which are located in the
doc/example-config/conf.d/ directory of the Dovecot sources, for more examples.

 2.) Is it possible to escape the dollar sign so that it can be used as
 the separator?
There was a mistake in the wiki. You should quote the $ like this: "$".

br,
Teemu Huovila


Re: doveadm with multiple instances on same machine(s)

2014-09-19 Thread Teemu Huovila
On 09/19/2014 03:04 AM, Will Yardley wrote:
 Couple questions about running doveadm with multiple instances... I have
 Dovecot 2.2.13 on RHEL6 running across 3 boxes, each with a director and
 main instance running. When I try to lookup something on the main
 instance (which is handling user auth) via its auth-userdb socket
 directly, I get an error:
 
 # doveadm auth lookup -a /var/run/dovecot-main/auth-userdb myuser
 doveadm(root): Error: passdb lookup failed for myuser: Configured passdbs 
 don't support credentials lookups
 
 When I use the default lookup map, I just get the proxy settings that
 are configured in the director instance's authdb.
 # doveadm auth lookup myuser
 passdb: myuser
   user  : myuser
   proxy : y
   nopassword: y
 
 In addition,
 doveadm director map
 
 can't map the username - I get the error:
 doveadm(root): Error: User listing returned failure
 doveadm(root): Error: user listing failed
 [then I get the whole list, but with unknown for each user]
Assuming your configuration is otherwise ok, I think this was fixed in
http://hg.dovecot.org/dovecot-2.2/rev/8b5664bce4a0 and
http://hg.dovecot.org/dovecot-2.2/rev/ccc5701dae72
so it will be included in Dovecot 2.2.14

 
 The director itself doesn't have the LDAP passdb that the main dovecot
 instance talks to, but I have, in the director config:
 
 service doveadm {
   inet_listener {
 port = 8889
   }
 }
 director_doveadm_port = 8889
 
 local 192.168.x.x/24 {
   doveadm_password = XX
 }
 
 doveadm_proxy_port = 
In the 2.2 series you can write this as doveadm_port, I think.

br,
Teemu Huovila


Re: Replication problem

2014-09-10 Thread Teemu Huovila
On 09/10/2014 02:04 AM, Vincent ETIENNE wrote:
 After some digging, the problem is this 600-second timeout, which in my
 case is insufficient to transfer one big mail. So it retries and gets the same
 result, again and again.
 
 I have verified with strace that data is exchanged continuously during the
 sync between the two hosts, but I can't succeed in uploading the file
 within that time.
 
 Is there a way to configure this timeout?
 
 Possibly a manual sync with a larger timeout, to restore replication
 before limiting the maximum size in postfix?
 
 Perhaps a feature could be a shorter timeout applied only to the
 transmission (i.e. timeout only if nothing is received for 30 sec),
 or a timeout computed based on size (e.g. 300 sec per 10 MB)?
 
 Any help appreciated
Currently there is no way to change it at run time. As a quick fix, if you 
compile your own Dovecot, you could try modifying
DSYNC_IBC_STREAM_TIMEOUT_MSECS in src/doveadm/dsync/dsync-ibc-stream.c. I 
think that is the timeout you are bumping up against.

br,
Teemu Huovila


Re: Replication problem

2014-09-10 Thread Teemu Huovila
On 09/10/2014 01:49 PM, Vincent ETIENNE wrote:
 On 10/09/2014 11:56, Teemu Huovila wrote:
Currently there is no way to change it at run time. As a quick fix, if you 
compile your own Dovecot, you could try modifying
 DSYNC_IBC_STREAM_TIMEOUT_MSECS in src/doveadm/dsync/dsync-ibc-stream.c . I 
 think that is the timeout you are bumping up against.

 Thanks, I will try and keep you informed of the result. It may take some time
 (I am not compiling dovecot right now).
 Really, thanks, because for now my replication is broken and so mail is
 not received for some users, depending on
 the instance of dovecot they connect to.
Cancel that advice. Timo did a change that should make changing the timeout by 
hand unnecessary. If you can, try running Dovecot
with this patch http://hg.dovecot.org/dovecot-2.2/rev/647162da8423. There 
should be no timeouts, even for large mails.

Do you get any error messages when there is a timeout?

br,
Teemu Huovila


Re: preserving flags for shared mailbox when migrating from cyrus to dovecot

2014-09-10 Thread Teemu Huovila
Hello

On 09/10/2014 02:20 PM, Jogi Hofmüller wrote:
 again no success.  The shared mailbox stays available and working but
 the flags will not be synced to the state they had on the original
 server.  I also tried it without -R but that didn't get me anywhere
 either and should be wrong anyways AFAICT.
 
 Any further ideas anyone or should I prepare our shared mailbox users
 that all their email will be unread after migration?
I looked at the dovecot -n output attached to your previous mail and I think I 
spotted some issues.
namespace {
  hidden = no
  inbox = no
  list = children
  location =
maildir:/srv/vmail/%%u/Maildir:INDEX=/srv/vmail/%u/shared/%%u:CONTROL=/srv/vmail/%u/shared/%%u:INDEXPVT=/srv/vmail/%u/shared/%%u
  prefix = shared.%%u.
  separator = .
  subscriptions = yes
  type = shared
}

The INDEX and INDEXPVT are identical, which means there is no private index. 
Having the CONTROL defined is also questionable. I
suggest you try defining location like this:

location = maildir:/srv/vmail/%%u/Maildir:INDEXPVT=/srv/vmail/%u/shared/%%u

Also, to make subscriptions work sensibly, set the shared namespace 
subscriptions = no and then add a placeholder namespace with
an empty prefix to contain just the private subscriptions:

namespace {
  prefix =
  hidden = yes
  list = no
  subscriptions=yes
}

Please read 
http://wiki2.dovecot.org/SharedMailboxes/Public?highlight=%28INDEXPVT%29#Maildir:_Per-user_.2BAFw-Seen_flag
 for
further details.

br,
Teemu Huovila


Re: how to profiling imap process with valgrind

2014-09-05 Thread Teemu Huovila
Hello

On 09/04/2014 10:53 PM, morrison wrote:
 I want to profile the runtime performance of the imap and pop3 processes. These 
 processes are forked by the dovecot master process. I am wondering if there is a 
 way I can profile these processes with valgrind. I tried service = 
 /bin/valgrind /bin/imap but this did not work.

You could try:

service imap {
  executable = /usr/bin/valgrind --num-callers=50 --leak-check=full -q DOVECOT-PATH-PREFIX-HERE/libexec/dovecot/imap
}

service pop3 {
  executable = /usr/bin/valgrind --num-callers=50 --leak-check=full -q DOVECOT-PATH-PREFIX-HERE/libexec/dovecot/pop3
}


Fix the paths for valgrind and Dovecot libexec/ to match your system. Depending 
on your distribution and library versions you
may also want to add --suppressions for some external libraries. Valgrind 
output will be in the Dovecot error log.

br,
Teemu Huovila


Re: Dovecot Enterprize repository access

2014-09-05 Thread Teemu Huovila
Hello

On 08/28/2014 04:57 PM, Alessandro Bono wrote:
 I'm using the Enterprise repository for CentOS 6 and it works perfectly,
 but when upgrading packages there is no changelog or other info,
 so I have no idea what's changed with every update.
 
 Can you post somewhere changelog info or include in rpms?

A ChangeLog should be found under the /usr/share/doc/dovecot-ee directory. The 
RPMs themselves do not yet have a changelog as such.

br,
Teemu Huovila


Re: Dovecot Enterprize repository access

2014-08-28 Thread Teemu Huovila
Hello

On 08/28/2014 02:35 PM, Spyros Tsiolis wrote:
 in regards to the Enterprise repository access :
 
 1. There's no version of v7.x for CentOS
There are indeed no CentOS 7 or Ubuntu 14.04 packages available yet. Work on 
those builds is ongoing, but I cannot say when
they will be officially supported.

 2. There's no download section anywhere
There should be some instructions visible on the Download tab of the
http://shop.dovecot.fi/home/8-dovecot-ee-repository-access.html item. 
Basically, the repository access requires going through
the purchase process, to obtain the access credentials.

br,
Teemu Huovila


Re: Problems with dovecot 2.2.13 and monit

2014-08-20 Thread Teemu Huovila
On 08/17/2014 11:56 PM, Marius wrote:
 Teemu Huovila teemu.huovila at dovecot.fi writes:
 

 On 06/16/2014 03:35 PM, Hanno Böck wrote:
 = the problem is caused by dovecot 2.2.13 bug ... its behaviour is
 inconsistent (LOGOUT in non-authenticated state works per RFC
 requirement if no SSL is used and doesn't conform to RFC if SSL is
 used). It is possible that the problem is related to their DoS-attack
 modification, which has most probably unexpected side-effect.
 This was fixed in commits
 http://hg.dovecot.org/dovecot-2.2/rev/09d3c9c6f0ad
 and
 http://hg.dovecot.org/dovecot-2.2/rev/7129fe8bc260

 so it will work better in the next release.

 br,
 Teemu Huovila


 
 Hello, 
 
 I am having the same problem with dovecot 2.0.9 on CentOS
 
 I manually tested over ssl (imap, 993), and if the connection is 
 authenticated I get the BYE reply after I issue LOGOUT and the connection ends 
 gracefully.
 
 If I fail authentication on purpose and issue logout afterwards, then the 
 connection gets terminated abruptly.
 
 Any way to fix this?
The fixes in question are not applied to the 2.0 tree. Furthermore you are not 
even running the latest release from the 2.0
series, so the fixes for Dovecot might be out of the question, unless you make 
similar fixes to the version you are running.

One way forward might be to alter the way monit does the monitoring. I got a 
success on the ssl port when using the following
monit configuration snippet (tested with dovecot 2.2 hg tip and monit github 
tip). Obviously you have to change localhost and
the login credentials to whatever matches your config. It also requires plain 
auth. On the plus side, you get to see whether your
authentication backend is up and running.

if failed host localhost port 993 type tcpssl sslauto and
expect "^\* OK.* Dovecot ready."
send "a login test pass\r\n"
expect "^a OK.* Logged in"
send "a logout\r\n"
expect "^\* BYE Logging out\r\na OK Logout completed."
then alert

br,
Teemu


Re: Problems with dovecot 2.2.13 and monit

2014-06-16 Thread Teemu Huovila
On 06/16/2014 03:35 PM, Hanno Böck wrote:
 = the problem is caused by dovecot 2.2.13 bug ... its behaviour is
 inconsistent (LOGOUT in non-authenticated state works per RFC
 requirement if no SSL is used and doesn't conform to RFC if SSL is
 used). It is possible that the problem is related to their DoS-attack
 modification, which has most probably unexpected side-effect.
This was fixed in commits
http://hg.dovecot.org/dovecot-2.2/rev/09d3c9c6f0ad
and
http://hg.dovecot.org/dovecot-2.2/rev/7129fe8bc260

so it will work better in the next release.

br,
Teemu Huovila


Re: [Dovecot] Conditional jump or move depends on uninitialised value

2014-05-23 Thread Teemu Huovila

On 05/23/2014 04:05 PM, Daminto Lie wrote:
 Hi,
 
 My Server runs on Ubuntu Server 12.04 LTS 32 bits.
 
 I'm getting the following error messages when I run make check during the 
 compilation of dovecot-2.2.13.
This is a known issue with an external library (zlib). We opted not to include 
the valgrind suppressions in the dovecot source.
To silence the error, execute the following in the top directory of the dovecot 
source:
cat << EOF > ./run-test-valgrind.supp
{
   squeezy-zlib-uninitialized
   Memcheck:Cond
   fun:inflateReset2
   fun:inflateInit2_
   fun:i_stream_zlib_init
   fun:i_stream_create_zlib
   fun:test_compression_handler
   fun:test_compression
   fun:test_run_funcs
   fun:test_run
   fun:main
}
EOF

We use this on squeeze, but I think the call stack should be the same.

br,
Teemu Huovila


 
 
 snip
 ==2058== Conditional jump or move depends on uninitialised
 value(s)
 ==2058==at 0x4049DD8: inflateReset2 (in
 /lib/i386-linux-gnu/libz.so.1.2.3.4)
 ==2058==by 0x4049EC7: inflateInit2_ (in
 /lib/i386-linux-gnu/libz.so.1.2.3.4)
 ==2058==by 0x804AFEF: i_stream_zlib_init
 (istream-zlib.c:320)
 ==2058==by 0x804B122:
 i_stream_create_zlib (istream-zlib.c:475)
 ==2058==by 0x804AA18:
 test_compression_handler (test-compression.c:72)
 ==2058==by 0xEFCDAB88: ???
 ==2058==
 make[2]: *** [check-test] Error 1
 make[2]: Leaving directory
 `/usr/src/dovecot-2.2.13/src/lib-compression'
 make[1]: *** [check-recursive] Error 1
 make[1]: Leaving directory `/usr/src/dovecot-2.2.13/src'
 make: *** [check-recursive] Error 1
 Any help would be greatly appreciated.
 
 Thank you
 


Re: [Dovecot] Dovecot Replication setup

2014-04-17 Thread Teemu Huovila
On 04/17/2014 03:05 PM, Nikolaos Milas wrote:
 After further testing, I can now say that I was wrong; Both masters must be 
 configured for replication to have proper two way sync.
 
 I wish someone -with earlier experience- would answer these questions, to 
 help us avoid all this fuss
I regret that I did not spot the error in your configuration and I'm sorry 
nobody else was able to answer you either. The dsync
feature (as it is used since v2.2) is one where perhaps not so many have a lot 
of production environment experience. I hope you
will have an easier time from now on.

br,
Teemu Huovila


Re: [Dovecot] Segmentation fault running doveadm index (lucene) on a big mailbox

2014-04-15 Thread Teemu Huovila
On 04/15/2014 04:35 PM, Florian Klink wrote:
 Hi,
 
 on a server running dovecot 2.2.12 I have a user with a quite big
 mailbox (~37000 Mails in the INBOX).
 
 I tried to enable full text search using the fts_lucene backend (dovecot
 2.2.12).
This patch (to be included in 2.2.13) addresses a different Lucene error, but 
might mitigate your issue as well.
http://hg.dovecot.org/dovecot-2.2/rev/d63b209737be

If the issue remains and if possible, install dovecot dbg packages to get 
symbols and run a bt full instead of bt.

br,
Teemu Huovila


Re: [Dovecot] dsync deleted my mailbox - what did I do wrong?

2014-04-08 Thread Teemu Huovila
Hello

Many different dsync issues have come up in this thread. I'll try to answer them 
as best I can.

1) dsync backup -R
The conclusion reached in the thread was correct. Instead of the backup option, 
doveadm import would be better suited for
merging old mails into an existing mailbox.
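
For example, a sketch of pulling everything from an old maildir into a user's existing mailboxes (the user and path are hypothetical):

doveadm import -u jane@example.com maildir:/backup/jane/Maildir "" all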

2) Maildir + INBOX + backup/sync/replicate
In the test scenarios where the INBOX on one side was to be completely removed, 
e.g. doveadm backup -R, the dsync failed and
nothing was synced to the target. This is because before moving the source 
mails to the mailbox, dsync cleans out the old ones
(-R preserves nothing), and in Maildir the INBOX cannot be removed. This is a 
feature and not easily solvable, because in Maildir
INBOX is different from other folders.

3) dsync replication / doveadm sync not working as expected.
These came in pretty late in the thread and I did not get a full picture of 
what kind of setups and parameters were used. I
suspect these might be a configuration issue. I think trying with different 
configurations and going through the documentation,
such as it is, once more, is your best bet. Use -D and -v to make dsync more 
verbose, so you do not miss any error messages.

br,
Teemu Huovila


Re: [Dovecot] dsync deleted my mailbox - what did I do wrong?

2014-04-08 Thread Teemu Huovila
On 04/08/2014 03:00 PM, Nikolaos Milas wrote:
 Neither using replication nor using dsync from CLI leads to subfolders 
 getting replicated, as I have explained. As an example,
 if a user creates subfolder boxtest e.g. under Inbox on either side, it 
 never gets created on the other side.
I can't find any errors, but I might be missing something obvious. I only have a 
few suggestions for things to check.

1) You listed the config for one host (vmail I assume). Is the configuration 
similar on the vmail1 side? Especially, can the
command dsync -u user find the correct location for the user's mails?

2) For the replicator plugin scenario, does doveadm have access to auth, i.e. 
does doveadm user '*' work on both sides?

3) Are the dovecot instances running on different hosts (dovecot --hostdomain 
is different)?

4) Instead of dsync mirror, try using the v2.2 syntax doveadm sync. Also, I 
_think_ you need to execute dsync-server on the
other side, so your full command becomes:
doveadm sync -u imaptester ssh -l root vmail1.example.com doveadm dsync-server 
-u imaptester
Sadly, there is no man-page for doveadm sync yet.

br,
Teemu Huovila


Re: [Dovecot] dsync replication questions

2014-04-07 Thread Teemu Huovila
On 04/07/2014 12:22 PM, Simon Fraser wrote:
 Thank you.  Is it still only the changes that are synced each way, or
 the entire mailbox? I'm trying to gauge the performance hit for enabling
 this on larger mailboxes. (I could, of course, run some tests, but
 someone may already have done that)
I can't say anything certain on this one. I do know that not all the messages are 
sent to the other side. There are optimizations
in place, using the Dovecot transaction logs and some pretty complicated 
application logic, but a lot of data still needs to be
processed by the dsync brains.

I think testing with your particular setup and data would give the most 
accurate results.

br,
Teemu Huovila


Re: [Dovecot] Crash in pop3 with version 2.2.12

2014-04-04 Thread Teemu Huovila
Hello

On 04/04/2014 11:18 AM, Axel Luttgens wrote:
 I'm still wondering... under which circumstances could the crash occur?
This issue occurs whenever the function 
src/pop3/pop3-commands.c:client_uidls_save() is called.
The function is called when:

The pop3 internal structure client->message_uidls_save is 1. This in turn 
happens when any of these is true (see the example settings after this list):
1. the pop3_logout_format setting contains %u
2. the config setting pop3_uidl_duplicates is not the default allow
3. the config setting pop3_save_uidl=yes
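
For example, any one of these settings would hit that code path (the exact values are only illustrations):

pop3_logout_format = top=%t/%p, retr=%r/%b, del=%d/%m, size=%s, uidl=%u
pop3_uidl_duplicates = rename
pop3_save_uidl = yes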

The problem manifests in two different ways.
1) When the zlib plugin is active, the executable crashes due to a segmentation 
fault.
2) If there is no zlib, the data returned by the UIDL command is off-by-one 
and the last data item is null.

Without zlib the error might look something like this:
C:uidl
S:+OK
S:1 0002533553b6
S:2 0003533553b6
S:3 0004533553b6
S:4 0005533553b6
S:5 0006533553b6
S:6 (null)
S:.

 Hence the question: to patch or not to patch?
Patch, if your setup will need to meet any of the three criteria triggering the 
issue, before 2.2.13 is released.

br,
Teemu Huovila

