Re: dovecot 2.3.5 - tests fail: http payload echo (ssl)

2019-03-08 Thread Stephan Bosch via dovecot




On 07/03/2019 at 23:03, Helmut K. C. Tessarek wrote:

Thank you very much for getting back to me.

On 2019-03-07 04:23, Stephan Bosch via dovecot wrote:

If that
doesn't fail for some reason, can you run it within valgrind?

I ran the following command:

valgrind --log-file=valgrind_test-http-payload.out
src/lib-http/test-http-payload -D

But I guess the valgrind log itself is useless to you, and the debug output is
printed to the terminal, so I kindly ask you to be a bit more specific: what
exact output do you need, or better yet, which command do you want me to run?
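
If it helps, I can re-run it with both the valgrind log and the test's debug
output captured into files, e.g.:

valgrind --log-file=valgrind_test-http-payload.out \
    src/lib-http/test-http-payload -D > test-http-payload-debug.out 2>&1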


Since you're compiling it anyway, maybe you should first try to increase 
the CLIENT_PROGRESS_TIMEOUT in src/lib-http/test-http-payload.c. It is 
currently 10 seconds.
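
Something along these lines should do it, assuming the timeout is still a
plain #define near the top of that file (value in seconds; GNU sed syntax):

sed -i 's/#define CLIENT_PROGRESS_TIMEOUT .*/#define CLIENT_PROGRESS_TIMEOUT 60/' \
    src/lib-http/test-http-payload.c
make -C src/lib-http check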


Regards,

Stephan.




Re: dovecot 2.3.5 - tests fail: http payload echo (ssl)

2019-03-08 Thread Helmut K. C. Tessarek via dovecot


On 2019-03-08 18:35, Stephan Bosch wrote:
> On 08/03/2019 at 23:49, Helmut K. C. Tessarek wrote:
>>
>> On 2019-03-07 17:03, Helmut K. C. Tessarek via dovecot wrote:
>>>> Does this work?:
>>>>
>>>> NOVALGRIND=yes make -C src/lib-http check
>>> Yes, no errors.
>>>
>> Does this mean dovecot will run properly?
> 
> Yes. This test is just a bit unreliable, it seems.

Ok, thanks for the info. I'm gonna add NOVALGRIND=yes to my test section then.

Cheers,
  K. C.


-- 
regards Helmut K. C. Tessarek  KeyID 0x172380A011EF4944
Key fingerprint = 8A55 70C1 BD85 D34E ADBC 386C 1723 80A0 11EF 4944

/*
   Thou shalt not follow the NULL pointer for chaos and madness
   await thee at its end.
*/





Re: dovecot 2.3.5 - tests fail: http payload echo (ssl)

2019-03-08 Thread Stephan Bosch via dovecot




On 08/03/2019 at 23:49, Helmut K. C. Tessarek wrote:


On 2019-03-07 17:03, Helmut K. C. Tessarek via dovecot wrote:

Does this work?:

NOVALGRIND=yes make -C src/lib-http check

Yes, no errors.


Does this mean dovecot will run properly?


Yes. This test is just a bit unreliable, it seems.

Regards,

Stephan.


Re: dovecot 2.3.5 - tests fail: http payload echo (ssl)

2019-03-08 Thread Helmut K. C. Tessarek via dovecot


On 2019-03-07 17:03, Helmut K. C. Tessarek via dovecot wrote:
>> Does this work?:
>>
>> NOVALGRIND=yes make -C src/lib-http check
> Yes, no errors.
> 

Does this mean dovecot will run properly?

Cheers,
 K. C.

-- 
regards Helmut K. C. Tessarek  KeyID 0x172380A011EF4944
Key fingerprint = 8A55 70C1 BD85 D34E ADBC 386C 1723 80A0 11EF 4944

/*
   Thou shalt not follow the NULL pointer for chaos and madness
   await thee at its end.
*/





imap-hibernate not working

2019-03-08 Thread Marcelo Coelho via dovecot
Hi,

I've followed different setup instructions and I can't make imap-hibernate 
work. I've tried both vmail and dovecot as the user, and tried setting the mode 
to 0666, without success. I'm using FreeBSD 11.2.

Is imap-hibernate compatible with FreeBSD 11.2?
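
For reference, the wiring I've been trying follows the wiki, roughly like this
(a sketch; the timeout value and the listener users/modes are the knobs I've
been varying, as mentioned above):

imap_hibernate_timeout = 30s

service imap {
  unix_listener imap-master {
    user = $default_internal_user
  }
}

service imap-hibernate {
  unix_listener imap-hibernate {
    mode = 0660
    group = $default_internal_group
  }
}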



My operating system:

# uname -v
FreeBSD 11.2-RELEASE-p9 #0: Tue Feb  5 15:30:36 UTC 2019 
r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC 

Here are my logs:

Mar  8 15:30:57 servername dovecot: imap(u...@domain.com:52.125.128.90): Error: 
kevent(-1) for notify remove failed: Bad file descriptor
Mar  8 15:30:57 servername dovecot: imap(u...@domain.com:52.125.128.90): Error: 
close(-1) for notify remove failed: Bad file descriptor
Mar  8 15:30:57 servername dovecot: imap-hibernate: Error: Failed to parse 
client input: Invalid peer_dev_minor value: 18446744073709486335
Mar  8 15:30:57 servername dovecot: imap(u...@domain.com:52.125.128.90): Error: 
/opt/dovecot/2.3.5/var/run/dovecot/imap-hibernate returned failure: Failed to 
parse client input: Invalid peer_dev_minor value: 18446744073709486335
Mar  8 15:30:57 servername dovecot: imap: Error: 

Here are my processes:

# ps aux | grep imap
dovenull 68264  0.0  0.0  12012   8292  -  S  10:15  0:00.06 imap-login:  (imap-login)
dovenull 68274  0.0  0.0  12012   8272  -  S  10:15  0:00.06 imap-login:  (imap-login)
dovenull 68275  0.0  0.0  12012   8356  -  S  10:15  0:00.11 imap-login:  (imap-login)
dovenull 68276  0.0  0.0  12012   8468  -  S  10:15  0:00.16 imap-login:  (imap-login)
dovenull 68277  0.0  0.0  12012   8328  -  S  10:15  0:00.09 imap-login:  (imap-login)
dovenull 68278  0.0  0.0  12012   8368  -  S  10:15  0:00.11 imap-login:  (imap-login)
dovenull 68279  0.0  0.0  12012   8272  -  S  10:15  0:00.05 imap-login:  (imap-login)
dovenull 68280  0.0  0.0  12012   8288  -  S  10:15  0:00.06 imap-login:  (imap-login)
dovenull 68281  0.0  0.0  12012   8316  -  S  10:15  0:00.08 imap-login:  (imap-login)
dovenull 68282  0.0  0.0  12012   8312  -  S  10:15  0:00.10 imap-login:  (imap-login)
dovenull 68283  0.0  0.0  12012   8356  -  S  10:15  0:00.14 imap-login:  (imap-login)
dovenull 68286  0.0  0.0  12012   8312  -  S  10:15  0:00.08 imap-login:  (imap-login)
dovenull 68287  0.0  0.0  14060   9024  -  S  10:15  0:01.31 imap-login: [0 pre-login + 2 TLS proxies] (imap-login)
dovenull 68288  0.0  0.0  12012   8392  -  S  10:15  0:00.24 imap-login: [11.11.11.11 TLS proxy] (imap-login)
dovenull 68289  0.0  0.0  14060   9116  -  S  10:15  0:01.75 imap-login: [11.11.11.11 TLS proxy] (imap-login)
dovenull 68290  0.0  0.0  14060   9208  -  S  10:15  0:01.84 imap-login: [0 pre-login + 4 TLS proxies] (imap-login)
nobody   68475  0.0  0.0  14404   6020  -  I  10:19  0:00.08 /usr/local/sbin/in.imapproxyd -f /usr/local/etc/imapproxyd.conf
vmail    76108  0.0  0.0  21612  17700  -  S  14:30  0:00.12 imap: [u...@domain.com 11.11.11.11 IDLE] (imap)
vmail    76332  0.0  0.0  21536  17612  -  I  14:41  0:00.16 imap: [u...@domain.com 11.11.11.11 IDLE] (imap)
vmail    77127  0.0  0.0  21516  17396  -  S  15:17  0:00.07 imap: [u...@domain.com 11.11.11.11] (imap)
vmail    77170  0.0  0.0  21516  17380  -  I  15:19  0:00.06 imap: [u...@domain.com 11.11.11.11] (imap)
vmail    77438  0.0  0.0  21516  17396  -  S  15:31  0:00.05 imap: [u...@domain.com 11.11.11.11] (imap)
vmail    77536  0.0  0.0  21548  17400  -  S  15:35  0:00.05 imap: [u...@domain.com 11.11.11.11 IDLE] (imap)
vmail    77539  0.0  0.0  21516  17256  -  I  15:35  0:00.03 imap: [u...@domain.com 11.11.11.11] (imap)
vmail    77594  0.0  0.0   8892   4760  -  S  15:38  0:00.01 imap-hibernate: [0 connections] (imap-hibernate)
vmail    77620  0.0  0.0  21516  17256  -  I  15:39  0:00.03 imap: [u...@domain.com 11.11.11.11] (imap)


My build options:

Install prefix . : /opt/dovecot/2.3.5
File offsets ... : 64bit
I/O polling  : kqueue
I/O notifys  : kqueue
SSL  : yes (OpenSSL)
GSSAPI . : no
passdbs  : static passwd passwd-file pam checkpassword sql
CFLAGS . : -std=gnu99 -O2 -pipe -fno-strict-aliasing 
-fstack-protector-strong -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wall -W 
-Wmissing-prototypes -Wmissing-declarations -Wpointer-arith -Wchar-subscripts 
-Wformat=2 -Wbad-function-cast -Wno-duplicate-decl-specifier 
-Wstrict-aliasing=2 
 : -shadow -bsdauth -sia -ldap -vpopmail
userdbs  : static prefetch passwd passwd-file checkpassword sql
 : -ldap -vpopmail
SQL drivers  : mysql
 : -pgsql -sqlite -cassandra
Full text search : squat lucene
 : -solr


Configure command:

./configure '--prefix=/opt/dovecot/2.3.5' --without-docs '--without-shadow' 
'--without-docs' 

Re: Dovecot v2.3.5 released

2019-03-08 Thread Juan C. Blanco via dovecot




On 08/03/2019 11:18, Aki Tuomi via dovecot wrote:


On 7.3.2019 23.37, A. Schulze via dovecot wrote:


On 07.03.19 at 17:33, Aki Tuomi via dovecot wrote:


test-http-client-errors.c:2989: Assert failed: FALSE
connection timed out . : FAILED

Hello Aki,


Are you running with valgrind or on a really slow system?

I'm not aware that my build system uses valgrind ...

How do you define "a really slow system"?
All I can mention as a reference is a build time of 11 minutes until the error 
occurs.


Does it happen if you run env NOVALGRIND=yes make check?

yes,

Andreas


The assertion occurs because it seems to take too long to complete the
test; that's why it's asserting FALSE (see the comment above that line).
Can you run the test with strace and provide the strace output?



Hi, I have this same error building dovecot 2.3.5 + pigeonhole 0.5.5 on 
a Debian system; the payload tests seem to pass. I've run the test 
with strace and this is the output (just after the last "ok" test).


Please let me know if you need the full strace output and I'll send an URL.

Regards
Juan C. Blanco

% uname -a
Linux druida 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 
GNU/Linux


write(1, "connection lost after 100-contin"..., 76) = 76
connection lost after 100-continue ... : ok

socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 5
setsockopt(5, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(5, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
bind(5, {sa_family=AF_INET, sin_port=htons(0), 
sin_addr=inet_addr("127.0.0.1")}, 16) = 0
getsockname(5, {sa_family=AF_INET, sin_port=htons(41571), 
sin_addr=inet_addr("127.0.0.1")}, [16]) = 0

listen(5, 128)  = 0
socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 6
setsockopt(6, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(6, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
bind(6, {sa_family=AF_INET, sin_port=htons(0), 
sin_addr=inet_addr("127.0.0.1")}, 16) = 0
getsockname(6, {sa_family=AF_INET, sin_port=htons(33519), 
sin_addr=inet_addr("127.0.0.1")}, [16]) = 0

listen(6, 128)  = 0
clone(child_stack=NULL, 
flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, 
child_tidptr=0x7feb580a69d0) = 336

close(5)= 0
clone(child_stack=NULL, 
flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, 
child_tidptr=0x7feb580a69d0) = 337

close(6)= 0
nanosleep({tv_sec=0, tv_nsec=1}, NULL) = 0
epoll_create(128)   = 5
fcntl(5, F_GETFD)   = 0
fcntl(5, F_SETFD, FD_CLOEXEC)   = 0
nanosleep({tv_sec=0, tv_nsec=0}, NULL)  = 0
socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 6
setsockopt(6, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(6, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
fcntl(6, F_GETFL)   = 0x2 (flags O_RDWR)
fcntl(6, F_SETFL, O_RDWR|O_NONBLOCK)= 0
connect(6, {sa_family=AF_INET, sin_port=htons(41571), 
sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in 
progress)
epoll_ctl(5, EPOLL_CTL_ADD, 6, {EPOLLOUT|EPOLLERR|EPOLLHUP, 
{u32=858285952, u64=94911045592960}}) = 0
epoll_wait(5, [{EPOLLOUT, {u32=858285952, u64=94911045592960}}], 1, 
1) = 1

epoll_ctl(5, EPOLL_CTL_DEL, 6, 0x7ffd3310d74c) = 0
getsockopt(6, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
fstat(6, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0
fcntl(6, F_GETFL)   = 0x802 (flags O_RDWR|O_NONBLOCK)
lseek(6, 0, SEEK_CUR)   = -1 ESPIPE (Illegal seek)
getsockname(6, {sa_family=AF_INET, sin_port=htons(42206), 
sin_addr=inet_addr("127.0.0.1")}, [28->16]) = 0
epoll_ctl(5, EPOLL_CTL_ADD, 6, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, 
{u32=858285952, u64=94911045592960}}) = 0

setsockopt(6, SOL_TCP, TCP_NODELAY, [1], 4) = 0
epoll_wait(5, [], 1, 0) = 0
setsockopt(6, SOL_TCP, TCP_CORK, [1], 4) = 0
write(6, "GET /connection-lost-sub-ioloop."..., 175) = 175
setsockopt(6, SOL_TCP, TCP_CORK, [0], 4) = 0
epoll_wait(5, [{EPOLLIN, {u32=858285952, u64=94911045592960}}], 1, 2000) = 1
read(6, "HTTP/1.1 200 OK\r\nContent-Length:"..., 8192) = 38
epoll_create(128)   = 7
fcntl(7, F_GETFD)   = 0
fcntl(7, F_SETFD, FD_CLOEXEC)   = 0
epoll_ctl(7, EPOLL_CTL_ADD, 6, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, 
{u32=858284832, u64=94911045591840}}) = 0

epoll_ctl(5, EPOLL_CTL_DEL, 6, 0x7ffd3310d4fc) = 0
epoll_wait(7, [], 1, 0) = 0
socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 8
setsockopt(8, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(8, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
fcntl(8, F_GETFL)   = 0x2 (flags O_RDWR)
fcntl(8, F_SETFL, O_RDWR|O_NONBLOCK)= 0
connect(8, {sa_family=AF_INET, sin_port=htons(33519), 
sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in 
progress)
epoll_ctl(7, EPOLL_CTL_ADD, 8, {EPOLLOUT|EPOLLERR|EPOLLHUP, 
{u32=858330688, u64=94911045637696}}) = 0

Suggestions for dovecot-submission (XCLIENT NAME attribute)

2019-03-08 Thread Marcelo Coelho via dovecot
Hi,

Currently, Dovecot Submission doesn't forward the NAME attribute when XCLIENT 
is enabled:

Received: from server.example.org (localhost [1.2.3.4])
(Authenticated sender: t...@example.org)
by localhost (Postfix) with ESMTPA id AA
for ; Fri,  8 Mar 2019 12:28:02

Postfix adds a header with the sender IP address (1.2.3.4), but with localhost 
as the hostname.

It would be great if Dovecot Submission sent the NAME attribute, set to the 
resolved hostname of the sender IP (the ADDR attribute).
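
For illustration, the XCLIENT command that submission sends could then look 
something like this (a sketch; attribute names are per Postfix's XCLIENT 
documentation, values are invented):

XCLIENT ADDR=1.2.3.4 NAME=client.example.org PROTO=ESMTP LOGIN=t...@example.org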


Thank you!


--
Marcelo Coelho



Re: Upgrading to 2.3

2019-03-08 Thread @lbutlr via dovecot
On 8 Mar 2019, at 05:54, Aki Tuomi via dovecot  wrote:
> https://wiki.dovecot.org/Upgrading

Duh. I wasn't looking for a URL that specific.


-- 
These are the thoughts that kept me out of the really good schools. --
George Carlin



Sieve scripts not triggered on IMAP inbound messages using IMAPC

2019-03-08 Thread Fred via dovecot
Good morning.

I am using Dovecot as an IMAP proxy -- using IMAPC -- and the system is
working great. Thank you for contributing such great software to the
community!

I'm using Dovecot 2.3.5 and Pigeonhole 0.5.5.

My server is located behind a content-filtering firewall that will mark
incoming messages by adding "[SPAM]" to the subject line. I'm trying to use
Sieve to identify those messages and move them into the Spam folder. I have
a basic Sieve script that looks like this:

   require ["fileinto"];

   if anyof (header :contains "Subject" "[SPAM]",

 header :contains "X-Spam-Flag" "Yes")

   {

   fileinto "Spam";

   stop;

   }
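
One way to check the script in isolation is sieve-test, which ships with 
Pigeonhole; something like the following, where sample-spam.eml is a 
hypothetical saved copy of a tagged message, prints the actions that would 
be taken plus a matching trace:

   sieve-test -t - spam_rule.sieve sample-spam.eml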

 

I can see in the logs that Sieve is loaded, and also the imapsieve plugin:

   Debug: sieve: Pigeonhole version 0.5.5 (2483b085) initializing
   Debug: sieve: Sieve imapsieve plugin for Pigeonhole version 0.5.5
   (2483b085) loaded

Sieve detects outbound messages (from the log: "Debug: imapsieve: mailbox
INBOX.Sent: APPEND event") but nothing registers for inbound, and my
spam_rule is never triggered. Any suggestions?

 

Here is the output from "dovecot -n":

 

# 2.3.5 (513208660): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.5 (2483b085)
# OS: Linux 3.10.0-957.5.1.el7.x86_64 x86_64 CentOS Linux release 7.6.1810 (Core)  xfs
# Hostname: 
auth_master_user_separator = *
auth_mechanisms = PLAIN LOGIN
auth_verbose_passwords = plain
deliver_log_format = from=%{from}, envelope_sender=%{from_envelope}, subject=%{subject}, msgid=%m, size=%{size}, %$
dict {
  acl = pgsql:/etc/dovecot/dovecot-share-folder.conf
  quotadict = pgsql:/etc/dovecot/dovecot-used-quota.conf
}
first_valid_uid = 2000
imapc_features = fetch-bodystructure fetch-headers
imapc_host = 
imapc_port = 993
imapc_ssl = imaps
imapc_ssl_verify = no
last_valid_uid = 2000
listen = * [::]
mail_debug = yes
mail_gid = 2000
mail_home = /var/mail/imapc/home/%d/%n
mail_location = imapc:/var/mail/imapc/%d/%n
mail_plugins = mailbox_alias mail_log notify
mail_uid = 2000
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext imapsieve vnd.dovecot.imapsieve
namespace inbox {
  inbox = yes
  list = yes
  location =
  prefix =
  separator = .
  type = private
}
passdb {
  args = host=
  default_fields = userdb_imapc_user=%u userdb_imapc_password=#hidden_use-P_to_show#
  driver = imap
}
passdb {
  args = /etc/dovecot/dovecot-master-users
  driver = passwd-file
  master = yes
}
plugin {
  imapsieve_default = /var/vmail/sieve/global/spam_rule.sieve
  mail_log_events = delete undelete expunge mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from subject
  mailbox_alias_new = Sent Messages
  mailbox_alias_new2 = Sent Items
  mailbox_alias_old = Sent
  mailbox_alias_old2 = Sent
  sieve_global_dir = /var/vmail/sieve/global/
  sieve_max_redirects = 30
  sieve_plugins = sieve_imapsieve
  sieve_trace_debug = yes
  sieve_trace_dir = /var/log/dovecot
  sieve_trace_level = matching
}
protocols = imap sieve
service auth {
  unix_listener /var/spool/postfix/private/dovecot-auth {
    group = postfix
    mode = 0666
    user = postfix
  }
  unix_listener auth-master {
    group = vmail
    mode = 0666
    user = vmail
  }
  unix_listener auth-userdb {
    group = vmail
    mode = 0660
    user = vmail
  }
}
service dict {
  unix_listener dict {
    group = vmail
    mode = 0660
    user = vmail
  }
}
service imap-login {
  process_limit = 500
  service_count = 1
}
service lmtp {
  executable = lmtp -L
  inet_listener lmtp {
    address = 127.0.0.1
    port = 24
  }
  process_min_avail = 5
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0600
    user = postfix
  }
  user = vmail
}
service managesieve-login {
  inet_listener sieve {
    address = 127.0.0.1
    port = 4190
  }
}
ssl = required
ssl_cert = 

Re: Removing a mailbox from a dovecot cluster

2019-03-08 Thread Gerald Galster via dovecot


> On 08.03.2019 at 14:46, Francis wrote:
> 
> On Thu, 7 Mar 2019 at 17:03, Gerald Galster via dovecot wrote:
>   
> Why does "doveadm replicator status " not return?
> 
> I've tried the following commands on a replicated server (2.2.33.2), which 
> all returned immediately:
> 
> [root@server ~]# doveadm replicator status u...@domain.com 
> 
> username priority fast sync full sync success sync failed
> u...@domain.com   none 12:30:32  12:30:32  
> 12:30:31 - 
> 
> [root@server ~]# doveadm replicator remove u...@domain.com 
> 
> 
> [root@server ~]# doveadm replicator status u...@domain.com 
> 
> username priority fast sync full sync success sync failed
>  (no additional output as replication is stopped)
> 
> 
> Hi,
> 
> Sorry, I didn't express myself correctly. The output you see is the one I see 
> too when the replication is off. When I say it doesn't return anything, I 
> wanted to say it doesn't print anything else other than the header line. The 
> replication has been disabled for more than 12 hours and I just sent an email 
> to this account and the replication switched on by itself again. I see 
> nothing about the replication in the logs.


Maybe someone from the dovecot team can say more about the internals of why 
and when replication is activated again.

I'm solving that problem at the MTA level: the email address is inserted into 
Postfix's access table where access is denied, so emails won't reach dovecot.
Maybe you can do something similar, or set a flag in LDAP that your MTA can use 
to reject mails.
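
If a passwd-file userdb is in use, I believe a per-user noreplicate extra 
field can also keep the replicator away from a single account. A sketch, so 
double-check the field name against the replication docs:

u...@domain.com:{SHA512-CRYPT}...:2000:2000::/var/vmail/domain.com/user::noreplicate=1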

Best regards
Gerald

imap segfault in libc.so with CLucene FTS backend enabled

2019-03-08 Thread Alexander Miroshnichenko via dovecot

Steps to reproduce:

- Enable CLucene FTS in Dovecot;
- Open a mailbox with an MUA;
- Search for a message with any text;
- The IMAP session crashes.

OS: Gentoo Base System release 2.6

Version:
FTS: dev-cpp/clucene-2.3.3.4-r6
IMAP: net-mail/dovecot-2.3.2.1
LIBC: sys-libs/musl-1.1.21

Dovecot FTS config:

plugin {
 fts = lucene
 fts_lucene = whitespace_chars=@. normalize no_snowball
 fts_autoindex=yes
 fts_autoindex_max_recent_msgs=80
 fts_index_timeout=90
}
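
Before digging into the allocator it may be worth ruling out a corrupt Lucene 
index; a rebuild can be forced with something like this (the user shown is a 
placeholder):

doveadm fts rescan -u user@example.com
doveadm index -u user@example.com -q '*'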

dmesg:
[260150.192294] imap[18221]: segfault at 6578772cca98 ip 63e7f1b10397 
sp 7945d5822970 error 6 in libc.so[63e7f1ae8000+a4000]
[260150.192316] Code: 0f 84 44 02 00 00 48 39 ca 0f 84 62 02 00 00 48 8b 43 
08 48 89 4a 10 48 89 51 18 48 89 c2 48 83 e0 fe 48 83 ca 01 48 89 53 08 
<48> 83 0c 03 01 41 8b 07 48 8d 6b 10 85 c0 0f 84 68 ff ff ff 31 c0


bt full:
Core was generated by `dovecot/imap'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  unbin (i=0, c=0x1908553de10) at src/malloc/malloc.c:195
195 src/malloc/malloc.c: No such file or directory.
(gdb) bt full
#0  unbin (i=0, c=0x1908553de10) at src/malloc/malloc.c:195
No locals.
#1  malloc (n=, n@entry=4) at src/malloc/malloc.c:320
   mask = 
   c = 0x1908553de10
   i = 0
   j = 0
#2  0x63e7f1b4984f in wcsdup (s=0x63e7ed7d0c58 L"") at 
src/string/wcsdup.c:7

   l = 0
   d = 
#3  0x63e7eda98308 in lucene::index::Term::Term (this=0x1908553df80) at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/index/Term.cpp:26

No locals.
#4  0x63e7edad5f25 in 
lucene::index::SegmentTermEnum::readTerm(lucene::index::Term*) () at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/index/SegmentTermEnum.cpp:351

   start = 1
   length = 4
   totalLength = 5
   field = 
   fieldname = 0x1908553d180 L"\142\157\144\171"
#5  0x63e7edad5f7c in lucene::index::SegmentTermEnum::next 
(this=0x19085524460) at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/index/SegmentTermEnum.cpp:180

   tmp = 
   this = 0x19085524460
   tmp = 
   tmp = 
#6  0x63e7edad5be9 in lucene::index::SegmentTermEnum::scanTo 
(this=this@entry=0x19085524460, term=term@entry=0x7945d5822dc0)
   at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/index/SegmentTermEnum.cpp:218

No locals.
#7  0x63e7edad959c in lucene::index::TermInfosReader::scanEnum 
(this=, term=term@entry=0x7945d5822dc0)
   at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/index/TermInfosReader.cpp:422

   enumerator = 0x19085524460
#8  0x63e7edad96a4 in lucene::index::TermInfosReader::get 
(this=, term=term@entry=0x7945d5822dc0) at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/index/TermInfosReader.cpp:246

   enumerator = 
#9  0x63e7edab9071 in lucene::index::SegmentReader::docFreq 
(this=0x19085500ae0, t=0x7945d5822dc0) at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/index/SegmentReader.cpp:518

   ti = 
#10 0x63e7edae2620 in lucene::search::Similarity::idf 
(this=0x19085526e60, term=0x7945d5822dc0, searcher=0x190855007a0)
   at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/search/Similarity.cpp:184

No locals.
#11 0x63e7edaeda51 in 
lucene::search::TermWeight::TermWeight(lucene::search::Searcher*, 
lucene::search::TermQuery*, lucene::index::Term*) ()
   at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/search/TermQuery.cpp:117

No locals.
#12 0x63e7edaeda99 in 
lucene::search::TermQuery::_createWeight(lucene::search::Searcher*) () at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/search/TermQuery.cpp:240

No locals.
#13 0x63e7edafa2dc in 
lucene::search::BooleanWeight::BooleanWeight(lucene::search::Searcher*, 
lucene::util::CLVectorlucene::util::Deletor::Object >*, 
lucene::search::BooleanQuery*) () at 
/usr/lib/gcc/x86_64-gentoo-linux-musl/8.2.0/include/g++-v8/bits/stl_vector.h:930

   i = 1
   i = 
#14 0x63e7edafa351 in 
lucene::search::BooleanQuery::_createWeight(lucene::search::Searcher*) () 
at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/search/BooleanQuery.cpp:66

No locals.
#15 0x63e7edaef926 in lucene::search::Query::weight 
(this=this@entry=0x7945d5822da0, searcher=searcher@entry=0x190855007a0)
   at 
/var/tmp/portage/dev-cpp/clucene-2.3.3.4-r6/work/clucene-core-2.3.3.4/src/core/CLucene/search/SearchHeader.cpp:121

   query = 
   weight = 
   sum = 
   norm = 
#16 0x63e7edaf0cd7 in 
lucene::search::IndexSearcher::_search(lucene::search::Query*, 
lucene::search::Filter*, int) ()
   at 

Re: Removing a mailbox from a dovecot cluster

2019-03-08 Thread Francis via dovecot
On Thu, 7 Mar 2019 at 17:03, Gerald Galster via dovecot wrote:

> Why does "doveadm replicator status " not return?
>
> I've tried the following commands on a replicated server (2.2.33.2), which
> all returned immediately:
>
> [root@server ~]# doveadm replicator status u...@domain.com
> username priority fast sync full sync success sync failed
> u...@domain.com  none 12:30:32  12:30:32  12:30:31 -
>
> [root@server ~]# doveadm replicator remove u...@domain.com
>
> [root@server ~]# doveadm replicator status u...@domain.com
> username priority fast sync full sync success sync failed
>  (no additional output as replication is stopped)
>
>
Hi,

Sorry, I didn't express myself correctly. The output you see is the one I
see too when the replication is off. When I say it doesn't return anything,
I wanted to say it doesn't print anything else other than the header line.
The replication has been disabled for more than 12 hours and I just sent an
email to this account and the replication switched on by itself again. I
see nothing about the replication in the logs.

-- 
Francis


Re: Upgrading to 2.3

2019-03-08 Thread Aki Tuomi via dovecot


On 8.3.2019 14.46, @lbutlr via dovecot wrote:
> I haven't upgraded to dovecot 2.3 yet, but am looking into doing it. I 
> thought there was a link to some of the issues and changes you needed to make 
> to your configuration to go from 2.2 to 2.3, but now I cannot find it.
>
>
https://wiki.dovecot.org/Upgrading

Aki



Upgrading to 2.3

2019-03-08 Thread @lbutlr via dovecot
I haven't upgraded to dovecot 2.3 yet, but am looking into doing it. I thought 
there was a link to some of the issues and changes you needed to make to your 
configuration to go from 2.2 to 2.3, but now I cannot find it.


-- 
Science is the foot that kicks magic square in the nuts.



AD ldap, filter to exclude various kinds of expired, disabled etc etc users

2019-03-08 Thread mj via dovecot

Hi,

I was revising our AD ldap user_filter and pass_filter to exclude more 
types of expired / disabled accounts.


I started adding things like:


(&(objectclass=person)(sAMAccountName=%n)(!useraccountcontrol=514)(!(useraccountcontrol=546))(!(useraccountcontrol=66050))(!(useraccountcontrol=8388608)))


but then I thought, why not simply do:


(&(objectclass=person)(sAMAccountName=%n)(userAccountControl=512))


as 512 would match your regular active user accounts only, excluding all other 
account types.


Looking here 
(https://support.microsoft.com/en-gb/help/305144/how-to-use-useraccountcontrol-to-manipulate-user-account-properties), 
there are so many different userAccountControl values to check that it might 
be smarter to only allow userAccountControl=512, or?
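
One thing that gives me pause, though: userAccountControl is a bit mask, so 
matching exactly 512 would also exclude otherwise-normal accounts that merely 
have an extra bit set (e.g. DONT_EXPIRE_PASSWORD = 65536 turns 512 into 66048). 
A sketch using AD's bitwise matching rule to test just the ACCOUNTDISABLE bit 
(2) instead:

(&(objectclass=person)(sAMAccountName=%n)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))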


Any ideas on this..?

(or examples of how you do it?)

MJ


Re: rsync /old/server new/server ?

2019-03-08 Thread Voytek Eymont via dovecot
On Fri, March 8, 2019 11:17 pm, Aki Tuomi via dovecot wrote:

> You could configure the old server's MTA to deliver mail to the new server's
> MTA instead? After there has been activity on the new server, running rsync
> from the old server can cause problems.


yes, I see, thanks (hmmm, done it before, hope I remember how...)


V



Re: rsync /old/server new/server ?

2019-03-08 Thread Voytek Eymont via dovecot
On Fri, March 8, 2019 10:45 pm, Aki Tuomi via dovecot wrote:
>

> On 8.3.2019 13.44, Voytek Eymont via dovecot wrote:
>
>> I have Centos 7 with dovecot/postfix/mysql Maildir
>>
>>
>> I want to bring in a new server, new server will have same hostname as
>> current, but, different IP
>>
>> I was intending to
>> rsync -avzhe ssh  vmail@oldserver:/var/vmail/vmail1  /var/vmail/vmail1
>>
>> and, then, re run as necessary when/if mail still arrives on old server
>>
>>
>> is that "a good plan" ...?
>>
>> TIA, V
>>
>>
> Yeah, it's ok as long as you haven't switched to the new server.

Aki, thanks

I'm not certain I understand your comment properly

I currently have server 'geko'

my intention was to change the A record to assign geko to the new IP/server, 
and assign a new hostname to the old server's A record

I guess I'll still get some emails delivered to the old server with the new
hostname/old IP for 24 hours or so after the A record change?

so, I was going to re-run rsync 2 or 3 times over the first 24 hours or so?

are you saying NOT to rsync once the A record has changed?

sorry for being slow

(sorry for sending not to list...)



Re: rsync /old/server new/server ?

2019-03-08 Thread Aki Tuomi via dovecot


On 8.3.2019 14.07, Voytek Eymont wrote:
> On Fri, March 8, 2019 10:45 pm, Aki Tuomi via dovecot wrote:
>> On 8.3.2019 13.44, Voytek Eymont via dovecot wrote:
>>
>>> I have Centos 7 with dovecot/postfix/mysql Maildir
>>>
>>>
>>> I want to bring in a new server, new server will have same hostname as
>>> current, but, different IP
>>>
>>> I was intending to
>>> rsync -avzhe ssh  vmail@oldserver:/var/vmail/vmail1  /var/vmail/vmail1
>>>
>>> and, then, re run as necessary when/if mail still arrives on old server
>>>
>>>
>>> is that "a good plan" ...?
>>>
>>> TIA, V
>>>
>>>
>> Yeah, it's ok as long as you haven't switched to the new server.
> Aki, thanks
>
> I'm not certain I understand your comment properly
>
> I currently have server 'geko'
>
> my intention was to change the A record to assign geko to the new IP/server,
> and assign a new hostname to the old server's A record
>
> I guess I'll still get some emails delivered to the old server with the new
> hostname/old IP for 24 hours or so after the A record change?
>
> so, I was going to re-run rsync 2 or 3 times over the first 24 hours or so?
>
> are you saying NOT to rsync once the A record has changed?
>
> sorry for being slow
>
>
You could configure the old server's MTA to deliver mail to the new server's 
MTA instead? After there has been activity on the new server, running rsync 
from the old server can cause problems.
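
E.g. with Postfix on the old box, a transport-map entry can hand the domain 
over (a sketch; the hostname is invented, and the domain must also be removed 
from mydestination so it is no longer delivered locally):

postconf -e 'transport_maps = hash:/etc/postfix/transport'
echo 'example.com smtp:[new.server.example.com]' >> /etc/postfix/transport
postmap /etc/postfix/transport
postfix reload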

Aki



Re: rsync /old/server new/server ?

2019-03-08 Thread Aki Tuomi via dovecot


On 8.3.2019 13.44, Voytek Eymont via dovecot wrote:
> I have Centos 7 with dovecot/postfix/mysql Maildir
>
> I want to bring in a new server, new server will have same hostname as
> current, but, different IP
>
> I was intending to
> rsync -avzhe ssh  vmail@oldserver:/var/vmail/vmail1  /var/vmail/vmail1
>
> and, then, re run as necessary when/if mail still arrives on old server
>
> is that "a good plan" ...?
>
> TIA, V
>
Yeah, it's ok as long as you haven't switched to the new server.

Aki



rsync /old/server new/server ?

2019-03-08 Thread Voytek Eymont via dovecot
I have Centos 7 with dovecot/postfix/mysql Maildir

I want to bring in a new server, new server will have same hostname as
current, but, different IP

I was intending to
rsync -avzhe ssh  vmail@oldserver:/var/vmail/vmail1  /var/vmail/vmail1

and, then, re run as necessary when/if mail still arrives on old server

is that "a good plan" ...?

TIA, V



Re: Dovecot v2.3.5 released

2019-03-08 Thread Aki Tuomi via dovecot


On 7.3.2019 23.37, A. Schulze via dovecot wrote:
>
> On 07.03.19 at 17:33, Aki Tuomi via dovecot wrote:
>
>>> test-http-client-errors.c:2989: Assert failed: FALSE
>>> connection timed out . : 
>>> FAILED
> Hello Aki,
>
>> Are you running with valgrind or on a really slow system?
> I'm not aware that my build system uses valgrind ...
>
> How do you define "a really slow system"?
> All I can mention as a reference is a build time of 11 minutes until the error
> occurs.
>
>> Does it happen if you run env NOVALGRIND=yes make check?
> yes, 
>
> Andreas

The assertion occurs because it seems to take too long to complete the
test; that's why it's asserting FALSE (see the comment above that line).
Can you run the test with strace and provide the strace output?
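
Something like this should capture enough detail (assuming the test binary 
sits next to the others under src/lib-http):

strace -f -tt -o test-http-client-errors.strace \
    src/lib-http/test-http-client-errors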

Aki