Stan,
On 1/20/11 7:45 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
What you're supposed to do, and what VMware recommends, is to run ntpd _only
in the ESX host_ and _not_ in each guest. According to:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_UScmd=displ
Stan,
On 1/14/11 7:09 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
The average size of an email worldwide today is less than 4KB, less than one
typical filesystem block.
28 TB / 4 KB = 28,000,000,000,000 bytes / 4,096 bytes ≈ 6,835,937,500 messages
6.8 billion emails / 5,000 users ≈ 1,367,188 messages per user
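The arithmetic above checks out; a quick sanity check (the 28 TB pool and the 5,000-user figure are the numbers from the thread, the variable names are mine):

```python
# Sanity-check of the capacity estimate quoted above: if every message
# occupies one 4 KiB filesystem block, how many messages fit in 28 TB,
# and how many is that per user across 5,000 accounts?
total_bytes = 28 * 10**12   # 28 TB, decimal, as in the original math
block_size = 4096           # one 4 KiB block per small message

messages = total_bytes // block_size   # divides exactly: 6,835,937,500
per_user = messages / 5000             # 1,367,187.5, i.e. ~1.37 million

print(f"{messages:,} messages, {per_user:,.1f} per user")
```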
On 1/14/11 8:59 PM, Brandon Davidson brand...@uoregon.edu wrote:
I work for central IS, so this is the first stage of a consolidated service
offering that we anticipate may encompass all of our staff and faculty. We
bought what we could with what we had, anticipating that usage will grow over
Stan,
On 11/8/10 10:39 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
However, if CONFIG_HZ=1000 you're generating WAY too many interrupts/sec
to the timer, ESPECIALLY on an 8 core machine. This will exacerbate the
high context switching problem. On an 8 vCPU (and physical CPU) machine
Stan,
On 11/1/10 7:30 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
1. How many of you have a remote site hot backup Dovecot IMAP server?
+1
2. How are you replicating mailbox data to the hot backup system?
C. Other
Netapp Fabric MetroCluster, active IMAP/POP3 nodes at both sites
Timo,
On 10/28/10 5:13 AM, Timo Sirainen t...@iki.fi wrote:
. list (subscribed) *
* LIST (\Subscribed \NonExistent) /
Shared/tester2/sdfgsg/gsdfgf/vtyjyfgj/rtdhrthxs/zhfhg
. OK List completed.
Looks like a bug, yeah. Should be fixed in v2.0. I don't know if it's worth
the trouble
Timo,
I'm working with a webmail client that periodically polls unread message
counts for a list of folders. It currently does this by doing a LIST or LSUB
and then iterating across all of the folders, running a SEARCH ALL UNSEEN,
and counting the resulting UID list.
Eventually I'd like to see
Timo,
On 10/17/10 3:56 PM, Timo Sirainen t...@iki.fi wrote:
The reason why STATUS is mentioned to be possibly slow is to discourage
clients from doing a STATUS to all mailboxes.
STATUS is definitely faster than SELECT+SEARCH with all IMAP servers.
That's what I figured, thanks! Other than
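A single STATUS per folder, as recommended above, replaces the SELECT+SEARCH round trips. A minimal sketch using Python's stdlib imaplib; the host name and the parsing helper are my own illustration, not from the thread:

```python
import re

def unseen_from_status(status_data: bytes) -> int:
    """Pull the UNSEEN count out of a raw IMAP STATUS response,
    e.g. b'"INBOX" (UNSEEN 17)'. Returns 0 if the field is absent."""
    match = re.search(rb'UNSEEN (\d+)', status_data)
    return int(match.group(1)) if match else 0

# Against a live server (imap.example.com is a placeholder) the poll loop
# would look roughly like:
#
#   import imaplib
#   conn = imaplib.IMAP4_SSL('imap.example.com')
#   conn.login(user, password)
#   for folder in folders:
#       typ, data = conn.status(folder, '(UNSEEN)')  # one round trip each
#       counts[folder] = unseen_from_status(data[0])

print(unseen_from_status(b'"INBOX" (UNSEEN 17)'))  # prints 17
```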
Timo,
On 10/17/10 4:20 PM, Timo Sirainen t...@iki.fi wrote:
On 18.10.2010, at 0.19, Brandon Davidson wrote:
Other than actually calling THREAD and
counting the resulting groups, is there a good way to get a count of
threads?
Nope, that's the only way.
It looks like draft-ietf-morg
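Counting the resulting groups means counting the top-level parenthesised lists in the THREAD response (RFC 5256 format). A small sketch; the helper name is mine:

```python
def count_threads(thread_response: str) -> int:
    """Count top-level groups in an IMAP THREAD response body,
    e.g. '(1 2)(3)(4 5 6)' contains 3 threads."""
    depth = 0
    threads = 0
    for ch in thread_response:
        if ch == '(':
            if depth == 0:       # a new top-level group = a new thread
                threads += 1
            depth += 1
        elif ch == ')':
            depth -= 1
    return threads

print(count_threads('(1 2)(3)(4 5 6)'))  # prints 3
```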
Chris,
On 10/6/10 9:42 PM, Chris Hobbs cho...@nhusd.k12.ca.us wrote:
3) Modified my NFS mount with noatime to reduce i/o hits there. Need to
figure out what Brad's suggestions about readahead on the server mean.
It's been a while since I mucked with Linux as an NFS server; I've been on
Netapp
Michael,
On 9/1/10 12:18 AM, Michael M. Slusarz slus...@curecanti.org wrote:
imapproxy *should* really be using UNSELECT, but that looks like a
different (imapproxy) bug.
I run imapproxy too. If you're using Dovecot 2.0, set:
imap_capability = +UNSELECT IDLE
Imapproxy is naive and only reads
Noel,
On 8/26/10 11:28 PM, Noel Butler noel.but...@ausics.net wrote:
I just fail to see why adding more complexity, and essentially making
$9K load balancers redundant, is the way of the future.
To each their own. If your setup works without it, then fine, don't use
it... but I don't see why
Noel,
On 8/26/10 9:59 PM, Noel Butler noel.but...@ausics.net wrote:
I fail to see the advantage; if anything it adds more points of failure, with
I agree with this and it is why we don't use it.
We use Dovecot's deliver with Postfix and have noticed no problems; not
to say there were none, but
Timo,
On 7/19/10 9:38 AM, Timo Sirainen t...@iki.fi wrote:
http://hg.dovecot.org/dovecot-2.0/rev/f178792fb820 fixes it?
It makes it further before crashing. Trace attached.
I still wonder why it's timing out in the first place. Didn't you change it
to reset the timeout as long as it's still
Timo,
Just out of curiosity, how are incoming connections routed to login
processes when run with:
service imap-login { service_count = 0 }
I've been playing with this on our test director, and the process connection
counts look somewhat unbalanced. I'm wondering if there are any performance
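For context, the setting being asked about puts imap-login into its high-performance mode; a hedged config sketch (the process_min_avail value is illustrative, and the routing description is my understanding, not from this thread):

```
service imap-login {
  # service_count = 0: each login process serves an unlimited number of
  # connections instead of exiting after one. All login processes share
  # the listener socket, so the kernel decides which process accept()s
  # each new connection -- which can look unbalanced across processes.
  service_count = 0
  process_min_avail = 4   # illustrative: keep a few processes available
}
```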
Timo,
On 7/16/10 4:23 AM, Timo Sirainen t...@iki.fi wrote:
Jul 16 01:50:44 cc-popmap7 dovecot: auth: Error: auth worker: Aborted
request: Lookup timed out
Jul 16 01:50:44 cc-popmap7 dovecot: master: Error: service(auth): child 1607
killed with signal 11 (core dumps disabled)
I don't
Timo,
On 7/17/10 11:06 AM, Timo Sirainen t...@iki.fi wrote:
Here's a stack trace. Standard null function pointer. No locals, I think I'd
have to recompile to get additional information.
#0 0x in ?? ()
#1 0x00415a71 in auth_worker_destroy ()
#2 0x00415416
Timo,
Maybe this fixes it: http://hg.dovecot.org/dovecot-2.0/rev/cfd15170dff7
Nope, still crashes with the same stack. I'll rebuild with -g and report
back.
Here we go. Attached, hopefully Entourage won't mangle the line wrap.
-Brad
auth-worker-gdb.txt
Description: Binary data
Timo,
On 7/15/10 4:12 PM, Timo Sirainen t...@iki.fi wrote:
Maybe there could be a parameter to get the user list from a file (one
username per line) instead of userdb.
Added -f parameter for this.
Awesome! I dumped a userlist (one username per line) which it seems to read
through quite
Timo,
On 7/15/10 4:18 PM, Timo Sirainen t...@iki.fi wrote:
Jul 15 13:46:24 cc-popmap7 dovecot: auth: Error: auth worker: Aborted
request: Lookup timed out
Jul 15 13:53:25 cc-popmap7 dovecot: auth: Error: getpwent() failed: No such
file or directory
Also see if
I've got a couple more issues with the doveadm director interface:
1) If I use doveadm director remove to disable a host with active users,
the director seems to lose track of users mapped to that host. I guess I
would expect it to tear down any active sessions by killing the login
proxies, like
On 7/13/10 4:53 AM, Timo Sirainen t...@iki.fi wrote:
Hmm. Between? Is it doing CAPABILITY before or after login or both? That
anyway sounds different from the idle timeout problem..
I added some additional logging to imapproxy and it looks like it's actually
getting stuck in a few
Timo,
On 7/11/10 12:06 PM, Timo Sirainen t...@iki.fi wrote:
Pretty much anything built into Dovecot would be an improvement over an
external script from my point of view.
Yeah, some day I guess..
Well, I would definitely make use of it if you ever get around to coding it.
With a script I
Timo,
On 7/11/10 10:58 AM, Timo Sirainen t...@iki.fi wrote:
dsync in hg tip is failing tests:
Fixed now, as well as another dsync bug.
Looks good!
New doveadm director status is a little odd though. The 'mail server ip'
column is way wide (I guess it adjusts to term size though?) and the
dsync in hg tip is failing tests:
test-dsync-brain.c:176: Assert failed:
test_dsync_mailbox_create_equals(box_event.box, src_boxes[6])
test-dsync-brain.c:180: Assert failed:
test_dsync_mailbox_create_equals(box_event.box, dest_boxes[6])
Segmentation fault
I'm currently using rev 77f244924009,
Leander,
On 7/10/10 2:14 PM, Leander S. leander.schae...@googlemail.com wrote:
You have attempted to establish a connection with server. However,
the security certificate presented belongs to *.server. It is
possible, though unlikely, that someone may be trying to intercept your
communication
On 7/9/10 12:01 AM, Xavier Pons xavier.p...@uib.es wrote:
I think these new functionalities would be perfect (necessary ;-) ) for a
complete load-balanced/high-availability mail system.
Timo, what you described sounds great.
Pretty much anything built into Dovecot would be an improvement over
Xavier,
On 7/8/10 1:29 AM, Xavier Pons xavier.p...@uib.es wrote:
Yes, we will have two hardware balancers in front of the proxies. Thus, the
director service will detect failures of backend servers and not forward
sessions to them? How does it detect whether a backend server is alive or not?
IIRC, it does
Timo,
On 6/24/10 4:23 AM, Timo Sirainen t...@iki.fi wrote:
I'd recommend also installing and configuring imapproxy - it can be
beneficial with squirrelmail.
Do you have any real-world numbers about installations with and without
imapproxy?
We run imapproxy behind our Roundcube
On 6/2/10 7:33 PM, Timo Sirainen t...@iki.fi wrote:
I wonder if they can stand up to 10k+ concurrent proxied
connections though?
I'd think so.
I could probably give that a try, but I'll have a hard time convincing folks
to do that until after 2.0 has been out of beta for a bit. Maybe after summer
Pascal,
On 5/31/10 11:40 PM, Pascal Volk user+dove...@localhost.localdomain.org
wrote:
I've spent some time on the fine manual. What's new?
Location: http://hg.localdomain.org/dovecot-2.0-man
So I don't have to flood the wiki with attachments.
As soon as the manual pages are complete, they
Timo,
After straightening out some issues with Axel's spec file, I'm back to
poking at this.
On 5/25/10 3:14 PM, Timo Sirainen t...@iki.fi wrote:
So instead of having separate proxies and mail servers, have only hybrids
everywhere? I guess it would almost work, except proxy_maybe isn't yet
Timo,
On 5/31/10 6:04 AM, Timo Sirainen t...@iki.fi wrote:
Well .. maybe you could use separate services. Have the proxy listen on
public IP and the backend listen on localhost. Then you can do:
local_ip 127.0.0.1 {
  passdb {
    ..
  }
}
and things like that. I think it would work,
Timo,
On 5/31/10 4:13 PM, Timo Sirainen t...@iki.fi wrote:
You need to put the other passdb/userdb to the external IP:
local 1.2.3.4 {
  userdb {
    driver = passwd
  }
  passdb {
    driver = sql
    args = /etc/dovecot/proxy-sqlite.conf
  }
}
It still doesn't seem to work. I tried this, with
Timo,
On 5/31/10 4:36 PM, Timo Sirainen t...@iki.fi wrote:
The passdbs and userdbs are checked in the order they're defined. You could
add them at the bottom. Or probably more easily:
local 128.223.143.138 {
  passdb {
    driver = sql
    args = ..
  }
  passdb {
    driver = pam
  }
Timo,
On 5/31/10 5:09 PM, Timo Sirainen t...@iki.fi wrote:
Right .. it doesn't work exactly like that I guess. Or I don't remember :)
Easiest to test with:
doveconf -f lip=128.223.142.138 -n
That looks better:
[r...@cc-popmap7 ~]# doveconf -f lip=128.223.142.138 -h |grep -B1 -A7 passdb
}
Timo,
On 5/31/10 5:34 PM, Brandon Davidson brand...@uoregon.edu wrote:
Still not sure why it's not proxying though. The config looks good but it's
still using PAM even for the external IP.
I played with subnet masks instead of IPs and using remote instead of local,
as well as setting
Timo,
On 5/31/10 6:56 PM, Timo Sirainen t...@iki.fi wrote:
Oh, you're right. For auth settings currently only protocol blocks work. It
was a bit too much trouble to make local/remote blocks to work. :)
That's too bad! Any hope of getting support for this and
director+proxy_maybe anytime
Axel,
On 5/30/10 10:22 AM, Axel Thimm axel.th...@atrpms.net wrote:
Oh, the spec file overrides CFLAGS and doesn't contain -std=gnu99?
The config.log for RHEL5/x86_64 says:
CFLAGS='-std=gnu99 -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
-fexceptions -fstack-protector
On 5/30/10 2:49 PM, Axel Thimm axel.th...@atrpms.net wrote:
How are your %optflags (which is the same as $RPM_OPT_FLAGS) merged
into the build if it is not passed to make? And it would yield the
same CFLAGS as above (merged default optflags with what configure adds
to it).
They're
Hi David,
-Original Message-
From: David Halik
It looks like we're still working towards a layer 7 solution anyway.
Right now we have one of our student programmers hacking Perdition with
a new plugin for dynamic username caching, storage, and automatic
failover. If we get it
Hi David,
-Original Message-
From: David Halik
I've been running both patches and so far they're stable with no new
crashes, but I haven't really seen any better behavior, so I don't
know if it's accomplishing anything. =)
Still seeing entire uidlist list dupes after the list
David,
-Original Message-
From: dovecot-bounces+brandond=uoregon@dovecot.org
[mailto:dovecot-
There are ways of doing this in MySQL, with heartbeats etc (which we've
discussed before), but then I'm back to MySQL again. Maybe MySQL just
has to be the way to go in this case.
David,
-Original Message-
From: David Halik [mailto:dha...@jla.rutgers.edu]
*sigh*, it looks like there still might be the occasional user visible
issue. I was hoping that once the assert stopped happening, and the
process stayed alive, that the users wouldn't see their inbox
David,
Though we aren't using NFS, we do have a BigIP directing IMAP and POP3
traffic to multiple Dovecot stores. We use MySQL authentication and the
proxy_maybe option to keep users on the correct box. My tests using an
external proxy box didn't significantly reduce the load on the stores
Timo,
-Original Message-
From: Timo Sirainen [mailto:t...@iki.fi]
On 25.1.2010, at 21.30, Brandon Davidson wrote:
If it could be set up to just fall back to using a local connection in the
event of a SQL server outage, that might help things a bit. Anyone know how
that might
Timo,
On 1/25/10 12:31 PM, Timo Sirainen t...@iki.fi wrote:
I don't think it's immediate.. But it's probably something like:
- notice it's not working - reconnect
- requests are queued
- reconnect fails, hopefully soon, but MySQL connect at least fails in max.
10 seconds
- reconnect
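The recovery sequence above can be sketched as a bounded-backoff retry loop. This is only an illustration of the described behaviour, not Dovecot's actual code; the connect callable and the delay values are stand-ins:

```python
import time

def retry_connect(connect, attempts=4, base_delay=0.1, max_delay=10.0):
    """Keep retrying a failed (re)connect while requests stay queued.
    Each failed attempt waits a bounded, growing delay, mirroring the
    'reconnect fails in at most ~10 seconds' behaviour described above."""
    for n in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if n == attempts - 1:
                raise                      # give up: still unreachable
            time.sleep(min(base_delay * 2 ** n, max_delay))
```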
David,
-Original Message-
From: dovecot-bounces+brandond=uoregon@dovecot.org
[mailto:dovecot-
Our physical setup is 10 Centos 5.4 x86_64 IMAP/POP servers, all with
the same NFS backend where the index, control, and Maildir's for the
users reside. Accessing this are direct
Cor,
On 1/22/10 1:05 PM, Cor Bosman c...@xs4all.nl wrote:
Pretty much the same as us as well. 35 imap servers. 10 pop servers.
clustered pair of 6080s, with about 250 15K disks. We're seeing some
corruption as well. I myself am using imap extensively and regularly have
problems with my
David,
On 1/22/10 12:34 PM, David Halik dha...@jla.rutgers.edu wrote:
We currently have IP session 'sticky' on our L4's and it didn't help all
that much. Yes, it reduces thrashing on the backend, but ultimately it
won't help the corruption. Like you said, multiple logins will still go
to
Hi David,
On 1/14/10 3:13 PM, David Halik dha...@jla.rutgers.edu wrote:
FYI, we backed out of the noac change today. When our 20K accounts
started coming to work the NetApp NFS server was pushing 70% CPU usage
and 25K NFS Ops/s, which resulted in all kinds of other havoc as normal
services
Timo,
-Original Message-
From: Timo Sirainen
1721 is not in the recs[] list, since it's sorted and the first one is
1962.
So there's something weird going on: it's in the filename hash table, but
not in the array. I'll try to figure it out later..
I hope your move is going
Timo,
On 12/23/09 8:37 AM, David Halik dha...@jla.rutgers.edu wrote:
I switched all of our servers to dotlock_use_excl=no last night, but
we're still seeing the errors:
We too have set dotlock_use_excl = no. I'm not seeing the Stale NFS file
handle message any more, but I am still seeing a
We've started seeing the maildir_uidlist_records_array_delete assert crash as
well. It always seems to be preceded by a 'stale NFS file handle' error from
the same user on a different connection.
Dec 22 10:12:20 oh-popmap5p dovecot: imap: user=apbao, rip=a.a.a.a, pid=2439:
Hi Timo,
We've been running Dovecot with Maildir on NFS for quite a while - since
back in the 1.0 days I believe. I'm somewhat new here. Anyway...
The Wiki article on NFS states that 1.1 and newer will flush attribute
caches if necessary with mail_nfs_storage=yes. We're running 1.2.8 with
that
Timo,
-Original Message-
I'm not really sure why these are happening. I anyway changed them from
being assert-crashes to just logged errors. I'm interested to find out
what it logs now and if there are any user-visible errors.
http://hg.dovecot.org/dovecot-1.2/rev/e47eb506eebd
-Original Message-
On Sun, 2009-11-22 at 23:54 +0100, Edgar Fuß wrote:
I'm getting this Panic with some users on dovecot-1.2.7:
Panic: file maildir-uidlist.c: line 1242
(maildir_uidlist_records_drop_expunges): assertion failed:
(recs[i]->uid > rec->uid)
I'm not really sure
We've had this reoccur twice this week. In both cases, it seems to hit a
swath of machines all within a few minutes. For some reason it's been
limited to the master serving pop3 only. In all cases, the logging
socket at fd 5 had gone missing.
I haven't applied the fd leak detection patch, but I
Hi Timo,
-Original Message-
From: Timo Sirainen [mailto:t...@iki.fi]
On Thu, 2009-10-29 at 12:08 -0700, Brandon Davidson wrote:
I haven't applied the fd leak detection patch, but I do have lsof output
and a core file available here:
http://uoregon.edu/~brandond/dovecot-1.2.6
Thomas,
On 10/22/09 1:29 AM, Thomas Hummel hum...@pasteur.fr wrote:
On Wed, Oct 21, 2009 at 09:39:22AM -0700, Brandon Davidson wrote:
As a contrasting data point, we run NFS + random redirects with almost no
problems.
Thanks for your answer as well.
What mailbox format are you using
Hi Marco,
On 10/22/09 1:50 AM, Marco Nenciarini mnen...@prato.linux.it wrote:
This morning it happened another time, another time during the daily
cron execution.
Oct 22 06:26:57 server dovecot: pop3-login: Panic: Leaked file fd 5: dev
0.12 inode 1005
Oct 22 06:26:57 server dovecot:
On 10/21/09 8:59 AM, Guy wyldf...@gmail.com wrote:
Our current setup uses two NFS mounts accessed simultaneously by two
servers. Our load balancing tries to keep a user on the same server whenever
possible. Initially we just had roundrobin load balancing which led to index
corruption.
The
I seem to have run into the same issue on two of our 12 Dovecot servers
this morning:
Oct 15 03:41:51 oh-popmap5p dovecot: dovecot: child 7529 (login)
returned error 89 (Fatal failure)
Oct 15 03:41:51 oh-popmap5p dovecot: dovecot: child 7532 (login)
returned error 89 (Fatal failure)
Oct 15
Hi Timo,
-Original Message-
From: Timo Sirainen [mailto:t...@iki.fi]
This just shouldn't be happening. Are you using NFS? Anyway this should
replace the crash with a nicer error message:
http://hg.dovecot.org/dovecot-1.2/rev/6c6460531514
Yes, we've got a pool of servers with
On Red Hat based distros, do:
echo 'DAEMON_COREFILE_LIMIT=unlimited' >> /etc/sysconfig/dovecot
service dovecot restart
Might be worth putting in the wiki if it's not there already?
-Brad
-Original Message-
== /var/log/dovecot/dovecot.log ==
Oct 15 09:07:33 master: Info:
We recently upgraded from Dovecot 1.2.4 to 1.2.6 (with the sieve patches
of course). Everything has been running quite well since the upgrade.
The occasional issue with assert-crashing when expunging has gone away.
However, one of our users seems to have triggered a new issue. She's
been the only
Timo,
-Original Message-
-O2 compiling has dropped one stage from the backtrace, but I think this
will fix the crash:
snip
I guess it would be time for 1.2.7 somewhat soon..
Thanks! As always, you're one step ahead of us with the bug fixes! I've
got one more for you that just popped
Hi all,
We have a number of machines running Dovecot 1.2.4 that have been assert
crashing occasionally. It looks like it's occurring when the users
expunge their mailboxes, but I'm not sure as I can't reproduce it
myself. The error in the logs is:
Oct 6 07:33:09 oh-popmap3p dovecot: imap:
Hi all,
We recently attempted to update our Dovecot installation to version
1.2.5. After doing so, we noticed a constant stream of crash messages in
our log file:
Sep 22 15:58:41 hostname dovecot: imap-login: Login: user=USERNAME,
method=PLAIN, rip=X.X.X.X, lip=X.X.X.X, TLS
Sep 22 15:58:41
Tom,
Tom Diehl wrote:
I just updated to dovecot 1.2.5 on centos5.
1.2.4 did not show this problem. I am going to roll back for the time being
but I am willing to do whatever I need to to fix this.
This is an x86_64 system. filesystem is ext3.
I am now seeing the following in the logs:
Sep