Yes, sorry, Wesley Craig's response pointed me in that direction and that
definitely seems to be the problem.
How many mailboxes are returned by these LIST operations? We run Horde
with Cyrus here, but we have no shared mailboxes and no problems with high
load on the Cyrus frontends.
We set the following in imapd.conf:
sharedprefix: ~ Public Folders
(We don't use altnamespace.)
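For context, the relevant imapd.conf fragment would look something like this (a sketch; the option names are from the imapd.conf man page, the values are illustrative rather than our exact config):

```
# imapd.conf fragment (illustrative values)
# As I read the man page, sharedprefix only changes what clients see
# when the alternate namespace is in use; with altnamespace off (the
# default), users see the internal user.* and shared names directly.
altnamespace: no
sharedprefix: Shared Folders
```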
--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmad...@ivytech.edu
Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ:
Wesley Craig wrote:
On 08 Dec 2009, at 18:55, Andrew Morgan wrote:
How many mailboxes are returned by these LIST operations? We run
Horde with Cyrus here, but we have no shared mailboxes and no
problems with high load on the Cyrus frontends.
How do you have IMP configured? As I
On 12/09/2009 10:43 AM, John Madden wrote:
Yes, sorry, Wesley Craig's response pointed me in that direction and that
definitely seems to be the problem.
How many mailboxes are returned by these LIST operations? We run Horde
with Cyrus here, but we have no shared mailboxes and no problems with
0.23 seconds on a 35MB mailboxes file. I thought I saw in one of your
other e-mails that yours was taking about one second?
Yeah, .95 seconds in my case. Even with a 4-cpu box, our user load
makes that intolerable; the latency causes things to back up.
John
read(0, "0002 LIST \"\" INBOX.*\r\n", 4096) = 26
read(0, "0003 LIST \"\" user.*\r\n", 4096) = 25
read(0, "0004 LIST \"\" *\r\n", 4096) = 20
Those LIST queries seem a little odd coming from a normal user account in
Cyrus. Are you logging in as a Cyrus admin account? Why is the client
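A convenience sketch (mine, not from the thread): pull the IMAP command text out of strace-style read() lines like the ones above, so long transcripts are easier to scan. It assumes strace's default C-style escaping of the buffer argument.

```python
import re

# Sample lines in the shape shown above (strace-escaped buffers).
lines = [
    'read(0, "0002 LIST \\"\\" INBOX.*\\r\\n", 4096) = 26',
    'read(0, "0003 LIST \\"\\" user.*\\r\\n", 4096) = 25',
]

def imap_commands(strace_lines):
    """Extract the client command from each strace read() line."""
    out = []
    for line in strace_lines:
        m = re.search(r'read\(\d+, "(.*)", \d+\)', line)
        if m:
            # Undo the two escapes we care about here.
            cmd = m.group(1).replace('\\r\\n', '').replace('\\"', '"')
            out.append(cmd)
    return out

print(imap_commands(lines))
# ['0002 LIST "" INBOX.*', '0003 LIST "" user.*']
```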
On Tue, 8 Dec 2009, John Madden wrote:
read(0, "0002 LIST \"\" INBOX.*\r\n", 4096) = 26
read(0, "0003 LIST \"\" user.*\r\n", 4096) = 25
read(0, "0004 LIST \"\" *\r\n", 4096) = 20
Those LIST queries seem a little odd coming from a normal user account in
Cyrus. Are you logging in as a
Do your users have access to each other's mailboxes? Is there a large
number of results?
a02 namespace
* NAMESPACE (("INBOX." ".")) (("user." ".")) (("" "."))
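That NAMESPACE reply can be picked apart mechanically; a small sketch of mine (regex-based, assuming the RFC 2342 quoting that the list archive stripped from the response):

```python
import re

def parse_namespace(line):
    """Return [(prefix, delimiter), ...] for the personal, other-users,
    and shared namespaces in an RFC 2342 NAMESPACE response."""
    return re.findall(r'\(\("([^"]*)" "([^"]*)"\)\)', line)

# The response quoted above, with server-side quoting restored.
resp = '* NAMESPACE (("INBOX." ".")) (("user." ".")) (("" "."))'
print(parse_namespace(resp))
# [('INBOX.', '.'), ('user.', '.'), ('', '.')]
```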
On Tue, 8 Dec 2009, John Madden wrote:
Do your users have access to each other's mailboxes? Is there a large
number of results?
a02 namespace
* NAMESPACE (("INBOX." ".")) (("user." ".")) (("" "."))
Do each of these LIST operations take a long time to perform?
read(0, "0002 LIST \"\"
read(0, "0002 LIST \"\" INBOX.*\r\n", 4096) = 26
read(0, "0003 LIST \"\" user.*\r\n", 4096) = 25
read(0, "0004 LIST \"\" *\r\n", 4096) = 20
Maybe I'm losing track of the original thread... You said there was high
cpu usage and slowdown during login. Did you track it back to these LIST
On Dec 8, 2009, at 4:03 PM, John Madden wrote:
Do your users have access to each other's mailboxes? Is there a large
number of results?
a02 namespace
* NAMESPACE (("INBOX." ".")) (("user." ".")) (("" "."))
We set the following in imapd.conf:
sharedprefix: ~ Public Folders
That should avoid
On Tue, 8 Dec 2009, John Madden wrote:
read(0, "0002 LIST \"\" INBOX.*\r\n", 4096) = 26
read(0, "0003 LIST \"\" user.*\r\n", 4096) = 25
read(0, "0004 LIST \"\" *\r\n", 4096) = 20
Maybe I'm losing track of the original thread... You said there was high
cpu usage and slowdown during login.
On 08 Dec 2009, at 18:55, Andrew Morgan wrote:
How many mailboxes are returned by these LIST operations? We run
Horde with Cyrus here, but we have no shared mailboxes and no
problems with high load on the Cyrus frontends.
How do you have IMP configured? As I recall, at least in older
In a 2.3.15 murder I'm seeing high frontend server cpu usage and I'm
wondering if it's normal or if there's something I can do to reduce it.
What are the bottlenecks that can lead to proxyd pegging a cpu for
several seconds, particularly during login (using imapproxy to reduce
logins appears
On 07 Dec 2009, at 10:33, John Madden wrote:
In a 2.3.15 murder I'm seeing high frontend server cpu usage and I'm
wondering if it's normal or if there's something I can do to reduce
it.
What are the bottlenecks that can lead to proxyd pegging a cpu for
several seconds, particularly during
At a guess, it sounds like load from LIST. You should be able to see
what's causing the load if you have several seconds. For example,
enable telemetry and look for long turnaround. Or use strace (or
equiv). Have you experimented with foolstupidclients? What is your
client mix?
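For reference, foolstupidclients is just an imapd.conf switch; a sketch (the behavior description is paraphrased from the man page, not measured here):

```
# imapd.conf fragment (illustrative)
# When enabled, a client's broad LIST "" * is rewritten to
# LIST "" INBOX* so each login doesn't enumerate every visible mailbox.
foolstupidclients: 1
```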
On Mon, 7 Dec 2009, John Madden wrote:
At a guess, it sounds like load from LIST. You should be able to see
what's causing the load if you have several seconds. For example,
enable telemetry and look for long turnaround. Or use strace (or
equiv). Have you experimented with
-----Original Message-----
From: Andrew Morgan mor...@orst.edu
Sent: December 7, 2009 5:37 PM
To: John Madden jmad...@ivytech.edu
Cc: info-cyrus@lists.andrew.cmu.edu info-cyrus@lists.andrew.cmu.edu
Subject: Re: proxyd cpu usage
On Mon, 7 Dec 2009, John Madden wrote:
At a guess, it sounds like
On Mon, Dec 07, 2009 at 01:37:28PM -0800, Andrew Morgan wrote:
On Mon, 7 Dec 2009, John Madden wrote:
At a guess, it sounds like load from LIST. You should be able to see
what's causing the load if you have several seconds. For example,
enable telemetry and look for long turnaround.