Note that ext3 effectively does the same thing as ZFS on fsync() - because
the journal layer is block based and does not know which block belongs
to which file, the entire journal must be applied to the filesystem to
achieve the expected fsync() semantics (at least, with data=ordered, it
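For context, the call whose cost the thread is debating is the plain POSIX
write-then-fsync sequence. A minimal C sketch follows; the file name is made
up purely for illustration and is not anything Cyrus actually uses:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* "example.db" is a hypothetical name, just for illustration. */
    int fd = open("example.db", O_WRONLY | O_CREAT | O_APPEND, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    const char rec[] = "one small record\n";
    if (write(fd, rec, sizeof(rec) - 1) != (ssize_t)(sizeof(rec) - 1)) {
        perror("write");
        return 1;
    }

    /* On ext3 with data=ordered (or on the ZFS releases described above),
     * this single call can end up flushing far more than this file's own
     * dirty blocks, because the journal/intent log is written out as a unit. */
    if (fsync(fd) < 0) {
        perror("fsync");
        return 1;
    }

    close(fd);
    return 0;
}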
Certainly data journalling is the exception, rather than the rule.
Off the top of my head, I can't think of another mainstream
filesystem that does it (aside from the various log-structured
filesystems such as Waffle and Reiser4).
AFAIK you get it with UFS + gjournal, dunno if that counts as
On Tue, 27 Nov 2007, Andrew McNamara wrote:
Certainly data journalling is the exception, rather than the rule.
Off the top of my head, I can't think of another mainstream
filesystem that does it (aside from the various log-structured
filesystems such as Waffle and Reiser4).
AFAIK you get it
Andrew McNamara wrote:
Note that ext3 effectively does the same thing as ZFS on fsync() - because
the journal layer is block based and does not know which block belongs
to which file, the entire journal must be applied to the filesystem to
achieve the expected fsync() semantics (at least, with
On Thu, 22 Nov 2007, Gabor Gombas wrote:
On Tue, Nov 20, 2007 at 09:56:37AM -0800, David Lang wrote:
for cyrus you should have the same sort of requirements that you would have for
a database server, including the fact that without a battery-backed disk cache
(or solid state drive) to
On Tue, Nov 20, 2007 at 09:56:37AM -0800, David Lang wrote:
for cyrus you should have the same sort of requirements that you would have for
a database server, including the fact that without a battery-backed disk cache
(or solid state drive) to handle your updates, you end up being
On 20 Nov 07, at 1756, David Lang wrote:
however a fsync on a journaled filesystem just means the data needs to be
written to the journal, it doesn't mean that the journal needs to be flushed
to disk.
on ext3 if you have data=journaled then your data is in the journal as well
and all
On Wed, 21 Nov 2007, Ian G Batten wrote:
however a fsync on a journaled filesystem just means the data needs to be
written to the journal, it doesn't mean that the journal needs to be flushed
to disk.
on ext3 if you have data=journaled then your data is in the journal as well
and all
Vincent Fox [EMAIL PROTECTED] wrote:
This thought has occurred to me:
ZFS prefers reads over writes in its scheduling.
I think you can see where I'm going with this. My WAG is something
related to Pascal's, namely latency. What if my write requests to
mailboxes.db or deliver.db start
I am wondering about the use of fsync() on journal'd file systems
as described below. Shouldn't there be much less (or very little)
use of fsync() on these types of systems? Let the journal layer
do its job and not force it within cyrus? This would likely save
a lot of system overhead.
We went through a similar discussion last year in OpenAFS land, and
came to the same conclusion -- basically, if your filesystem is
reasonably reliable (such as ZFS is), and you can trust your
underlying storage not to lose transactions that are in-cache during a
'bad event', the added
On Tue, 20 Nov 2007, Ian G Batten wrote:
On 20 Nov 07, at 1332, Michael R. Gettes wrote:
I am wondering about the use of fsync() on journal'd file systems
as described below. Shouldn't there be much less (or very little)
use of fsync() on these types of systems? Let the journal
Rob Banz [EMAIL PROTECTED] wrote:
We went through a similar discussion last year in OpenAFS land, and
came to the same conclusion -- basically, if your filesystem is
reasonably reliable (such as ZFS is), and you can trust your
underlying storage not to lose transactions that are in-cache during
Pascal Gienger wrote:
Rob Banz [EMAIL PROTECTED] wrote:
We went through a similar discussion last year in OpenAFS land, and
came to the same conclusion -- basically, if your filesystem is
reasonably reliable (such as ZFS is), and you can trust your
underlying storage not to lose transactions
On Nov 20, 2007, at 15:38, Ken Murchison wrote:
Pascal Gienger wrote:
Rob Banz [EMAIL PROTECTED] wrote:
We went through a similar discussion last year in OpenAFS land, and
came to the same conclusion -- basically, if your filesystem is
reasonably reliable (such as ZFS is), and you can trust
On Nov 20, 2007, at 14:57, Pascal Gienger wrote:
Rob Banz [EMAIL PROTECTED] wrote:
We went through a similar discussion last year in OpenAFS land, and
came to the same conclusion -- basically, if your filesystem is
reasonably reliable (such as ZFS is), and you can trust your
underlying
Wouldn't it be nice to have a configuration option to completely turn off
fsync() in Cyrus? If you want, with a BIG WARNING in the doc stating NOT TO
USE IT unless you know what you're doing. :)
It's already in imapd.conf(8):
skiplist_unsafe
I see most of our writes going to the spool
On 17 Nov 07, at 0909, Rob Mueller wrote:
This shouldn't really be a problem. Yes the whole file is locked for the
duration of the write, however there should be only 1 fsync per
transaction, which is what would introduce any latency. The actual writes
to the db file itself should be
On Mon, Nov 19, 2007 at 08:50:16AM +, Ian G Batten wrote:
On 17 Nov 07, at 0909, Rob Mueller wrote:
This shouldn't really be a problem. Yes the whole file is locked for the
duration of the write, however there should be only 1 fsync per
transaction, which is what would introduce any
In production releases of ZFS fsync() essentially triggers sync() (fixed in
Solaris Next).
[...]
Skiplist requires two fsync calls per transaction (single
untransactioned actions are also one transaction), and it
also locks the entire file for the duration of said
transaction, so you can't
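To make the cost concrete, here is a rough, simplified sketch of the pattern
being described -- not the actual Cyrus skiplist code -- in which an exclusive
lock is held across the whole transaction and two fsync() calls bound it. The
file name and record format are made up for illustration:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Toy transaction: lock the entire file, append the record, fsync,
 * append a commit marker, fsync again, unlock. */
static int toy_transaction(int fd, const char *record)
{
    struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

    if (fcntl(fd, F_SETLKW, &fl) < 0)          /* lock the whole file */
        return -1;

    ssize_t len = (ssize_t)strlen(record);
    if (write(fd, record, len) != len)         /* append the new record */
        goto fail;
    if (fsync(fd) < 0)                         /* fsync #1: record on disk */
        goto fail;

    const char commit[] = "COMMIT\n";
    if (write(fd, commit, sizeof(commit) - 1) != (ssize_t)(sizeof(commit) - 1))
        goto fail;
    if (fsync(fd) < 0)                         /* fsync #2: commit marker */
        goto fail;

    fl.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &fl);                   /* release the lock */
    return 0;

fail:
    fl.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &fl);
    return -1;
}

int main(void)
{
    int fd = open("toy.db", O_WRONLY | O_CREAT | O_APPEND, 0600); /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }
    if (toy_transaction(fd, "one record\n") < 0)
        perror("toy_transaction");
    close(fd);
    return 0;
}

Readers blocked behind that file lock wait for both fsync() calls to finish,
which is why the per-fsync latency of the underlying filesystem matters so much.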
On Tue, 20 Nov 2007 15:40:58 +1100, Andrew McNamara [EMAIL PROTECTED] said:
In production releases of ZFS fsync() essentially triggers sync() (fixed
in
Solaris Next).
[...]
Skiplist requires two fsync calls per transaction (single
untransactioned actions are also one transaction),
On Mon, 19 Nov 2007 22:51:43 -0800, Vincent Fox [EMAIL PROTECTED] said:
Bron Gondwana wrote:
Lucky we run reiserfs then, I guess...
I suppose this is inappropriate topic-drift, but I wouldn't be
too sanguine about Reiser. Considering the driving force behind
it is in a murder
Bron Gondwana wrote:
Lucky we run reiserfs then, I guess...
I suppose this is inappropriate topic-drift, but I wouldn't be
too sanguine about Reiser. Considering the driving force behind
it is in a murder trial last I heard, I sure hope the good bits of that
filesystem get turned over to
This is where I think the actual user count may really influence this
behavior. On our system, during heavy times, we can see writes to the
mailboxes file separated by no more than 5-10 seconds.
If you're constantly freezing all cyrus processes for the duration of
those writes, and those
Rob Mueller [EMAIL PROTECTED] wrote:
About 30% of all I/O is to mailboxes.db, most of which is read. I
haven't personally deployed a split-meta configuration, but I
understand the meta files are similarly heavy I/O concentrators.
That sounds odd.
Given the size and hotness of
--On Friday, November 16, 2007 7:39 AM +0100 Pascal Gienger
[EMAIL PROTECTED] wrote:
Solaris 10 does this in my case. Via dtrace you'll see that open() on the
mailboxes.db and read-calls do not exceed microsecond ranges.
mailboxes.db is not the problem here. It is entirely cached and rarely
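A crude way to eyeball the same thing without dtrace is to time the calls
directly. A minimal sketch; the path below is only a placeholder for
<configdirectory>/mailboxes.db, and on Solaris you may need -lrt for
clock_gettime:

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec t0, t1;
    char buf[4096];

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* Placeholder path: substitute your own configdirectory. */
    int fd = open("/var/imap/mailboxes.db", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf));
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);

    long us = (t1.tv_sec - t0.tv_sec) * 1000000L +
              (t1.tv_nsec - t0.tv_nsec) / 1000L;
    printf("open()+read() of %zd bytes took %ld microseconds\n", n, us);
    return 0;
}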
On 15 Nov 07, at 1504, Michael Bacon wrote:
Interesting thought. We haven't gone to ZFS yet, although I like the idea
a lot. My hunch is it's an enormous win for the mailbox partitions, but
perhaps it's not a good thing for the meta partition. I'll have to let
someone else who knows
Dale Ghent wrote:
On Nov 16, 2007, at 1:39 AM, Pascal Gienger wrote:
Solaris 10 does this in my case. Via dtrace you'll see that open() on the
mailboxes.db and read-calls do not exceed microsecond ranges. mailboxes.db
is not the problem here. It is entirely cached and rarely written
On Nov 16, 2007, at 1:39 AM, Pascal Gienger wrote:
Solaris 10 does this in my case. Via dtrace you'll see that open() on the
mailboxes.db and read-calls do not exceed microsecond ranges. mailboxes.db
is not the problem here. It is entirely cached and rarely written
(creating, deleting
On 15 Nov 2007, at 18:25, Rob Mueller wrote:
About 30% of all I/O is to mailboxes.db, most of which is read. I
haven't personally deployed a split-meta configuration, but I
understand the meta files are similarly heavy I/O concentrators.
That sounds odd.
Yeah, it's not right. I was reading
Dale Ghent wrote:
On Nov 16, 2007, at 2:56 PM, Ken Murchison wrote:
Dale Ghent wrote:
On Nov 16, 2007, at 1:39 AM, Pascal Gienger wrote:
Solaris 10 does this in my case. Via dtrace you'll see that open() on the
mailboxes.db and read-calls do not exceed microsecond ranges. mailboxes.db
Michael Bacon [EMAIL PROTECTED] wrote:
I have heard tell of funny behavior that ZFS does if you've got
battery-backed write caches on your arrays.
/etc/system:
set zfs:zfs_nocacheflush=1
is your friend. Without that, ZFS' performance on hardware arrays with
large RAM caches is abysmal.
Interesting thought. We haven't gone to ZFS yet, although I like the idea
a lot. My hunch is it's an enormous win for the mailbox partitions, but
perhaps it's not a good thing for the meta partition. I'll have to let
someone else who knows more about ZFS and write speeds vs. read speeds
/etc/system:
set zfs:zfs_nocacheflush=1
Yep already doing that, under Solaris 10u4. Have dual array controllers in
active-active mode. Write-back cache is enabled. Just poking in the 3510FC
menu shows cache is ~50% utilized so it does appear to be doing some work.
On 14 Nov 2007, at 23:15, Vincent Fox wrote:
We have all Cyrus lumped in one ZFS pool, with separate filesystems for
imap, mail, sieve, etc. However, I do have an unused disk in each array
such that I could setup a simple ZFS mirror pair for /var/cyrus/imap so
that the databases are
About 30% of all I/O is to mailboxes.db, most of which is read. I
haven't personally deployed a split-meta configuration, but I
understand the meta files are similarly heavy I/O concentrators.
That sounds odd.
Given the size and hotness of mailboxes.db, and in most cases the size of
On Thu, Nov 15, 2007 at 01:29:54PM -0500, Wesley Craig wrote:
On 14 Nov 2007, at 23:15, Vincent Fox wrote:
We have all Cyrus lumped in one ZFS pool, with separate filesystems for
imap, mail, sieve, etc. However, I do have an unused disk in each array
such that I could setup a
Michael Bacon wrote:
Solid state disk for the partition with the mailboxes database.
This thing is amazing. We've got one of the gizmos with a battery
backup and a RAID array of Winchester disks that it writes off to if
it loses power, but the latency levels on this thing are
On Nov 14, 2007, at 15:20, Michael Bacon wrote:
Sun doesn't make any SSDs, I don't think, but while I'm not certain, I
think the RamSan line (http://www.superssd.com/products/ramsan-400/) has
some sort of partnership with Sun. To be honest, I'm not sure which brand
we're using, but
Sun doesn't make any SSDs, I don't think, but while I'm not certain, I
think the RamSan line (http://www.superssd.com/products/ramsan-400/) has
some sort of partnership with Sun. To be honest, I'm not sure which brand
we're using, but like RamSan, it's a FC disk that slots into our SAN like
The whole meta partition as of 1.6 (so no fancy splitting of mailbox
metadata), minus the proc directory, which is on tmpfs.
-Michael
--On Wednesday, November 14, 2007 4:32 PM -0500 Rob Banz [EMAIL PROTECTED]
wrote:
On Nov 14, 2007, at 15:20, Michael Bacon wrote:
Sun doesn't make any
This thought has occurred to me:
ZFS prefers reads over writes in its scheduling.
I think you can see where I'm going with this. My WAG is something
related to Pascal's, namely latency. What if my write requests to
mailboxes.db or deliver.db start getting stacked up, due to the favoritism
On Sun, 11 Nov 2007, Bron Gondwana wrote:
250,000 mailboxes, 1,000 concurrent users, 60 million emails, 500k
deliveries/day. For us, backups are the worst thing, followed by
reiserfs's use of the BKL, followed by the need to use a ton of disks to
keep up with the i/o.
For us backups are
On Tue, Nov 13, 2007 at 10:24:22AM +, David Carter wrote:
On Sun, 11 Nov 2007, Bron Gondwana wrote:
250,000 mailboxes, 1,000 concurrent users, 60 million emails, 500k
deliveries/day. For us, backups are the worst thing, followed by
reiserfs's use of the BKL, followed by the need to use
On Tue, 13 Nov 2007, Bron Gondwana wrote:
If you're planning to lift a consistent copy of a .index file, you need
to lock it for the duration of reading it (read lock at least).
mailbox_lock_index() blocks flag updates (but this doesn't seem to be
something that imapd worries about when
At the risk of being yet one more techie who thinks he has a workaround...
I'm back (in the past two months) doing Cyrus administration after a three
year break. I ran the Cyrus instance at Duke University before, and am now
getting up to speed to run the one at UNC. At Duke we started as a
Hi,
We run a 35,000 mailbox system with no problems on Solaris. We did
a few years back have a bad time with using Berkeley DB and cyrus and
switching to skiplist fixed that. I believe that problem has been
solved though. I would recommend using multiple spools however, having
one big
On Fri, Nov 09, 2007 at 01:28:05PM -0500, John Madden wrote:
On Fri, 2007-11-09 at 19:10 +0100, Jure Pečar wrote:
I'm still on linux and was thinking a lot about trying out solaris 10,
but
stories like yours will make me think again about that ...
Agreed -- with the things I see from the
It seems silly to spend all the money for a T2000 with redundancies
and SAN and so on, and then have it choke up when it hits (for us) about
10K users. It seems everyone we talk to scratches their head why this
system with all its cores would choke. We have found even older Sun
V210 are
Eric Luyten wrote:
Another thought: if your original problem is related to a locking issue
of shared resources, visible upon imapd process termination, the rate of
writing new messages to the spool does not need to be a directly
contributing factor.
Were you experiencing the load problem
Jure Pečar wrote:
I'm still on linux and was thinking a lot about trying out solaris 10, but
stories like yours will make me think again about that ...
We are, I think, an edge case; plenty of people run Solaris Cyrus with no
problems.
To me ZFS alone is enough reason to go with Solaris. I
Jure Pečar wrote:
In my experience the brick wall you describe is what happens when disks
reach a certain point of random IO that they cannot keep up with.
The problem with a technical audience is that everyone thinks they have
a workaround or probable fix you haven't already thought of.
On Fri, 09 Nov 2007 09:40:25 -0800
Vincent Fox [EMAIL PROTECTED] wrote:
If there's something that 3 admins could do to alleviate load we did it.
The bigger problem I am seeing is that Cyrus doesn't in our
usage seem to ramp load smoothly or even predictably. It goes
fine up to a certain
On Nov 8, 2007 4:56 PM, Dan White [EMAIL PROTECTED] wrote:
Michael D. Sofka wrote:
On Thursday 04 October 2007 07:32:52 pm Rob Mueller wrote:
4. Lots of other little things
a) putting the proc dir on tmpfs is a good idea
b) make sure you have the right filesystem (on linux, reiserfs is
To close the loop since I started this thread:
We still haven't finished up the contract to get Sun out here to
get to the REAL bottom of the problem.
However, observationally we find that under high email usage, above
10K users on a Cyrus instance things get really bad. Like last
week we
Michael D. Sofka wrote:
On Thursday 04 October 2007 07:32:52 pm Rob Mueller wrote:
4. Lots of other little things
a) putting the proc dir on tmpfs is a good idea
b) make sure you have the right filesystem (on linux, reiserfs is much
better than ext3 even with ext3's dir hashing) and
On Thu, Nov 08, 2007 at 10:18:04AM -0800, Vincent Fox wrote:
Our latest line of investigation goes back to the Fastmail suggestion:
simply have multiple Cyrus binary instances on a system, each running its
own config and with its own ZFS filesystems out of the pool to use.
Since we can
On Thursday 08 November 2007 10:56:54 am Dan White wrote:
Michael D. Sofka wrote:
On Thursday 04 October 2007 07:32:52 pm Rob Mueller wrote:
4. Lots of other little things
a) putting the proc dir on tmpfs is a good idea
b) make sure you have the right filesystem (on linux, reiserfs is
Bron Gondwana wrote:
On Thu, Nov 08, 2007 at 10:18:04AM -0800, Vincent Fox wrote:
Our latest line of investigation goes back to the Fastmail suggestion:
simply have multiple Cyrus binary instances on a system, each running its
own config and with its own ZFS filesystems out of the
However, observationally we find that under high email usage, above
10K users on a Cyrus instance things get really bad. Like last
week we had a T2000 at about 10,500 users and loads of 5+ and it
was bogging down. We moved 1K users off, bringing it down to
9,500, and loads dropped to
Bron Gondwana wrote:
Also virtual interfaces means you can move an instance without having
to tell anyone else about it (but it sounds like you're going with an
all eggs in one basket approach anyway)
No, not all eggs in one basket, but better usage of resources.
It seems silly to spend
Vincent Fox [EMAIL PROTECTED] wrote:
Our working hypothesis is that CYRUS is what is choking up at a certain
activity level due to bottlenecks with simultaneous access to some shared
resource for each instance.
Did you do a
lockstat -Pk sleep 30
(with -x destructive when it complains about
On Sat, 6 Oct 2007, Rob Mueller wrote:
As it turns out, the memory leaks weren't critical, because the pages do
seem to be reclaimed when needed, though it was annoying not knowing exactly
how much memory was really free/used. The biggest problem was that with
cyrus you have millions of
On Tue, 9 Oct 2007, Andrew Morgan wrote:
On Sat, 6 Oct 2007, Rob Mueller wrote:
As it turns out, the memory leaks weren't critical, because the pages do
seem to be reclaimed when needed, though it was annoying not knowing exactly
how much memory was really free/used. The biggest problem
Yesterday I checked my own Cyrus servers to see if I was running out of
lowmem, and it sure looked like it. Lowmem had only a couple MB free, and
I had 2GB of free memory that was not being used for cache.
I checked again today and everything seems to be fine - 150MB of lowmem
free and
You can also use
vm.lower_zone_protection=size_in_mb
to protect portions of low memory. This doesn't help with caching issues but
can help prevent the kernel from getting cornered and resorting to oom-killer.
We haven't tested everything in our environment at 64bit so we've used
lower zone
On Sat, Oct 06, 2007 at 09:26:38AM -0700, Vincent Fox wrote:
ZFS with mirrors across 2 separate storage devices, means never having
to say you're sorry.
Are you using it under Linux/Fuse or OpenSolaris or other?
Regards... Todd
On Oct 6, 2007, at 5:50 AM, Rob Mueller wrote:
If it wasn't IO limit related, and it wasn't CPU limit related, then there
must be some other single resource that things were contending for.
My only guess then is it's some global kernel lock or the like.
When the load skyrocketed, it must
I suppose that 8 SATA disks for the data and four 15k SAS disks for the
metadata would be a good mix.
Yes. As I mentioned, our iostat data shows that meta-data is MUCH hotter
than email spool data.
---
Checking iostat, a rough estimate shows meta data gets 2x the rkB/s and 3x
the wkB/s vs
A data point regarding reiserfs/ext3:
We are in the process of moving from reiserfs to ext3 (with dir_index).
ext3 seems to do substantially better than reiserfs for us, especially for
read heavy loads (squatter runs at least twice as fast as it used to).
Are you comparing an old reiserfs
The iostat and sar data disagrees with it being an I/O issue.
16 gigs of RAM with about 4-6 of it being used for Cyrus
leaves plenty for ZFS caching. Our hardware seemed more than
adequate to anyone we described it to.
Yes, beyond that it's anyone's guess.
If it wasn't IO limit related, and
I think what truly scares me about reiser is those rather regular
posts to various mailing lists I'm on saying "my reiser fs went poof
and lost all my data, what should I do?"
I've commented on this before. I believe it's absolutely hardware related
rather than reiserfs related.
On Sat, 6 Oct 2007, Rob Mueller wrote:
Are you comparing an old reiserfs partition with a new ext3 one where
you've just copied the email over to? If so, that's not a fair comparison.
No, newly created partitions in both cases. Fragmented partitions are
slower still of course.
Give it a
Are you comparing an old reiserfs partition with a new ext3 one where
you've just copied the email over to? If so, that's not a fair
comparison.
No, newly created partitions in both cases. Fragmented partitions are
slower still of course.
That's strange. What mount options are/were you
On Sat, 6 Oct 2007, Rob Mueller wrote:
That's strange. What mount options are/were you using? We use/used:
reiserfs - rw,noatime,nodiratime,notail,data=journal
ext3 - noatime,nodiratime,data=journal
Same, but data=ordered in both cases
If you weren't using notail on reiserfs, that would
Personally, I've seen Solaris bottlenecking on file opens in large
directories. This was a while ago, but it was one of the major
reasons we switched to Linux -- the order of magnitude improvement in
directory scale was sure handy for 80-90K users with no quota. The
kind of blocking I'm
Vincent Fox wrote:
Wondering if anyone out there is running a LARGE Cyrus
user-base on a single or a couple of systems?
Let me define large:
25K-30K (or more) users per system
High email activity, say 2+ million emails a day
our user base is split over 6 backends:
On Oct 5, 2007, at 10:01, John Madden wrote:
I think that this is partly because ext3 does more aggressive read ahead
(which would be a mixed blessing under heavy load), partly because
reiserfs suffers from fragmentation. I imagine that there is probably a
tipping point under the
We have also come across the situation where we needed to move between
file systems. We ended up going with the Veritas file system, from which we
saw the greatest increase in performance simply by changing the main file
system. It was a big win when we realized the file system actually didn't cost
On 04 Oct 2007, at 18:33, Vincent Fox wrote:
Interesting, but this is approximately 15K users per backend. Which is
where we are now after 30K users per backend were crushed. I am much
more interested in exploring whether Cyrus hits some tipping point where
a single backend cannot
On Thursday 04 October 2007 07:32:52 pm Rob Mueller wrote:
4. Lots of other little things
a) putting the proc dir on tmpfs is a good idea
b) make sure you have the right filesystem (on linux, reiserfs is much
better than ext3 even with ext3's dir hashing) and journaling modes
On a Murder
We have talked to UCSB, which is running 30K users on a single
Sun V490 system. However they seem to have fairly low activity
levels with emails in the hundred-thousands range not millions.
We've got around 250k users on a single system, but we're in that same
boat: only about 300k
Wondering if anyone out there is running a LARGE Cyrus
user-base on a single or a couple of systems?
Let me define large:
25K-30K (or more) users per system
High email activity, say 2+ million emails a day
We have talked to UCSB, which is running 30K users on a single
Sun V490 system. However
We run a single Dell 2850 (2-dual @ 2.8GHz, 8GB RAM and 900GB internal).
We have about 29k users... but our message transfer load is much smaller than
what you describe... maybe in the order of 10k; the system is at 80%+ idle
most of the time.
We have had this setup for about 1 yr now... no
I suppose I should have given a better description:
University mail setup with 60K-ish faculty, staff, and students
all in one big pool no separation into this server for faculty
and this one for students, etc.
Load-balanced pools of smallish v240 class servers for:
SMTP
MX
AV/spam scanning
Hi,
We have around 35k users spread out on 5 different systems. The
largest of which has 12K active users and 200K messages per day. We do
our anti-spam/anti-virus on other systems before delivering to the 5
mailbox systems. I'm guessing you don't have that type of setup?
Jim
Vincent
On Oct 4, 2007, at 2:41 PM, Vincent Fox wrote:
We spent some time talking to Ken Co. at CMU on the phone
about what happens in very high loads but haven't come to a
fix for what happened to us. There may not be one. I can and
will describe all the nitty-gritty of that post-mortem in a post
One of the things Rob Banz recently did here was to move the
data/config/proc directory from a real fs to tmpfs. This reduces the
disk IO from Cyrus process creation/management.
So the way we do stuff here is that each Cyrus backend has its own
ZFS pool. That zpool is divided up into four
to 200,000 messages a day.
Jack C. Xue
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Vincent Fox
Sent: Thursday, October 04, 2007 2:05 PM
To: info-cyrus@lists.andrew.cmu.edu
Subject: LARGE single-system Cyrus installs?
Wondering if anyone out
Xue, Jack C wrote:
At Marshall University, we have 30K users (200M quota) on Cyrus. We use
a Murder Aggregation Setup which consists of 2 frontend nodes, 2 backend
nodes
Interesting, but this is approximately 15K users per backend. Which is
where we are now after 30K users per backend were
Anyhow, just wondering if we're the lone rangers on this particular
edge of the envelope. We alleviated the problem short-term by
recycling some V240 class systems with arrays into Cyrus boxes
with about 3,500 users each, and brought our 2 big Cyrus units
down to 13K-14K users each which seems
On Thu, Oct 04, 2007 at 03:33:58PM -0700, Vincent Fox wrote:
Xue, Jack C wrote:
At Marshall University, we have 30K users (200M quota) on Cyrus. We use
a Murder Aggregation Setup which consists of 2 frontend nodes, 2 backend
nodes
Interesting, but this is approximately 15K users per