Stephen Stogner wrote:
Hello,
We have an S10U5 server sharing NFS exports from ZFS. While using an NFS
mount as the syslog destination for 20 or so busy mail servers,
we have noticed that throughput becomes severely degraded after a short time. I have
tried disabling the zil,
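The two settings mentioned above can be sketched roughly as follows. The pool/dataset name is illustrative, and on releases of the S10U5 era the ZIL could only be disabled globally via /etc/system, not per dataset:

```shell
# Share a ZFS dataset over NFS (pool/dataset name "tank/logs" is illustrative)
zfs set sharenfs=on tank/logs

# S10U5-era: disable the ZIL globally via /etc/system (takes effect on
# reboot; trades synchronous-write safety for NFS write throughput)
echo "set zfs:zil_disable=1" >> /etc/system
```

Later releases replaced the global tunable with a per-dataset `sync=disabled` property.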
Stephen Stogner wrote:
True, we could direct all the syslog data at the host itself, but the
underlying performance issue remains the same. We have used NFS
shares for log hosts and mail hosts, and we are looking at a ZFS-based
mail store with NFS mounts from
Richard Elling wrote:
I was able to reproduce this in b93, but might have a different
interpretation of the conditions. More below...
Ross Smith wrote:
A little more information today. I had a feeling that ZFS would
continue quite some time before giving an error, and today I've shown
Let's stop feeding the troll...
-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Richard Elling
Sent: Thu 11/8/2007 11:45 PM
To: can you guess?
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Yager on ZFS
can you guess? wrote:
CERN was using relatively cheap disks
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
William Loewe
I'm using a Sun Fire X4500 Thumper and trying to get some
sense of the best performance I can get from it with zfs.
I'm running without mirroring or raid, and have checksumming
turned off. I built the zfs
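A configuration like the one described (no redundancy, checksums off) can be sketched as below; pool and device names are illustrative. Note that turning checksums off removes ZFS's ability to detect silent corruption:

```shell
# Striped pool across plain vdevs: no mirroring, no raidz
# (device names are illustrative for a Thumper's disks)
zpool create tank c0t0d0 c1t0d0 c4t0d0 c5t0d0

# Turn off checksumming on the pool's root dataset
zfs set checksum=off tank
```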
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Thomas Garner
So it is expected behavior on my Nexenta alpha 7 server for Sun's nfsd
to stop responding after 2 hours of running a bittorrent client over
nfs4 from a linux client, causing zfs snapshots to hang and requiring
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of mike
Sent: Wednesday, June 20, 2007 9:30 AM
I would prefer something like 15+1 :) I want ZFS to be able to detect
and correct errors, but I do not need to squeeze all the performance
out of it (I'll be using it as a home
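A hedged sketch of the "15+1" layout requested above: a single raidz vdev of 16 disks gives one disk's worth of parity (device names illustrative). A stripe this wide trades random-I/O performance for capacity, which matches the poster's stated priority:

```shell
# One 16-disk single-parity raidz vdev: 15 disks of data, 1 of parity
zpool create tank raidz \
    c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
    c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
    c1t4d0 c1t5d0 c1t6d0 c1t7d0
```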
I would very much appreciate hearing from anyone who has experience running
large ZFS pools on T2000s created out of vdevs provided as iSCSI targets from
X4500s (Thumpers). Please respond with positive or negative experiences,
either on or off list.
thanks in advance,
paul
Has anyone done benchmarking on the scalability and performance of zpool import
in terms of the number of devices in the pool on recent opensolaris builds?
In other words, what would the relative performance be for zpool import for
the following three pool configurations on multi-pathed 4G FC
From: Eric Schrock [mailto:[EMAIL PROTECTED]
Sent: Monday, February 26, 2007 12:05 PM
The slow part of zpool import is actually discovering the
pool configuration. This involves examining every device on
the system (or every device within a 'import -d' directory)
and seeing if it has
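The behavior Eric describes shows up in how import is invoked; restricting discovery with -d limits the device scan (the directory path and pool name are illustrative):

```shell
# Default: examines every device on the system looking for pool labels
zpool import

# Restrict discovery to one directory of device nodes or files,
# avoiding a scan of unrelated devices
zpool import -d /dev/dsk tank
```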
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ed Gould
Sent: Friday, January 26, 2007 3:38 PM
Yes, I agree. I'm sorry I don't have the data that Jim presented at
FAST, but he did present actual data. Richard Elling (I believe it
was Richard) has also posted some