On Tue, October 12, 2010 18:31, Bob Friesenhahn wrote:
On Tue, 12 Oct 2010, Saxon, Will wrote:
Another article concerning Sandforce performance:
http://www.anandtech.com/show/3667/6
[...]
When I read this I thought that it kind of eliminated Sandforce
drives from consideration as SLOG
Thanks for posting your findings. What was incorrect about the client's
config?
On Oct 7, 2010 4:15 PM, Eff Norwood sm...@jsvp.com wrote:
Figured it out - it was the NFS client. I used snoop and then some dtrace
magic to prove that the client (which was using O_SYNC) was sending very
bursty
The NFS client in this case was VMware ESXi 4.1 release build. What happened is
that the file uploader behavior was changed in 4.1 to prevent I/O contention
with the VM guests. That means when you go to upload something to the
datastore, it only sends chunks of the file instead of streaming it
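The chunked-versus-streaming distinction described above can be sketched in a few lines. This is a toy model, not ESXi's actual uploader: the function names and the per-chunk `flush()` commit point are assumptions standing in for whatever commit the 4.1 client performs between chunks.

```python
import io

def upload_streaming(src, dst, bufsize=1 << 20):
    """Stream the file: back-to-back writes, steady I/O at the server."""
    while True:
        buf = src.read(bufsize)
        if not buf:
            break
        dst.write(buf)

def upload_chunked(src, dst, chunksize=1 << 16):
    """Send a bounded chunk, commit it, then continue: the server sees
    bursts of writes separated by gaps rather than one continuous stream."""
    while True:
        buf = src.read(chunksize)
        if not buf:
            break
        dst.write(buf)
        dst.flush()  # hypothetical per-chunk commit -> bursty load

# Demo with in-memory files (no real NFS involved).
data = b"A" * 200_000
src, dst = io.BytesIO(data), io.BytesIO()
upload_chunked(src, dst)
assert dst.getvalue() == data
```

Both variants move the same bytes; the difference the thread describes is purely in the arrival pattern at the server.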
On Tue, Oct 12, 2010 at 12:09:44PM -0700, Eff Norwood wrote:
The NFS client in this case was VMware ESXi 4.1 release build. What
happened is that the file uploader behavior was changed in 4.1 to
prevent I/O contention with the VM guests. That means when you go to
upload something to the
en == Eff Norwood sm...@jsvp.com writes:
en We also tried SSDs as the ZIL which worked ok until they got
en full, then performance tanked. As I have posted before, SSDs
en as your ZIL - don't do it!
yeah, IIRC the thread went back and forth between you and me for a few
days,
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Miles Nordin
Sent: Tuesday, October 12, 2010 5:15 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Bursty writes - why?
en == Eff Norwood sm
On Tue, 12 Oct 2010, Saxon, Will wrote:
When I read this I thought that it kind of eliminated Sandforce
drives from consideration as SLOG devices, which is a pity because
the OCZ Vertex 2 EX or Vertex 2 Pro SAS otherwise look like good
candidates.
For obvious reasons, the SLOG is designed to
On Oct 12, 2010, at 3:31 PM, Bob Friesenhahn wrote:
For obvious reasons, the SLOG is designed to write sequentially. Otherwise it
would offer much less benefit. Maybe this random-write issue with Sandforce
would not be a problem?
Isn't writing from cache to disk designed to be
Maybe this random-write issue with Sandforce would not be a
problem?
It is most definitely a problem, as one needs to question the
conventional assertion of a sequential write pattern. I presented
some findings recently at the Nexenta Training Seminar in
Rotterdam. Here is a link to an
Bob Friesenhahn wrote:
On Tue, 12 Oct 2010, Saxon, Will wrote:
When I read this I thought that it kind of eliminated Sandforce
drives from consideration as SLOG devices, which is a pity because
the OCZ Vertex 2 EX or Vertex 2 Pro SAS otherwise look like good
candidates.
For obvious reasons,
The NFS client that we're using always uses O_SYNC, which is why it was
critical for us to use the DDRdrive X1 as the ZIL. I wasn't clear about the
complete system we're using; my apologies. It is:
OpenSolaris SNV_134
Motherboard: SuperMicro X8DAH
RAM: 72GB
CPU: Dual Intel 5503 @ 2.0GHz
ZIL: DDRdrive
Figured it out - it was the NFS client. I used snoop and then some dtrace magic
to prove that the client (which was using O_SYNC) was sending very bursty
requests to the system. I tried a number of other NFS clients with O_SYNC as
well and got excellent performance when they were configured
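As context for the O_SYNC behavior mentioned above, here is a minimal local sketch of what an O_SYNC open means at the syscall level. It does not involve NFS at all; it only shows that each `write()` on an O_SYNC descriptor is a separate synchronous commit, which is the request pattern that ends up in the ZIL on the server.

```python
import os
import tempfile

# O_SYNC makes every write() block until the data is on stable storage.
# A client writing through such a descriptor issues one synchronous
# request per write() call rather than one large buffered stream.
path = os.path.join(tempfile.mkdtemp(), "sync-demo")  # throwaway demo file
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    for chunk in (b"chunk-1", b"chunk-2", b"chunk-3"):
        os.write(fd, chunk)  # each call is a separate synchronous commit
finally:
    os.close(fd)
```

On an NFS mount, each of those writes would arrive at the server as a stable (synchronous) write request, which is why the choice of log device matters so much for this workload.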
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks
connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2
spares. I am using 2 x DDRdrive X1s as the ZIL. When we write anything to it,
the writes are always very bursty like this:
[pool I/O statistics output elided]
I think you are seeing ZFS store up the writes, coalesce them, then flush to
disk every 30 seconds.
Unless the writes are synchronous, the ZIL won't be used, but the writes will
be cached instead, then flushed.
If you think about it, this is far more sane than flushing to disk every time
the
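The batching behavior described above can be modeled in miniature. This is a hedged toy sketch, not ZFS's actual transaction-group machinery: the class name, fields, and 30-second interval merely mirror the description in the thread (async writes accumulate and commit in one burst; sync writes additionally hit an intent log right away).

```python
import time

class TxgBuffer:
    """Toy model of transaction-group batching: async writes accumulate
    in memory and flush as one coalesced batch when the interval expires,
    while sync writes are also logged immediately (ZIL-style)."""

    def __init__(self, flush_interval=30.0):
        self.flush_interval = flush_interval
        self.pending = []    # writes buffered in the open "txg"
        self.log = []        # immediate intent-log records (sync only)
        self.flushed = []    # coalesced batches committed to the pool
        self.last_flush = time.monotonic()

    def write(self, data, sync=False):
        if sync:
            self.log.append(data)   # sync write is logged right now
        self.pending.append(data)   # all writes join the open txg
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        self.flushed.append(list(self.pending))  # one bursty pool write
        self.pending.clear()
        self.log.clear()   # log records are obsolete once the txg commits
        self.last_flush = time.monotonic()

buf = TxgBuffer(flush_interval=30.0)
buf.write(b"async-1")               # cached only; nothing hits the log
buf.write(b"sync-1", sync=True)     # logged immediately, safe pre-flush
buf.flush()                         # commit: one coalesced burst
```

This also illustrates why the iostat trace in the original post looks bursty: the pool sees one large coalesced write per interval, while only the synchronous portion generates steady log traffic in between.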
On Wed, 6 Oct 2010, Marty Scholes wrote:
If you think about it, this is far more sane than flushing to disk
every time the write() system call is used.
Yes, it dramatically diminishes the number of copy-on-write writes and
improves the pool layout efficiency. It also saves energy.
Bob
--