Hi,
currently I'm trying to debug a very strange phenomenon on a nearly full
pool (96%). Here are the symptoms: over NFS, a find on the pool takes
a very long time, up to 30s (!) for each file. Locally, the performance
is quite normal.
What I found out so far: it seems that every NFS write
Arne,
NFS often demands that its transactions are stable before returning.
This forces ZFS to do the system call synchronously. Usually the
ZIL (code) allocates and writes a new block in the intent log chain to
achieve this.
If ever it fails to allocate a block (of the size requested) it is forced
I should also have mentioned that if the pool has a separate log device
then this shouldn't happen. Assuming the slog is big enough, it
should have enough blocks to not be forced into using main pool
device blocks.
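A quick way to sanity-check both conditions (a sketch; 'tank' is a placeholder pool name):

```shell
# 'tank' is a placeholder -- substitute your pool name.
# Does the pool have a separate intent log? Look for a 'logs'
# section in the vdev layout:
zpool status tank

# How full is the pool? Block allocation degrades sharply on
# nearly full pools (96% here), which is when the ZIL can fail
# to find blocks of the requested size:
zpool list tank
```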
Neil.
Hi!
I searched the web for hours, trying to solve the NFS/ZFS low performance issue
on my just-set-up OSOL box (snv134). The problem is discussed in many threads
but I've found no solution.
On an NFS-shared volume, I get write performance of 3.5 MB/s (!!); read
performance is about 50 MB/s.
On Mon, Jul 26, 2010 at 2:56 PM, Miles Nordin car...@ivy.net wrote:
mg == Mike Gerdts mger...@gmail.com writes:
mg it is rather common to have multiple 1 Gb links to
mg servers going to disparate switches so as to provide
mg resilience in the face of switch failures. This is not
On Friday, July 23, 2010, Garrett D'Amore wrote (Re: [zfs-discuss] NFS performance?):
Fundamentally, my recommendation is to choose NFS if your clients can
use it. You'll get a lot of potential advantages
On Sun, 2010-07-25 at 17:53 -0400, Saxon, Will wrote:
I think there may be very good reason to use iSCSI, if you're limited
to gigabit but need to be able to handle higher throughput for a
single client. I may be wrong, but I believe iSCSI to/from a single
initiator can take advantage of
Hi,
I've been searching around on the Internet to find some help with this, but
have been unsuccessful so far.
I have some performance issues with my file server. I have an OpenSolaris
server with a Pentium D 3GHz CPU, 4GB of memory, and a RAIDZ1 over 4 x
Seagate (ST31500341AS) 1.5 TB SATA
That's because NFS adds synchronous writes to the mix (e.g. the client needs to
know that certain transactions made it to nonvolatile storage in case the server
restarts, etc.). The simplest safe solution, although not cheap, is to add an SSD
log device to the pool.
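For example (a sketch; the pool name and device names are placeholders, and the actual device names depend on your controller):

```shell
# Add a single SSD as a separate intent log (slog) device:
zpool add tank log c4t2d0

# Or add a mirrored pair, so a single slog failure can't lose
# recently acknowledged synchronous writes:
zpool add tank log mirror c4t2d0 c4t3d0
```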
I agree, I get appalling NFS speeds compared to CIFS/Samba, i.e. CIFS/Samba of
95-105 MB/s and NFS of 5-20 MB/s.
Not to hijack the thread, but I assume an SSD ZIL will similarly improve an iSCSI
target... as I am getting 2-5 MB/s on that too.
--
This message posted from opensolaris.org
I see I have already received several replies, thanks to all!
I would not like to risk losing any data, so I believe a ZIL device would be
the way for me. I see
these exist at different prices. Any reason why I would not buy a cheap one?
Like the Intel X25-V
SSD 40GB 2.5"?
What size of ZIL
On 23/07/2010 10:53, Sigbjorn Lie wrote:
The X25-V has up to 25k random read IOPS and up to 2.5k random write IOPS, so that
would seem okay for approx $80. :)
What about mirroring? Do I need mirrored ZIL devices in case of a power outage?
Note there is not a ZIL device, there is a
Phil Harman wrote:
Milkowski and Neil Perrin's zil synchronicity [PSARC/2010/108] changes
with sync=disabled, when the changes work their way into an available
The fact that people run unsafe
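Once the PSARC/2010/108 changes are available, the per-dataset sync property replaces the old global zil_disable tunable. A sketch, with a hypothetical dataset name:

```shell
# WARNING: with sync=disabled, the server acknowledges synchronous
# writes (including NFS commits) before they reach stable storage.
zfs set sync=disabled tank/nfsshare

# Revert to honoring synchronous-write semantics:
zfs set sync=standard tank/nfsshare
```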
Sigbjorn Lie wrote:
What size of ZIL device would be recommended for my pool consisting of
Get the smallest one. Even an unrealistic high performance scenario cannot
come close to using 32G. I am
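A back-of-the-envelope sizing sketch supports this (assumed numbers: a saturated 10 GbE link at roughly 1200 MB/s, and a 5-second transaction group commit interval; your workload and defaults may differ):

```shell
# The slog only has to hold the sync writes accumulated between
# transaction group commits, so the worst case is roughly
# throughput x txg interval:
THROUGHPUT_MB_S=1200   # assumed: saturated 10 GbE link
TXG_INTERVAL_S=5       # assumed txg commit interval
echo "$((THROUGHPUT_MB_S * TXG_INTERVAL_S)) MB"   # 6000 MB, well under 32G
```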
Sigbjorn Lie wrote:
What about mirroring? Do I need mirrored ZIL devices in case of a power
outage?
You don't need mirroring for the sake of *power outage* but you *do* need
mirroring for the sake
Phil Harman wrote:
Not to hijack the thread, but I assume an SSD ZIL will similarly improve
an iSCSI target... as I am getting 2-5 MB/s on that too.
Yes, it generally will. I've seen some huge improvements with iSCSI,
but YMMV depending on your config, application and workload.
Sorry this isn't
Linder, Doug wrote:
On a related note - all other things being equal, is there any reason
to choose NFS over iSCSI, or vice-versa? I'm currently looking at this
iSCSI and NFS are completely
Fundamentally, my recommendation is to choose NFS if your clients can
use it. You'll get a lot of potential advantages in the NFS/ZFS
integration, and thus better performance. Plus you can serve multiple
clients, etc.
The only reason to use iSCSI is when you don't have a choice, IMO. You
should only
Tomas Ögren wrote:
| To get similar (lower) consistency guarantees, try disabling ZIL..
| google://zil_disable .. This should up the speed, but might cause disk
| corruption if the server crashes while a client is writing data.. (just
| like with UFS)
I also tested NFS performance ('zfs set sharenfs=on') with a Linux client.
After echo zil_disable/W0t1 | mdb -kw, small files over NFS sped up 10x.
For more about zil_disable, see Eric Kustarz's blog:
http://blogs.sun.com/erickustarz/entry/zil_disable
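For reference, the toggle and its inverse (a legacy global tunable: it affects every pool on the system, and sync-write guarantees are lost while it is set):

```shell
# Disable the ZIL system-wide (unsafe: synchronous writes are
# acknowledged before reaching stable storage):
echo 'zil_disable/W0t1' | mdb -kw

# Re-enable it:
echo 'zil_disable/W0t0' | mdb -kw
```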
Hello Darren,
DJM BTW there isn't really any such thing as disk corruption, there is
DJM data corruption :-)
Well, if you scratch it hard enough :)
--
Best regards,
Robert Milkowski mailto:[EMAIL PROTECTED]
Torrey McMahon [EMAIL PROTECTED] wrote:
http://www.philohome.com/hammerhead/broken-disk.jpg :-)
Be careful, things like this can result in device corruption!
Jörg
I realize that this topic has been fairly well beaten to death on this forum,
but I've also read numerous comments from ZFS developers that they'd like to
hear about significantly different performance numbers of ZFS vs UFS for
NFS-exported filesystems, so here's one more.
The server is an
OK, I have proposed it, so now I'm trying to implement it. :)
I hope you can (at least) criticize it. :))
The document is here: http://www.posix.brte.com.br/blog/?p=89
It is not complete; I'm still running some tests and analyzing the results. But
I think you can take a look and contribute with some
Hello all...
I think all of you agree that performance is a great topic in NFS.
So, when we talk about NFS and ZFS, we imagine a great combination/solution.
But one is not dependent on the other; they are actually two quite distinct
technologies. ZFS has a lot of features that we all know about, and