Trevor/all,
We've been timing copies of real data (1 GB of assorted files, mostly around
1 MB each, with a number of larger files mixed in) in an attempt to simulate
real-world use. We've been copying different sets of data around to try to
avoid anything being cached anywhere.
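For reference, a minimal sketch of the kind of timing run described above (paths here are placeholders; a real test would point the destination at the NFS mount and use fresh source data on each run to defeat caching):

```shell
# Hypothetical timing sketch: create a test file, time the copy, report MB/s.
SRC=/tmp/nfs_timing_src
DST=/tmp/nfs_timing_dst   # point this at the NFS mount in a real test

# create a 100 MB source file
dd if=/dev/zero of="$SRC" bs=1M count=100 2>/dev/null

# time the copy, forcing data out of the write cache before stopping the clock
START=$(date +%s)
cp "$SRC" "$DST"
sync
END=$(date +%s)
ELAPSED=$(( END - START ))
[ "$ELAPSED" -eq 0 ] && ELAPSED=1
echo "copied 100 MB in ${ELAPSED}s -> $(( 100 / ELAPSED )) MB/s"
```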
I don't recall the specific numbers, but local read/write throughput on the
x4500 was well above what a gig-e line can theoretically carry (~125 MB/s),
so I'm pretty convinced the problem is either with the ZFS+NFS combo or with
NFS itself, rather than with ZFS alone.
I'll do some OpenSolaris-to-OpenSolaris testing tonight and see what
happens.
Thanks for the replies, appreciate the help!
On Tue, Oct 20, 2009 at 1:43 PM, Trevor Pretty <trevor_pre...@eagle.co.nz> wrote:
Gary
Were you measuring the Linux NFS write performance? It's well known that
Linux can use NFS in a very unsafe mode and report a write as complete before
it is all the way to stable storage. This is often reported as Solaris having
slow NFS write performance. This link doesn't mention NFS v4, but you might
want to check: http://nfs.sourceforge.net/
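By way of illustration, and assuming the behavior in question is controlled by client mount options on your Linux boxes, a mount forcing safe synchronous semantics might look like this (server name and paths are hypothetical):

```shell
# Hypothetical: force safe/synchronous write semantics on a Linux NFS client.
# 'sync' makes the client wait for the server to commit each write, at a cost
# in apparent speed; comparing 'sync' against the default behavior can reveal
# whether fast Linux write numbers were really just measuring client caching.
mount -t nfs4 -o rw,hard,sync nfsserver:/export/data /mnt/data
```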
What's the write performance like between the two OpenSolaris systems?
Richard Elling wrote:
cross-posting to nfs-discuss
On Oct 20, 2009, at 10:35 AM, Gary Gogick wrote:
Heya all,
I'm working on testing ZFS with NFS, and I could use some guidance -
read speeds are a bit less than I expected.
Over a gig-e line, we're seeing ~30 MB/s reads on average - it doesn't
seem to matter whether we're doing large numbers of small files or small
numbers of large files; the speed tops out there. We've disabled
prefetching, which may be having some effect on read speeds, but that
proved necessary due to severe performance issues on database reads with
it enabled. (Reading from the DB with prefetching enabled was taking 4-5
times as long as with it disabled.)
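For what it's worth, the prefetch knob in question on OpenSolaris-derived systems is the kernel tunable below - a sketch, assuming your build takes it via /etc/system (a reboot is required for it to take effect):

```shell
# /etc/system fragment (OpenSolaris/Nexenta): disable ZFS file-level prefetch.
# We set this because prefetch badly hurt our database read pattern, but it
# may also be capping sequential NFS reads - worth re-testing reads with
# prefetch re-enabled (remove the line or set 0) on a non-database pool.
set zfs:zfs_prefetch_disable = 1
```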
What is the performance when reading locally (eliminate NFS from the
equation)?
-- richard
Write speed seems to be fine. Testing is showing ~95 MB/s, which
seems pretty decent considering there's been no real network tuning
done.
The NFS server we're testing is a Sun x4500, configured with a storage
pool consisting of 20 two-disk mirrors, using a separate SSD as the log
device. It's running the latest version of Nexenta Core. (We've also got
a second x4500 with a raidz2 config, running OpenSolaris proper, showing
the same issues with reads.)
We're using NFS v4 via TCP, serving various Linux clients (the
majority are CentOS 5.3). Connectivity is presently provided by a
single gigabit ethernet link; entirely conventional configuration
(no jumbo frames/etc).
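A sketch of the client-side mount being described (server name and export path are placeholders, and the rsize/wsize shown are a common starting point for tuning, not values we've validated):

```shell
# Hypothetical CentOS 5.3 client mount: NFSv4 over TCP, no jumbo frames.
# Raising rsize/wsize is a common first tuning step for sequential reads.
mount -t nfs4 -o rw,hard,proto=tcp,rsize=32768,wsize=32768 \
    nfsserver:/export/web /var/www/assets
```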
Our workload is pretty read-heavy; we're serving both website assets
and databases via NFS. The majority of files being served are small
(< 1 MB). The databases are MySQL/InnoDB, with the data in separate
ZFS filesystems with a record size of 16k. The website assets etc.
are in ZFS filesystems with the default record size. On the database
server side of things, we've disabled InnoDB's doublewrite buffer.
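Concretely, the database-side setup amounts to something like the following (pool and filesystem names are hypothetical):

```shell
# Hypothetical: 16k recordsize to match InnoDB's page-sized I/O,
# on a filesystem dedicated to the database data.
zfs create -o recordsize=16k tank/mysql

# my.cnf fragment: disable InnoDB's doublewrite buffer, on the reasoning
# that ZFS's copy-on-write semantics already prevent torn pages.
# [mysqld]
# skip-innodb_doublewrite
```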
I'm wondering if there's any other tuning that'd be a good idea for
ZFS in this situation, or if there's some NFS tuning that should be
done when dealing specifically with ZFS. Any advice would be
greatly appreciated.
Thanks,
--
Gary Gogick
senior systems administrator | workhabit,inc.
// email: g...@workhabit.com | web: http://www.workhabit.com
// office: 866-workhabit | fax: 919-552-9690
--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss