[zfs-discuss] zfs performance issue

2010-05-10 Thread Abhishek Gupta
Hi, I just installed OpenSolaris on my Dell Optiplex 755 and created a raidz2 pool from a few slices on a single disk. I was expecting good read/write performance, but I am only getting 12-15 MB/s. How can I improve the read/write performance of my raid? Thanks, Abhi.
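The root cause is hinted at in the question itself: a raidz2 built from slices of one disk still sends every I/O to the same spindle, so there is no parallelism (and no real redundancy) to gain. A hedged sketch of the difference, with hypothetical device names:

```shell
# raidz2 across slices of ONE disk: every read/write still queues on
# the same spindle, so neither speed nor redundancy improves.
#   zpool create tank raidz2 c0t0d0s3 c0t0d0s4 c0t0d0s5 c0t0d0s6

# raidz2 as intended: four or more SEPARATE physical disks.
# (Device names are hypothetical -- check `format` for yours.)
zpool create tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0
```

These commands require real disks and root privileges; they are shown only to illustrate the layout difference.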

Re: [zfs-discuss] zfs performance issue

2010-05-10 Thread Erik Trimble
Abhishek Gupta wrote: [quotes the original question about raidz2 on slices of a single disk] You ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-13 Thread William Fretts-Saxton
After working with Sanjeev, and putting a bunch of timing statements throughout the code, it turns out that file writes ARE NOT the bottleneck, as had been assumed. It is actually reading the file into a byte buffer that is the culprit. Specifically, this Java command: byteBuffer = ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-11 Thread William Fretts-Saxton
It does. The file size is limited to the original creation size, which is 65k for files with 1 data sample. Unfortunately, I have zero experience with dtrace and only a little with truss. I'm relying on the dtrace scripts from people on this thread to get by for now!

Re: [zfs-discuss] ZFS Performance Issue

2008-02-11 Thread johansen
Is deleting the old files/directories in the ZFS file system sufficient, or do I need to destroy/recreate the pool and/or file system itself? I've been doing the former. The former should be sufficient; it's not necessary to destroy the pool. -j

Re: [zfs-discuss] ZFS Performance Issue

2008-02-11 Thread William Fretts-Saxton
I ran this dtrace script and got no output. Any ideas? This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS Performance Issue

2008-02-10 Thread Robert Milkowski
Hello William, Thursday, February 7, 2008, 7:46:51 PM, you wrote: WFS> -Setting zfs_nocacheflush, though, got me drastically increased throughput--client requests took, on average, less than 2 seconds each! That's interesting - a bug in the SCSI driver for the v40z? -- Best regards, Robert

Re: [zfs-discuss] ZFS Performance Issue

2008-02-10 Thread Johan Hartzenberg
On Feb 5, 2008 9:52 PM, William Fretts-Saxton [EMAIL PROTECTED] wrote: [quotes the original post about four striped/mirrored internal drives and an app server writing hundreds of thousands of files] ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-09 Thread Henk Langeveld
William Fretts-Saxton wrote: Unfortunately, I don't know the record size of the writes. Is it as simple as looking at the size of a file, before and after a client request, and noting the difference in size? and: The I/O is actually done by RRD4J, [...] a Java version of 'rrdtool'. If it ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-08 Thread William Fretts-Saxton
We are going to get a 6120 for this temporarily. If all goes well, we are going to move to a 6140 SAN solution.

Re: [zfs-discuss] ZFS Performance Issue

2008-02-08 Thread William Fretts-Saxton
Hi Daniel. I take it you are an RRD4J user? I didn't see anything in the performance issues area that would help. Please let me know if I'm missing something: - The default of RRD4J is to use NIO backend, so that is already in place. - Pooling won't help because there is almost never a time

Re: [zfs-discuss] ZFS Performance Issue

2008-02-08 Thread William Fretts-Saxton
The other thing to keep in mind is that tunables like compression and recsize only affect newly written blocks. If you have a bunch of data that was already laid down on disk and then you change the tunable, only new blocks will have the new size. If you experiment with ...
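The quoted advice implies that existing files must be rewritten before a recordsize change has any effect. A minimal sketch, assuming a hypothetical dataset and file layout:

```shell
# Hypothetical dataset name; the new recordsize applies only to blocks
# written AFTER the property change.
zfs set recordsize=8K tank/rrd

# Existing files keep their old block size until rewritten; copying each
# file and moving it back forces the data to be laid out again.
for f in /tank/rrd/*.rrd; do
  cp "$f" "$f.tmp" && mv "$f.tmp" "$f"
done
```

This requires enough free space for the temporary copies and a quiesced application while files are being replaced.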

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
I just installed nv82 so we'll see how that goes. I'm going to try the recordsize idea above as well. A note about UFS: I was told by our local Admin guru that ZFS turns on write-caching for disks, which is something that a UFS file system should not have turned on, so that if I convert the

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
Unfortunately, I don't know the record size of the writes. Is it as simple as looking at the size of a file, before and after a client request, and noting the difference in size? This is binary data, so I don't know if that makes a difference, but the average write size is a lot smaller than ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
To avoid making multiple posts, I'll just write everything here: -Moving to nv_82 did not seem to do anything, so it doesn't look like fsync was the issue. -Disabling the ZIL didn't do anything either. -Still playing with 'recsize' values, but it doesn't seem to be doing much... I don't think I have a ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
Slight correction: 'recsize' must be a power of 2, so it would be 8192.
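The power-of-two constraint mentioned above can be checked mechanically with the classic bit trick (n & (n-1) is zero only for powers of two); a small shell sketch:

```shell
# A value is a valid ZFS recordsize only if it is a power of two
# (and within 512 bytes to 128K on builds of that era).
is_pow2() {
  n=$1
  [ "$n" -gt 0 ] && [ $(( n & (n - 1) )) -eq 0 ]
}

is_pow2 8192 && echo "8192 ok"        # prints "8192 ok"
is_pow2 8000 || echo "8000 rejected"  # prints "8000 rejected"
```

So 8000 would be rejected while 8192 (the nearest power of two) is accepted, matching the correction above.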

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
One thing I just observed is that the initial file size is 65796 bytes. When it gets an update, the file size remains at 65796. Is there a minimum file size?

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
RRD4J isn't a DB, per se, so it doesn't really have a record size. In fact, I don't even know whether, when data is written to the binary file, it is contiguous or not, so the amount written may not directly correlate to a proper record size. I did run your command and found the size patterns ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread Sanjeev Bagewadi
William, It should be fairly easy to find the record size using DTrace. Take an aggregation of the writes happening (aggregate on size for all the write(2) system calls). This would give a fair idea of the I/O size pattern. Does RRD4J have a record size mentioned? Usually if it is a ...
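Sanjeev's suggestion might look like the following one-liner (a sketch, not from the thread): arg2 of write(2) is the byte count, and filtering on execname "java" is an assumption about the app server's process name. It needs root on Solaris; Ctrl-C prints the aggregation.

```shell
# Histogram of write(2) sizes from the Java app server, bucketed by
# powers of two -- the I/O size pattern Sanjeev describes.
dtrace -n 'syscall::write:entry /execname == "java"/ { @sizes = quantize(arg2); }'
```

The resulting distribution shows whether writes cluster around one size, which would be the natural candidate for the dataset's recordsize.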

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread johansen
-Still playing with 'recsize' values but it doesn't seem to be doing much...I don't think I have a good understand of what exactly is being written...I think the whole file might be overwritten each time because it's in binary format. The other thing to keep in mind is that the tunables like

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread Vincent Fox
-Setting zfs_nocacheflush, though, got me drastically increased throughput--client requests took, on average, less than 2 seconds each! So, in order to use this, I should have a storage array, w/ battery backup, instead of using the internal drives, correct? I have the option of using a ...
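For reference, the tunable discussed above is set in /etc/system and takes effect after a reboot. It stops ZFS from issuing cache-flush commands, so it is only safe when the write cache is battery/NVRAM-backed, which is exactly the point of the question:

```shell
# /etc/system entry -- only safe with a battery-backed (NVRAM) write
# cache; on plain internal drives this risks data loss on power failure.
echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system
```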

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread Daniel Cheng
William Fretts-Saxton wrote: [quotes the question about determining the record size of the binary writes] ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread William Fretts-Saxton
I disabled file prefetch and there was no effect. Here are some performance numbers. Note that, when the application server used a ZFS file system to save its data, the transaction took TWICE as long. For some reason, though, iostat is showing 5x as much disk writing (to the physical disks)
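The 2x-latency-but-5x-writes discrepancy described above can be observed directly; a hedged sketch of the commands (the pool name is taken from later in the thread and may differ):

```shell
# Per-vdev I/O for the ZFS pool, refreshed every 5 seconds.
zpool iostat -v pool1 5

# Extended system-wide per-device statistics, for comparing the ZFS
# disks against the UFS disks side by side.
iostat -xn 5
```

Comparing the two views helps distinguish application-issued writes from write amplification inside ZFS (metadata, ZIL, copy-on-write).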

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Will Murnane
On Feb 6, 2008 6:36 PM, William Fretts-Saxton [EMAIL PROTECTED] wrote: [quotes the performance numbers: ZFS transactions took twice as long, yet iostat showed 5x the disk writes] ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread William Fretts-Saxton
It is a striped/mirror:

# zpool status
        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
          mirror ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Vincent Fox
Solaris 10u4, eh? Sounds a lot like the fsync issues we ran into trying to run Cyrus mail-server spools on ZFS. This was highlighted for us by the filebench varmail test. OpenSolaris nv78, however, worked very well.

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Marc Bevand
William Fretts-Saxton william.fretts.saxton at sun.com writes: [quotes the performance numbers comparing ZFS and UFS] ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Neil Perrin
Marc Bevand wrote: [quotes William's performance numbers comparing ZFS and UFS] For ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Marion Hakanson
[EMAIL PROTECTED] said: [quotes the performance numbers: transactions twice as slow on ZFS, yet 5x the disk writes] Can ...

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Marc Bevand
Neil Perrin Neil.Perrin at Sun.COM writes: The ZIL doesn't do a lot of extra IO. It usually just does one write per synchronous request and will batch up multiple writes into the same log block if possible. Ok. I was wrong then. Well, William, I think Marion Hakanson has the most plausible

[zfs-discuss] ZFS Performance Issue

2008-02-05 Thread William Fretts-Saxton
This may not be a ZFS issue, so please bear with me! I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files @ a time. If 1 client uses the app server, the transaction

Re: [zfs-discuss] ZFS Performance Issue

2008-02-05 Thread William Fretts-Saxton
Some more information about the system. NOTE: CPU utilization never goes above 10%. Sun Fire v40z, 4 x 2.4 GHz processors, 8 GB memory, 3 x 146 GB Seagate drives (10k RPM), 1 x 146 GB Fujitsu drive (10k RPM).

Re: [zfs-discuss] ZFS Performance Issue

2008-02-05 Thread Marc Bevand
William Fretts-Saxton william.fretts.saxton at sun.com writes: [quotes the v40z system specs] And what version of ...