Re: [zfs-discuss] [dtrace-discuss] dtrace nfs requests on a zfs filesystem
On Wed, Jul 20, 2011 at 7:10 AM, wessels wessels...@gmail.com wrote:
> I'm issuing the following statement on an ONNV_104 (I know, a bit old, but very stable) NFS server:
>
> # dtrace -n 'nfsv3:::op-read-start,nfsv3:::op-write-start {@[probefunc,args[1]->noi_curpath]=count(); }'
>
> This works fine most of the time, but not always. Usually it resolves the filenames on which the I/O is done, but sometimes it displays "unknown" as the filename.

There's this: http://mail.opensolaris.org/pipermail/dtrace-discuss/2010-February/008527.html

-- Dave

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
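The thread linked above explains that the path is sometimes simply not cached, in which case the translator can't report a filename. A hedged variant of the one-liner that skips those requests (a sketch only; the exact "<unknown>" string is an assumption about what the nfsv3 translator returns on this build):

```shell
# Count NFSv3 read/write ops per resolved file path, skipping
# requests whose path the kernel cannot resolve.
dtrace -n '
  nfsv3:::op-read-start, nfsv3:::op-write-start
  /args[1]->noi_curpath != "<unknown>"/
  {
      @[probefunc, args[1]->noi_curpath] = count();
  }'
```

This runs on the NFS server itself; the aggregation is printed when the script exits.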
Re: [zfs-discuss] Incremental backup via zfs send / zfs receive
Frank Middleton wrote:
> The problem with a regular stream is that most of the file system properties (such as mountpoint) are not copied, as they are with a recursive stream. This may seem an advantage to some (e.g., if the remote mountpoint is already in use, the mountpoint seems to default to legacy). However, did I miss anything in the documentation, or would it be worth submitting an RFE for an option to send/recv properties in a non-recursive stream?

This is 6839260 "want zfs send with properties".

-- Dave

-- 
David Pacheco, Sun Microsystems Fishworks. http://blogs.sun.com/dap/
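For reference, the distinction looks like this on the command line. A minimal sketch only; the pool names `tank` and `otherpool`, the dataset `data`, and the host `backuphost` are hypothetical:

```shell
# Non-recursive stream: dataset properties such as mountpoint are not
# included, so they fall back to defaults (or legacy) on the receiver.
zfs send tank/data@snap1 | ssh backuphost zfs receive otherpool/data

# Recursive replication stream: -R preserves properties, snapshots,
# and descendent datasets.
zfs send -R tank/data@snap1 | ssh backuphost zfs receive -d otherpool
```

The RFE mentioned above (6839260) was eventually delivered as `zfs send -p`, which sends properties without requiring a recursive stream.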
Re: [zfs-discuss] Is the PROPERTY compression will increase the ZFS I/O throughput?
Chookiex wrote:
> Thank you ;) I mean: would reading compressed data be faster, if writing with compression is faster than writing uncompressed? Just like lzjb.

Do you mean that it would be faster to read compressed data than uncompressed data, or that it would be faster to read compressed data than to write it?

> But I can't understand why read performance is generally unaffected by compression. Decompression (lzjb, gzip) is algorithmically faster than compression, so reading compressed data should need even less CPU time. So I don't agree with the blog's conclusion that read performance is generally unaffected by compression, unless the ARC cached the data in the read test and there was no random-read test?

My comment was just an empirical observation: in my experiments, read time was basically unaffected. I don't believe this was a result of ARC caching, because I constructed the experiments to avoid that altogether by using working sets larger than the ARC and streaming through the data.

In my case the system's read bandwidth wasn't a performance limiter. We know this because the write bandwidth was much higher (see the graphs), and we were writing twice as much data as we were reading (because we were mirroring). So even if compression was decreasing the amount of I/O done on the read side, other factors (possibly the number of clients) limited the bandwidth we could achieve before we got to a point where compression would have made any difference.

-- Dave

> My data set is text, about 320,000 text files and emails. The compression ratios are: lzjb 1.55x, gzip-1 2.54x, gzip-2 2.58x, gzip 2.72x, gzip-9 2.73x, for your curiosity :)
>
> *From:* David Pacheco david.pach...@sun.com
> *To:* Chookiex hexcoo...@yahoo.com
> *Cc:* zfs-discuss@opensolaris.org
> *Sent:* Thursday, June 25, 2009 2:00:49 AM
> *Subject:* Re: [zfs-discuss] Is the PROPERTY compression will increase the ZFS I/O throughput?
>
>> Chookiex wrote:
>>> Thank you for your reply. I had read the blog. The most interesting thing is: WHY is there no performance improvement when any compression is set?
>>
>> There are many potential reasons, so I'd first try to identify what your current bandwidth limiter is. If you're running out of CPU on your current workload, for example, adding compression is not going to help performance. If this is over a network, you could be saturating the link. Or you might not have enough threads to drive the system to bandwidth. Compression will only help performance if you've got plenty of CPU and other resources but you're out of disk bandwidth. But even if that's the case, it's possible that compression doesn't save enough space that you actually decrease the number of disk I/Os that need to be done.
>>
>>> The compressed read I/O is less than for uncompressed data, and decompression is faster than compression.
>>
>> Out of curiosity, what's the compression ratio?
>>
>> -- Dave
>
> So if an lzjb write is better than non-compressed, would an lzjb read be better than a write? Do the ARC or L2ARC do any tricks? Thanks
>
> *From:* David Pacheco david.pach...@sun.com
> *To:* Chookiex hexcoo...@yahoo.com
> *Cc:* zfs-discuss@opensolaris.org
> *Sent:* Wednesday, June 24, 2009 4:53:37 AM
> *Subject:* Re: [zfs-discuss] Is the PROPERTY compression will increase the ZFS I/O throughput?
>
>> Chookiex wrote:
>>> Hi all. Because the compression property can decrease the file size, file I/O will be decreased as well. So, would compression increase ZFS I/O throughput? For example: I turn on gzip-9 on a server with 2x 4-core Xeons and 8 GB RAM, and it compresses my files with a compressratio of 2.5x+. Could it be? Or I turn on lzjb, about 1.5x with the same files.
>>
>> It's possible, but it depends on a lot of factors, including what your bottleneck is to begin with, how compressible your data is, and how hard you want the system to work compressing it. With gzip-9, I'd be shocked if you saw bandwidth improved. It seems more common with lzjb: http://blogs.sun.com/dap/entry/zfs_compression (skip down to the results)
>>
>> -- Dave
>
>>> Could it be? Does anyone have an idea? Thanks

-- 
David Pacheco, Sun Microsystems Fishworks. http://blogs.sun.com/dap/
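The point that decompression is algorithmically cheaper than compression is easy to demonstrate outside ZFS. A rough sketch using command-line gzip as a stand-in (it exercises the same zlib codec as ZFS's gzip-9 property, though not lzjb; file paths under /tmp are arbitrary):

```shell
# Build a semi-compressible sample file, then time gzip -9 against
# gunzip on the same data. Decompression should finish well ahead
# of compression.
set -e
seq 1 200000 | awk '{printf "log entry %06d status=%d\n", $1, $1 % 7}' > /tmp/sample.txt
orig=$(wc -c < /tmp/sample.txt)

t0=$(date +%s%N)
gzip -9 -c /tmp/sample.txt > /tmp/sample.gz
t1=$(date +%s%N)
comp=$(wc -c < /tmp/sample.gz)

t2=$(date +%s%N)
gzip -d -c /tmp/sample.gz > /tmp/sample.out
t3=$(date +%s%N)

cmp /tmp/sample.txt /tmp/sample.out        # round trip must be lossless
echo "original: $orig bytes, compressed: $comp bytes"
echo "compress: $(( (t1-t0)/1000000 )) ms, decompress: $(( (t3-t2)/1000000 )) ms"
```

Note this shows only the CPU cost; as the discussion above says, whether cheaper reads translate into higher throughput depends entirely on where the bottleneck is.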
Re: [zfs-discuss] `zfs list` doesn't show my snapshot
Pawel Tecza wrote:
> But I still don't understand why `zfs list` doesn't display snapshots by default. I have seen it do so on the Net many times in examples of zfs usage.

This was PSARC/2008/469 - excluding snapshot info from 'zfs list'
http://opensolaris.org/os/community/on/flag-days/pages/2008091003/

-- Dave

-- 
David Pacheco, Sun Microsystems Fishworks. http://blogs.sun.com/dap/
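Since that change, snapshot listing has to be requested explicitly. A quick sketch of the options (the pool name `tank` is hypothetical):

```shell
# List snapshots explicitly (no longer shown by default):
zfs list -t snapshot

# List filesystems, volumes, and snapshots together:
zfs list -t all

# Or restore the old default per pool via the listsnapshots property:
zpool set listsnapshots=on tank
```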
Re: [zfs-discuss] continuous replication
Brent Jones wrote:
> *snip*
> a 'zfs send' on the sending host monitors the pool/filesystem for changes, and immediately sends them to the receiving host, which applies the changes to the remote pool.

This is asynchronous, and isn't really different from running zfs send/recv in a loop. Whether the loop is in userland or in the kernel, either way you're continuously pushing changes across the wire.

> presumably, if fishworks is based on (Open)Solaris, any new ZFS features they created will make it back into Solaris proper eventually...

Replication in the 7000 series is mostly built _on top of_ the existing ZFS infrastructure.

> Sun advertises Active/Active replication on the 7000; how is this possible? Can send/receive operate bi-directionally, so changes on either side are reflected on both? I always visualized send/receive only being beneficial in Active/Passive situations, where you must only perform operations on the primary and, should failover occur, you switch to the secondary.

I think you're confusing our clustering feature with the remote replication feature. With active-active clustering, you have two closely linked head nodes serving files from different zpools using JBODs connected to both head nodes. When one fails, the other imports the failed node's pool and can then serve those files. With remote replication, one appliance sends filesystems and volumes across the network to an otherwise separate appliance. Neither of these performs synchronous data replication, though.

For more on clustering, I'll refer you to Keith's blog: http://blogs.sun.com/wesolows/entry/low_availability_clusters

-- Dave

-- 
David Pacheco, Sun Microsystems Fishworks. http://blogs.sun.com/dap/
Re: [zfs-discuss] scrub performance
Stuart Anderson wrote:
> On Thu, Mar 06, 2008 at 11:51:00AM -0800, Stuart Anderson wrote:
>> I currently have an X4500 running S10U4 with the latest ZFS uber patch (127729-07), for which zpool scrub is making very slow progress even though the necessary resources are apparently available. Currently it has
>
> It is also interesting to note that this system is now making negative progress. I can understand the remaining-time estimate going up with time, but what does it mean for the %-complete number to go down after 6 hours of work? Thanks.
>
> # zpool status | egrep -e 'progress|errors' ; date
> scrub: scrub in progress, 75.49% done, 28h51m to go
> errors: No known data errors
> Thu Mar 6 08:50:59 PST 2008
>
> # zpool status | egrep -e 'progress|errors' ; date
> scrub: scrub in progress, 75.24% done, 31h20m to go
> errors: No known data errors
> Thu Mar 6 15:15:39 PST 2008

There are a few things which may cause the scrub to restart. See:

6655927 zpool status causes a resilver or scrub to restart
6343667 scrub/resilver has to start over when a snapshot is taken

Sorry the latter doesn't have a useful description, but the synopsis says it all: taking snapshots causes scrubs to restart. Either of these may explain the negative progress.

-- David Pacheco, Sun Microsystems