Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-13 Thread Roch
Sašo Kiselkov writes: On 06/12/2012 05:37 PM, Roch Bourbonnais wrote: So the xcalls are a necessary part of memory reclaiming, when one needs to tear down the TLB entry mapping the physical memory (which can from here on be repurposed). So the xcalls are just part of this. Should

Re: [zfs-discuss] Scrub works in parallel?

2012-06-12 Thread Roch Bourbonnais
Scrubs are run at very low priority and yield very quickly in the presence of other work. So I really would not expect to see scrub create any impact on any other type of storage activity. Resilvering will more aggressively push forward on what it has to do, but resilvering does not need to

Re: [zfs-discuss] Scrub works in parallel?

2012-06-12 Thread Roch Bourbonnais
The process should be scalable. Scrub all of the data on one disk using one disk's worth of IOPS; scrub all of the data on N disks using N disks' worth of IOPS. That will take ~ the same total time. -r On 12 June 2012 at 08:28, Jim Klimov wrote: 2012-06-12 16:20, Roch Bourbonnais wrote
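
To make the scaling argument concrete, a back-of-the-envelope example (numbers invented for illustration):

  1 disk:  1 TB scrubbed at ~100 MB/s       -> ~10,000 s (~2.8 h)
  N disks: N TB scrubbed at N x ~100 MB/s   -> still ~10,000 s

Scrub time tracks per-disk capacity divided by per-disk throughput, so adding disks adds data and IOPS in the same proportion and the total time stays roughly constant.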

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Roch Bourbonnais
So the xcalls are a necessary part of memory reclaiming, when one needs to tear down the TLB entry mapping the physical memory (which can from here on be repurposed). So the xcalls are just part of this. Should not cause trouble, but they do. They consume a cpu for some time. That in turn can

Re: [zfs-discuss] Metadata (DDT) Cache Bias

2011-06-03 Thread Roch
Edward Ned Harvey writes: Based on observed behavior measuring performance of dedup, I would say, some chunk of data and its associated metadata seem to have approximately the same warmness in the cache. So when the data gets evicted, the associated metadata tends to be evicted too. So

Re: [zfs-discuss] Should Intel X25-E not be used with a SAS Expander?

2011-06-02 Thread Roch Bourbonnais
Josh, I don't know the internals of the device but I have heard reports of SSDs that would ignore flush-write-cache commands _and_ wouldn't have supercap protection (nor battery). Such devices are subject to data loss. Did you also catch this thread

Re: [zfs-discuss] Understanding directio, O_DSYNC and zfs_nocacheflush on ZFS

2011-02-07 Thread Roch
On 7 Feb. 2011 at 06:25, Richard Elling wrote: On Feb 5, 2011, at 8:10 AM, Yi Zhang wrote: Hi all, I'm trying to achieve the same effect of UFS directio on ZFS and here is what I did: Solaris UFS directio has three functions: 1. improved async code path 2. multiple

Re: [zfs-discuss] Understanding directio, O_DSYNC and zfs_nocacheflush on ZFS

2011-02-07 Thread Roch
On 7 Feb. 2011 at 17:08, Yi Zhang wrote: On Mon, Feb 7, 2011 at 10:26 AM, Roch roch.bourbonn...@oracle.com wrote: On 7 Feb. 2011 at 06:25, Richard Elling wrote: On Feb 5, 2011, at 8:10 AM, Yi Zhang wrote: Hi all, I'm trying to achieve the same effect of UFS directio on ZFS

Re: [zfs-discuss] ashift and vdevs

2010-12-01 Thread Roch
Brandon High writes: On Tue, Nov 23, 2010 at 9:55 AM, Krunal Desai mov...@gmail.com wrote: What is the upgrade path like from this? For example, currently I The ashift is set in the pool when it's created and will persist through the life of that pool. If you set it at pool creation,
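
For readers wanting to check what a pool got at creation time, one commonly used inspection is below (a sketch: zdb output details vary by build, and it reads the cached config of imported pools):

  # zdb | grep ashift
          ashift: 9

ashift 9 corresponds to 512-byte sectors, ashift 12 to 4K sectors.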

Re: [zfs-discuss] iScsi slow

2010-08-05 Thread Roch
Ross Walker writes: On Aug 4, 2010, at 12:04 PM, Roch roch.bourbonn...@sun.com wrote: Ross Walker writes: On Aug 4, 2010, at 9:20 AM, Roch roch.bourbonn...@sun.com wrote: Ross Asks: So on that note, ZFS should disable the disks' write cache, not enable them

Re: [zfs-discuss] iScsi slow

2010-08-05 Thread Roch Bourbonnais
On 5 Aug. 2010 at 19:49, Ross Walker wrote: On Aug 5, 2010, at 11:10 AM, Roch roch.bourbonn...@sun.com wrote: Ross Walker writes: On Aug 4, 2010, at 12:04 PM, Roch roch.bourbonn...@sun.com wrote: Ross Walker writes: On Aug 4, 2010, at 9:20 AM, Roch roch.bourbonn...@sun.com wrote

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Roch
Ross Walker writes: On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais roch.bourbonn...@sun.com wrote: On 27 May 2010 at 07:03, Brent Jones wrote: On Wed, May 26, 2010 at 5:08 AM, Matt Connolly matt.connolly...@gmail.com wrote: I've set up an iScsi volume

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Roch
, but got no answer: while an iSCSI target is presented WCE, does it respect the flush command? Yes. I would like to say obviously, but it's been anything but. -r Ross Walker writes: On Aug 4, 2010, at 3:52 AM, Roch roch.bourbonn...@sun.com wrote: Ross Walker writes: On Aug 3

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Roch
Ross Walker writes: On Aug 4, 2010, at 9:20 AM, Roch roch.bourbonn...@sun.com wrote: Ross Asks: So on that note, ZFS should disable the disks' write cache, not enable them despite ZFS's COW properties because it should be resilient. No, because ZFS builds

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Roch Bourbonnais
On 27 May 2010 at 07:03, Brent Jones wrote: On Wed, May 26, 2010 at 5:08 AM, Matt Connolly matt.connolly...@gmail.com wrote: I've set up an iScsi volume on OpenSolaris (snv_134) with these commands: sh-4.0# zfs create rpool/iscsi sh-4.0# zfs set shareiscsi=on rpool/iscsi sh-4.0# zfs

Re: [zfs-discuss] How does zil work

2010-07-27 Thread Roch
v writes: Hi, A basic question regarding how the zil works: For asynchronous writes, will the zil be used? For a synchronous write, if the io is small, will the whole io be placed on the zil, or just a pointer saved into the zil? What about large-size io? Let me try. ZIL: code and data structure

Re: [zfs-discuss] Disk space overhead (total volume size) by ZFS

2010-05-31 Thread Roch Bourbonnais
Can you post zpool status? Are your drives all the same size? -r On 30 May 2010 at 23:37, Sandon Van Ness wrote: I just wanted to make sure this is normal and is expected. I fully expected that as the file-system filled up I would see more disk space being used than with other

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Roch
Robert Milkowski writes: On 01/04/2010 20:58, Jeroen Roodhart wrote: I'm happy to see that it is now the default and I hope this will cause the Linux NFS client implementation to be faster for conforming NFS servers. Interesting thing is that apparently defaults on Solaris

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Roch
When we use one vmod, both machines are finished in about 6min45; zilstat maxes out at about 4200 IOPS. Using four vmods it takes about 6min55; zilstat maxes out at 2200 IOPS. Can you try 4 concurrent tars to four different ZFS filesystems (same pool)? -r
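
A minimal sketch of the suggested experiment (the pool name, dataset names, and tarball path are all hypothetical):

  # for i in 1 2 3 4; do zfs create tank/tar$i; done
  # for i in 1 2 3 4; do ( cd /tank/tar$i && tar xf /var/tmp/test.tar ) & done; wait

Presumably the point is that the ZIL is per-filesystem, so four filesystems give four independent log chains and let the four vmods be driven in parallel.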

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-14 Thread Roch
to the IMAP server (called skiplist), and some are small flat files that are just rewritten. All they have in common is activity and frequent locking. They can be relocated as a whole. The second one is from: http://blogs.sun.com/roch/entry/the_dynamics_of_zfs He

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Roch
. Can one do this with raid-dp? http://blogs.sun.com/roch/entry/need_inodes That said, I truly am for an evolution for random read workloads. Raid-Z on 4K sectors is quite appealing. It means that small objects become nearly mirrored with good random read performance while large objects

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Roch Bourbonnais
On 5 Jan. 10 at 17:49, Robert Milkowski wrote: On 05/01/2010 16:00, Roch wrote: That said, I truly am for an evolution for random read workloads. Raid-Z on 4K sectors is quite appealing. It means that small objects become nearly mirrored with good random read performance while large objects

Re: [zfs-discuss] ZFS write bursts cause short app stalls

2010-01-04 Thread Roch
Tim Cook writes: On Sun, Dec 27, 2009 at 6:43 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Sun, 27 Dec 2009, Tim Cook wrote: That is ONLY true when there's significant free space available/a fresh pool. Once those files have been deleted and the blocks put

Re: [zfs-discuss] ZFS write bursts cause short app stalls

2009-12-28 Thread Roch Bourbonnais
On 28 Dec. 09 at 00:59, Tim Cook wrote: On Sun, Dec 27, 2009 at 1:38 PM, Roch Bourbonnais roch.bourbonn...@sun.com wrote: On 26 Dec. 09 at 04:47, Tim Cook wrote: On Fri, Dec 25, 2009 at 11:57 AM, Saso Kiselkov skisel...@gmail.com wrote:

Re: [zfs-discuss] ZFS write bursts cause short app stalls

2009-12-27 Thread Roch Bourbonnais
On 26 Dec. 09 at 04:47, Tim Cook wrote: On Fri, Dec 25, 2009 at 11:57 AM, Saso Kiselkov skisel...@gmail.com wrote: I've started porting a video streaming application to opensolaris on ZFS, and am hitting some pretty weird performance

Re: [zfs-discuss] scrubing/resilvering - controller problem

2009-10-08 Thread Roch Bourbonnais
You might try setting zfs_scrub_limit to 1 or 2 and attach a customer service record to: 6494473 ZFS needs a way to slow down resilvering -r On 7 Oct. 09 at 06:14, John wrote: Hi, We are running b118, with an LSI 3801 controller which is connected to 44 drives (yes it's a
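
For reference, a tunable like zfs_scrub_limit was typically set in one of two ways on builds of that era (a sketch; verify the symbol exists on your build before writing to it, and note some tunables are only consulted at pool import or scrub start):

  # echo "set zfs:zfs_scrub_limit = 1" >> /etc/system    (persistent, needs a reboot)
  # echo zfs_scrub_limit/W0t1 | mdb -kw                  (live, lost at reboot)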

Re: [zfs-discuss] ZFS ARC vs Oracle cache

2009-09-29 Thread Roch Bourbonnais
On 28 Sept. 09 at 17:58, Glenn Fawcett wrote: Been there, done that, got the tee shirt. A larger SGA will *always* be more efficient at servicing Oracle requests for blocks. You avoid going through all the IO code of Oracle and it simply reduces to a hash. Sounds like good

Re: [zfs-discuss] Checksum property change does not change pre-existing data - right?

2009-09-24 Thread Roch
Bob Friesenhahn writes: On Wed, 23 Sep 2009, Ray Clark wrote: My understanding is that if I zfs set checksum=different to change the algorithm that this will change the checksum algorithm for all FUTURE data blocks written, but does not in any way change the checksum for

Re: [zfs-discuss] lots of zil_clean threads

2009-09-23 Thread Roch
I wonder if a taskq pool does not suffer from a similar effect observed for the nfsd pool: 6467988 Minimize the working set of nfsd threads. Threads created round-robin out of the taskq loop, doing little work but waking up at least once per 5 minutes, and so are never reaped. -r Nils

Re: [zfs-discuss] How to verify if the ZIL is disabled

2009-09-23 Thread Roch Bourbonnais
On 23 Sept. 09 at 19:07, Neil Perrin wrote: On 09/23/09 10:59, Scott Meilicke wrote: How can I verify if the ZIL has been disabled or not? I am trying to see how much benefit I might get by using an SSD as a ZIL. I disabled the ZIL via the ZFS Evil Tuning Guide: echo zil_disable/W0t1
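
The Evil Tuning Guide procedure under discussion, spelled out (a sketch for pre-2010 builds where the zil_disable variable still existed; it was later replaced by the per-dataset 'sync' property):

  # echo zil_disable/W0t1 | mdb -kw    (1 = ZIL disabled)
  # echo zil_disable/D | mdb -k        (print the current value to verify)

The value is only consulted when a filesystem is mounted, hence the usual advice to remount, or export and import the pool, afterwards.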

Re: [zfs-discuss] How to find poor performing disks

2009-09-04 Thread Roch
Scott Lawson writes: Also you may wish to look at the output of 'iostat -xnce 1' as well. You can post those to the list if you have a specific problem. You want to be looking for error counts increasing and specifically 'asvc_t' for the service times on the disks. A higher number

Re: [zfs-discuss] ARC limits not obeyed in OSol 2009.06

2009-09-04 Thread Roch
Do you have the zfs primarycache property on this release? If so, you could set it to 'metadata' or 'none'. primarycache=all | none | metadata Controls what is cached in the primary cache (ARC). If this property is set to all, then both user data and metadata
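
If the property is present on your build, the check and the change look like this (the dataset name is hypothetical):

  # zfs get primarycache tank/fs
  # zfs set primarycache=metadata tank/fs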

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Roch
100% random writes produce around 200 IOPS with a 4-6 second pause around every 10 seconds. This indicates that the bandwidth you're able to transfer through the protocol is about 50% greater than the bandwidth the pool can offer to ZFS. Since this is not sustainable, you
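
The arithmetic behind that ~50% figure, roughly: in each cycle the server accepts writes for ~10 s, then stalls for ~5 s while the pool drains, so

  inflow rate ≈ (10 s + 5 s) / 10 s = 1.5x what the pool can sink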

Re: [zfs-discuss] Change the volblocksize of a ZFS volume

2009-09-04 Thread Roch
stuart anderson writes: Question: Is there a way to change the volume blocksize, say via 'zfs snapshot send/receive'? As I see things, this isn't possible as the target volume (including property values) gets overwritten by 'zfs receive'.

Re: [zfs-discuss] Poor iSCSI performance [SEC=PERSONAL]

2009-09-02 Thread Roch Bourbonnais
Unlike NFS, which can issue both sync and async writes, iscsi needs to be serviced with synchronous semantics (unless write caching is enabled, caveat emptor). If the workload issuing the iscsi requests is single threaded, then performance is governed by I/O size over rotational
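
A worked example of "I/O size over rotational latency" for a single-threaded synchronous writer (illustrative 7200 RPM numbers, no slog and no write cache):

  7200 RPM -> 8.3 ms per revolution -> at most ~120 sync writes/s
  at 8 KB per write: 120 x 8 KB ≈ 1 MB/s, regardless of link speed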

Re: [zfs-discuss] surprisingly poor performance

2009-08-12 Thread Roch
roland writes: SSDs with capacitor-backed write caches seem to be fastest. how to distinguish them from SSDs without one? i never saw this explicitly mentioned in the specs. They probably don't have one then (or they should fire their entire marketing dept). Capacitors allow the

Re: [zfs-discuss] Would ZFS bring IO when the file is VERY short-lived?

2009-08-05 Thread Roch Bourbonnais
On 5 Aug. 09 at 06:06, Chookiex wrote: Hi All, You know, ZFS affords a very big buffer for write IO. So, when we write a file, the first stage is to put it in the buffer. But what if the file is VERY short-lived? Does it bring IO to disk? Or does it just put the metadata and data in memory, and then

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-08-04 Thread Roch
Bob Friesenhahn writes: On Wed, 29 Jul 2009, Jorgen Lundman wrote: For example, I know rsync and tar do not use fdsync (but dovecot does) on close(), but does NFS make it fdsync anyway? NFS is required to do synchronous writes. This is what allows NFS clients to

Re: [zfs-discuss] article on btrfs, comparison with zfs

2009-08-04 Thread Roch
C. Bergström writes: James C. McPherson wrote: An introduction to btrfs, from somebody who used to work on ZFS: http://www.osnews.com/story/21920/A_Short_History_of_btrfs *very* interesting article.. Not sure why James didn't directly link to it, but courtesy of Valerie

Re: [zfs-discuss] article on btrfs, comparison with zfs

2009-08-04 Thread Roch
Henk Langeveld writes: Mario Goebbels wrote: An introduction to btrfs, from somebody who used to work on ZFS: http://www.osnews.com/story/21920/A_Short_History_of_btrfs *very* interesting article.. Not sure why James didn't directly link to it, but courtesy of Valerie Aurora

Re: [zfs-discuss] Need tips on zfs pool setup..

2009-08-04 Thread Roch Bourbonnais
On 4 Aug. 09 at 13:42, Joseph L. Casale wrote: does anybody have some numbers on speed of sata vs 15k sas? The next chance I get, I will do a comparison. Is it really a big difference? I noticed a huge improvement when I moved a virtualized pool off a series of 7200 RPM SATA discs to

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-08-04 Thread Roch Bourbonnais
On 26 July 09 at 01:34, Toby Thain wrote: On 25-Jul-09, at 3:32 PM, Frank Middleton wrote: On 07/25/09 02:50 PM, David Magda wrote: Yes, it can be affected. If the snapshot's data structure / record is underneath the corrupted data in the tree then it won't be able to be reached.

Re: [zfs-discuss] ZFS zpool unavailable

2009-08-04 Thread Roch Bourbonnais
Try zpool import 2169223940234886392 [storage1] -r On 4 Aug. 09 at 15:11, David wrote: I seem to have run into an issue with a pool I have, and haven't found a resolution yet. The box is currently running FreeBSD 7-STABLE with ZFS v13; (Open)Solaris doesn't support my raid controller.

Re: [zfs-discuss] Need tips on zfs pool setup..

2009-08-04 Thread Roch
Tim Cook writes: On Tue, Aug 4, 2009 at 7:33 AM, Roch Bourbonnais roch.bourbonn...@sun.com wrote: On 4 Aug. 09 at 13:42, Joseph L. Casale wrote: does anybody have some numbers on speed of sata vs 15k sas? The next chance I get, I will do a comparison

Re: [zfs-discuss] surprisingly poor performance

2009-07-31 Thread Roch Bourbonnais
The things I'd pay most attention to would be all single-threaded 4K, 32K, and 128K writes to the raw device. Make sure the SSD has a capacitor and enable the write cache on the device. -r On 5 July 09 at 12:06, James Lever wrote: On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:

Re: [zfs-discuss] zfs IO scheduler

2009-07-22 Thread Roch
tester writes: Hello, Trying to understand the ZFS IO scheduler: because of its async nature it is not very apparent. Can someone give a short explanation for each of these stack traces and for their frequency? This is the command: dd if=/dev/zero of=/test/test1/trash

Re: [zfs-discuss] Speeding up resilver on x4500

2009-07-22 Thread Roch
Stuart Anderson writes: On Jun 21, 2009, at 10:21 PM, Nicholas Lee wrote: On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson ander...@ligo.caltech.edu wrote: However, it is a bit disconcerting to have to run with reduced data protection for an entire week. While I

Re: [zfs-discuss] zio_assess

2009-07-22 Thread Roch
zio_assess went away with SPA 3.0: 6754011 SPA 3.0: lock breakup, i/o pipeline refactoring, device failure handling. You now have: zio_vdev_io_assess(zio_t *zio). Yes, it's one of the last stages of the I/O pipeline (see zio_impl.h). -r tester writes: Hi, What does

Re: [zfs-discuss] [nfs-discuss] NFS, ZFS ESX

2009-07-08 Thread Roch
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide If you do, then be prepared to unmount or reboot all clients of the server in case of a crash, in order to clear their corrupted caches. This is in no way a ZIL problem nor a ZFS problem. http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine And most

Re: [zfs-discuss] zio_taskq_threads and TXG sync

2009-06-23 Thread Roch Bourbonnais
We're definitely working on problems contributing to such 'picket fencing'. But beware of equating symptoms with root-caused issues. We already know that picket fencing is multi-cause and we're tracking the ones we know about: there is something related to taskq cpu scheduling and something

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-19 Thread Roch Bourbonnais
On 18 June 09 at 20:23, Richard Elling wrote: Cor Beumer - Storage Solution Architect wrote: Hi Jose, Well it depends on the total size of your Zpool and how often these files are changed. ...and the average size of the files. For small files, it is likely that the default

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-17 Thread Roch Bourbonnais
On 16 June 09 at 19:55, Jose Martins wrote: Hello experts, IHAC that wants to put more than 250 million files on a single mountpoint (in a directory tree with no more than 100 files in each directory). He wants to share such a filesystem over NFS and mount it from many Linux Debian clients

Re: [zfs-discuss] Data loss bug - sidelined??

2009-05-01 Thread Roch Bourbonnais
, it realizes the disk has failed, and from then on enters those failmode conditions (wait, continue, panic, ?). Could this be the case? http://blogs.sun.com/roch/date/20080514 -- Brent Jones br...@servuhome.net

Re: [zfs-discuss] Copying thousands of small files on an expanded ZFS pool crawls to poor performance - not on other pools.

2009-03-24 Thread Roch
Hi Noel. zpool iostat -v for a working pool and for a problem pool would help to see the type of pool and its capacity. I assume the problem is not the source of the data. To read a large number of small files typically requires lots and lots of threads (say 100 per source disk). Is

Re: [zfs-discuss] Max size of log device?

2009-03-01 Thread Roch Bourbonnais
On 8 Feb. 09 at 13:12, Vincent Fox wrote: Thanks, I think I get it now. Do you think having the log on a 15K RPM drive with the main pool composed of 10K RPM drives will show worthwhile improvements? Or am I chasing a few percentage points? In cases where logzilla helps, then this

Re: [zfs-discuss] Max size of log device?

2009-03-01 Thread Roch Bourbonnais
On 8 Feb. 09 at 13:44, David Magda wrote: On Feb 8, 2009, at 16:12, Vincent Fox wrote: Do you think having the log on a 15K RPM drive with the main pool composed of 10K RPM drives will show worthwhile improvements? Or am I chasing a few percentage points? Another important question is

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Roch Bourbonnais
Sounds like the device is not ignoring the cache flush requests sent down by the ZFS/zil commit. If the SSD is able to drain its internal buffer to flash on a power outage, then it needs to ignore the cache flush. You can do this on a per-device basis. It's kludgy tuning, but hope the
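
The per-device kludge usually meant an sd-config-list entry of the kind described in the Evil Tuning Guide, something like the following in /kernel/drv/sd.conf (a sketch only: the vendor/product string below is made up, must exactly match the device's inquiry data, and the property-list form requires a build that supports it):

  sd-config-list = "INTL X25-E SSD", "cache-nonvolatile:true";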

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-26 Thread Roch
. Thanks for any pointers you may have... I think you found out from the replies that this NFS issue is not related to ZFS nor a ZIL malfunction in any way. http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine NFS (particularly lightly threaded load) is much sped up with any form

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-26 Thread Roch
Nicholas Lee writes: Another option to look at is: set zfs:zfs_nocacheflush=1 http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide Best option is to get a fast ZIL log device. Depends on your pool as well. NFS+ZFS means zfs will wait for write completes
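
Adding the fast dedicated log device mentioned as the best option is a one-liner (pool and device names hypothetical):

  # zpool add tank log c4t0d0

(or 'zpool add tank log mirror c4t0d0 c5t0d0' for a mirrored log).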

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-26 Thread Roch
Eric D. Mudama writes: On Mon, Jan 19 at 23:14, Greg Mason wrote: So, what we're looking for is a way to improve performance, without disabling the ZIL, as it's my understanding that disabling the ZIL isn't exactly a safe thing to do. We're looking for the best way to improve

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-26 Thread Roch
Eric D. Mudama writes: On Tue, Jan 20 at 21:35, Eric D. Mudama wrote: On Tue, Jan 20 at 9:04, Richard Elling wrote: Yes. And I think there are many more use cases which are not yet characterized. What we do know is that using an SSD for the separate ZIL log works very well for

Re: [zfs-discuss] ZFS size is different ?

2009-01-19 Thread Roch
Chookiex writes: Hi all, I have 2 questions about ZFS. 1. I have created a snapshot in my pool1/data1, and zfs send/recv'd it to pool2/data2, but I found the USED in zfs list is different: NAME USED AVAIL REFER MOUNTPOINT pool2/data2 160G 1.44T 159G

Re: [zfs-discuss] ZFS on partitions

2009-01-15 Thread Roch
Tim writes: On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson bfwil...@doit.wisc.edu wrote: Does creating ZFS pools on multiple partitions on the same physical drive still run into the performance and other issues that putting pools in slices does? Is zfs going to own

Re: [zfs-discuss] zfs iscsi sustained write performance

2009-01-14 Thread Roch
milosz writes: iperf test coming out fine, actually... iperf -s -w 64k iperf -c -w 64k -t 900 -i 5 [ ID] Interval Transfer Bandwidth [ 5] 0.0-899.9 sec 81.1 GBytes 774 Mbits/sec totally steady. I could probably implement some tweaks to improve it, but
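
Those numbers are self-consistent: 81.1 GiB ≈ 87.1 GB = 696.6 Gbit, and 696.6 Gbit / 900 s ≈ 774 Mbit/s, a steady ~77% of gigabit line rate, so the wire itself looks healthy.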

Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

2009-01-13 Thread Roch Bourbonnais
On 13 Jan. 09 at 21:49, Orvar Korvar wrote: Oh, thanx for your very informative answer. I've added a link to your information in this thread: But... Sorry, I wrote that wrong. I meant I will not recommend against HW raid + ZFS anymore instead of ... recommend against HW raid. The

Re: [zfs-discuss] SDXC and the future of ZFS

2009-01-13 Thread Roch Bourbonnais
On 12 Jan. 09 at 17:39, Carson Gaspar wrote: Joerg Schilling wrote: Fabian Wörner fabian.woer...@googlemail.com wrote: my post was not to start a gpl/cddl discussion. It's just an idea to promote ZFS and OPENSOLARIS. If it was against anything, it was against exfat, nothing else!!! If you

Re: [zfs-discuss] Odd network performance with ZFS/CIFS

2009-01-12 Thread Roch Bourbonnais
Try setting the cachemode property on the target filesystem. Also verify that the source can pump data through the net at the desired rate if the target is /dev/null. -r On 8 Jan. 09 at 18:46, gnomad wrote: I have just built an opensolaris box (2008.11) as a small fileserver (6x 1TB
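
One way to run the suggested /dev/null test (a sketch; host names and the file path are hypothetical, and nc flag syntax varies between implementations):

  target# nc -l 9000 > /dev/null
  source# dd if=/tank/bigfile bs=1M | nc target 9000

This takes the target's disks out of the picture and measures only the source and the network.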

Re: [zfs-discuss] zfs iscsi sustained write performance

2009-01-12 Thread Roch Bourbonnais
On 4 Jan. 09 at 21:09, milosz wrote: thanks for your responses, guys... the Nagle's tweak is the first thing I did, actually. not sure what the network limiting factors could be here... there's no switch, jumbo frames are on... maybe it's the e1000g driver? it's been wonky since

Re: [zfs-discuss] Unable to add cache device

2009-01-05 Thread Roch
Scott Laird writes: On Fri, Jan 2, 2009 at 8:54 PM, Richard Elling richard.ell...@sun.com wrote: Scott Laird wrote: On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai mritun+opensola...@gmail.com wrote: As for source, here you go :)

Re: [zfs-discuss] zfs create performance degrades dramatically with increasing number of file systems

2009-01-05 Thread Roch
Alastair Neil writes: I am attempting to create approx 10600 zfs file systems across two pools. The devices underlying the pools are mirrored iscsi volumes shared over a dedicated gigabit Ethernet with jumbo frames enabled (MTU 9000) from a Linux Openfiler 2.3 system. I have added a

Re: [zfs-discuss] How ZFS decides whether to write to the slog or directly to the POOL

2009-01-05 Thread Roch
Marcelo Leal writes: Hello all, Some days ago I was looking at the code and saw some variable that seems to make a correlation between the size of the data and whether the data is written to the slog or directly to the pool. But I did not find it anymore, and I think it is way more complex

Re: [zfs-discuss] Asymmetric zpool load

2009-01-05 Thread Roch
Any experts here to say if that's just because bonnie via NFSv3 is a very special test - if it is, I can start something else, suggestions? - or if some disks are really too busy and slowing down the pool. Here is my attempt: http://blogs.sun.com/roch/entry/decoding_bonnie -r

Re: [zfs-discuss] UFS over zvol major performance hit

2009-01-05 Thread Roch
Ahmed Kamal writes: Hi, I have been doing some basic performance tests, and I am getting a big hit when I run UFS over a zvol, instead of directly using zfs. Any hints or explanations are very welcome. Here's the scenario. The machine has 30G RAM, and two IDE disks attached. The disks

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2009-01-03 Thread Roch Bourbonnais
On 20 Dec. 08 at 22:34, Dmitry Razguliaev wrote: Hi, I am faced with a similar problem, like Ross, but still have not found a solution. I have a raidz out of 9 sata disks connected to internal and 2 external sata controllers. Bonnie++ gives me the following results: nexenta,8G,

Re: [zfs-discuss] What will happen when writing a block of 8k if the recordsize is 128k? Will 128k be written instead of 8k?

2009-01-02 Thread Roch Bourbonnais
Hi Qihua, there are many reasons why the recordsize does not govern the I/O size directly. Metadata I/O is one, ZFS I/O scheduler aggregation is another. The application behavior might be a third. Make sure to create the DB files after modifying the ZFS property. -r On 26 Dec. 08 at 11:49,
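
A sketch of that ordering for a database dataset (names and record size hypothetical; recordsize only affects files written after the property is set):

  # zfs create tank/db
  # zfs set recordsize=8k tank/db
  # ... now create/load the database files so they are laid out in 8K records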

Re: [zfs-discuss] Slow death-spiral with zfs gzip-9 compression

2008-12-01 Thread Roch
Tim writes: On Sat, Nov 29, 2008 at 11:06 AM, Ray Clark [EMAIL PROTECTED] wrote: Please help me understand what you mean. There is a big difference between being unacceptably slow and not working correctly, or between being unacceptably slow and having an implementation problem that

Re: [zfs-discuss] Setting per-file record size / querying fs/file record size?

2008-12-01 Thread Roch
Bill Sommerfeld writes: On Wed, 2008-10-22 at 10:30 +0100, Darren J Moffat wrote: I'm assuming this is local filesystem rather than ZFS backed NFS (which is what I have). Correct, on a laptop. What has setting the 32KB recordsize done for the rest of your home dir, or did

Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-12-01 Thread Roch Bourbonnais
On 15 Nov. 08 at 08:49, Nicholas Lee wrote: On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling [EMAIL PROTECTED] wrote: In short, separate logs with rotating rust may reduce sync write latency by perhaps 2-10x on an otherwise busy system. Using write-optimized SSDs will reduce sync

Re: [zfs-discuss] Tool to figure out optimum ZFS recordsize for a Mail server Maildir tree?

2008-11-27 Thread Roch Bourbonnais
On 22 Oct. 08 at 21:02, Bill Sommerfeld wrote: On Wed, 2008-10-22 at 09:46 -0700, Mika Borner wrote: If I turn zfs compression on, does the recordsize influence the compressratio in any way? zfs conceptually chops the data into recordsize chunks, then compresses each chunk

Re: [zfs-discuss] Improving zfs send performance

2008-11-12 Thread Roch
Thomas, for long-latency fat links, it should be quite beneficial to set the socket buffer on the receive side (instead of having users tune tcp_recv_hiwat). Throughput of a tcp connection is gated by receive socket buffer / round trip time. Could that be Ross' problem? -r Ross Smith
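
The rule of thumb in numbers:

  throughput <= socket buffer / RTT
  64 KB / 100 ms ≈ 640 KB/s (~5 Mbit/s)
  to fill 1 Gbit/s at 100 ms RTT you need ~12.5 MB of buffering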

Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-25 Thread Roch Bourbonnais
On 23 Oct. 08 at 05:40, Constantin Gonzalez wrote: Hi, Bob Friesenhahn wrote: On Wed, 22 Oct 2008, Neil Perrin wrote: On 10/22/08 10:26, Constantin Gonzalez wrote: 3. Disable ZIL[1]. This is of course evil, but one customer pointed out to me that if a tar xvf were writing locally

Re: [zfs-discuss] Terrible performance when setting zfs_arc_max snv_98

2008-10-19 Thread Roch Bourbonnais
On 2 Oct. 08 at 09:21, Christiaan Willemsen wrote: Hi there. I just got a new Adaptec RAID 51645 controller in because the old one (other type) was malfunctioning. It is paired with 16 Seagate 15k5 disks, of which two are used with hardware RAID 1 for OpenSolaris snv_98, and the rest

Re: [zfs-discuss] Tool to figure out optimum ZFS recordsize for a Mail server Maildir tree?

2008-10-17 Thread Roch Bourbonnais
Leave the default recordsize. With 128K recordsize, files smaller than 128K are stored as a single record tightly fitted to the smallest possible # of disk sectors. Reads and writes are then managed with fewer ops. Not tuning the recordsize is very generally more space efficient and more

Re: [zfs-discuss] about variable block size

2008-10-13 Thread Roch Bourbonnais
Files are stored as either a single record (adjusted to the size of the file) or a number of fixed-size records. -r On 25 Aug. 08 at 09:21, Robert wrote: Thanks for your response, from which I have learned more details. However, there is one thing I am still not clear on--maybe at first

Re: [zfs-discuss] Poor read/write performance when using ZFS iSCSI target

2008-08-18 Thread Roch - PAE
initiator_host:~ # dd if=/dev/zero bs=1k of=/dev/dsk/c5t0d0 count=100 So this is going at 3000 x 1K writes per second, or 330 usec per write. The iscsi target is probably doing an over-the-wire operation for each request. So it looks fine at first glance. -r Cody Campbell writes:
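
The arithmetic: 1 s / 3000 writes ≈ 333 usec per 1 KB write, on the order of one network round trip per request - consistent with the target doing a synchronous over-the-wire operation for each write.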

Re: [zfs-discuss] Why RAID 5 stops working in 2009

2008-08-18 Thread Roch - PAE
Kyle McDonald writes: Ross wrote: Just re-read that and it's badly phrased. What I meant to say is that a raid-z / raid-5 array based on 500GB drives seems to have around a 1 in 10 chance of losing some data during a full rebuild. Actually, I think it's been

Re: [zfs-discuss] zfs_nocacheflush

2008-07-31 Thread Roch - PAE
Peter Tribble writes: A question regarding zfs_nocacheflush: The Evil Tuning Guide says to only enable this if every device is protected by NVRAM. However, is it safe to enable zfs_nocacheflush when I also have local drives (the internal system drives) using ZFS, in particular if

Re: [zfs-discuss] Periodic flush

2008-07-01 Thread Roch - PAE
Robert Milkowski writes: Hello Roch, Saturday, June 28, 2008, 11:25:17 AM, you wrote: RB I suspect a single dd is cpu bound. I don't think so. We're nearly so, as you show. More below. See below one with a stripe of 48x disks again. Single dd with 1024k block size

Re: [zfs-discuss] Periodic flush

2008-06-28 Thread Roch Bourbonnais
On 28 June 08 at 05:14, Robert Milkowski wrote: Hello Mark, Tuesday, April 15, 2008, 8:32:32 PM, you wrote: MM The new write throttle code put back into build 87 attempts to MM smooth out the process. We now measure the amount of time it takes MM to sync each transaction group, and

Re: [zfs-discuss] Periodic flush

2008-05-15 Thread Roch - PAE
Bob Friesenhahn writes: On Tue, 15 Apr 2008, Mark Maybee wrote: going to take 12sec to get this data onto the disk. This impedance mis-match is going to manifest as pauses: the application fills the pipe, then waits for the pipe to empty, then starts writing again. Note that this

Re: [zfs-discuss] zfs device busy

2008-04-04 Thread Roch Bourbonnais
On 30 March 08 at 15:57, Kyle McDonald wrote: Fred Oliver wrote: Marion Hakanson wrote: [EMAIL PROTECTED] said: I am having trouble destroying a zfs file system (device busy) and fuser isn't telling me who has the file open: . . . This situation appears to occur every night during a

Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Roch Bourbonnais
On 28 Feb. 08 at 20:14, Jonathan Loran wrote: Quick question: If I create a ZFS mirrored pool, will the read performance get a boost? In other words, will the data/parity be read round-robin between the disks, or do both mirrored sets of data and parity get read off of both

Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Roch Bourbonnais
On 28 Feb. 08 at 21:00, Jonathan Loran wrote: Roch Bourbonnais wrote: On 28 Feb. 08 at 20:14, Jonathan Loran wrote: Quick question: If I create a ZFS mirrored pool, will the read performance get a boost? In other words, will the data/parity be read round-robin between

Re: [zfs-discuss] The old problem with tar, zfs, nfs and zil

2008-02-26 Thread Roch Bourbonnais
I would imagine that Linux behaves more like ZFS with cache flushing disabled (google Evil zfs_nocacheflush). If you can nfs tar extract files on Linux faster than one file per rotation latency, that is suspicious. -r On 26 Feb. 08 at 13:16, msl wrote: For Linux NFS service, it's a

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-18 Thread Roch - PAE
Bob Friesenhahn writes: On Fri, 15 Feb 2008, Roch Bourbonnais wrote: What was the interlace on the LUN? The question was about LUN interlace, not interface. 128K to 1M works better. The segment size is set to 128K. The max the 2540 allows is 512K. Unfortunately

Re: [zfs-discuss] Which DTrace provider to use

2008-02-15 Thread Roch Bourbonnais
On 14 Feb. 08 at 02:22, Marion Hakanson wrote: [EMAIL PROTECTED] said: It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP, Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty handily pull 120MB/sec from it, and write at over 100MB/sec. It falls

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Roch Bourbonnais
On 15 Feb. 08 at 03:34, Bob Friesenhahn wrote: On Thu, 14 Feb 2008, Tim wrote: If you're going for best single-file write performance, why are you doing mirrors of the LUNs? Perhaps I'm misunderstanding why you went from one giant raid-0 to what is essentially a raid-10. That

Re: [zfs-discuss] ZFS taking up to 80 seconds to flush a single 8KB O_SYNC block.

2008-02-15 Thread Roch Bourbonnais
On 10 Feb. 08 at 12:51, Robert Milkowski wrote: Hello Nathan, Thursday, February 7, 2008, 6:54:39 AM, you wrote: NK For kicks, I disabled the ZIL: zil_disable/W0t1, and that made not a NK pinch of difference. :) Have you exported and then imported the pool to get zil_disable into

Re: [zfs-discuss] ZFS write throttling

2008-02-15 Thread Roch Bourbonnais
On 15 Feb. 08 at 11:38, Philip Beevers wrote: Hi everyone, This is my first post to zfs-discuss, so be gentle with me :-) I've been doing some testing with ZFS - in particular, in checkpointing the large, proprietary in-memory database which is a key part of the application I work

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Roch Bourbonnais
On 15 Feb. 08 at 18:24, Bob Friesenhahn wrote: On Fri, 15 Feb 2008, Roch Bourbonnais wrote: As mentioned before, the write rate peaked at 200MB/second using RAID-0 across 12 disks exported as one big LUN. What was the interlace on the LUN? The question was about LUN interlace
