Re: [zfs-discuss] how to replace failed vdev on non redundant pool?

2010-10-15 Thread Scott Meilicke

Re: [zfs-discuss] Optimal raidz3 configuration

2010-10-13 Thread Scott Meilicke

Re: [zfs-discuss] Bursty writes - why?

2010-10-12 Thread Scott Meilicke
, while writes to the ZIL/SLOG will be more random (in order to commit quickly)? Scott Meilicke

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Scott Meilicke
. At some point you have to rely on your backups for the unexpected and unforeseen. Make sure they are good! Michael, nice reliability write up! -- Scott Meilicke

Re: [zfs-discuss] [RFC] Backup solution

2010-10-07 Thread Scott Meilicke

Re: [zfs-discuss] Finding corrupted files

2010-10-06 Thread Scott Meilicke

Re: [zfs-discuss] When is it okay to turn off the verify option.

2010-10-04 Thread Scott Meilicke
, before it got corrupted by your system, happens to be saved somewhere else before it reached your system.

Re: [zfs-discuss] Is there any way to stop a resilver?

2010-09-29 Thread Scott Meilicke
Has it been running long? Initially the numbers are way off. After a while it settles down into something reasonable. How many disks, and what size, are in your raidz2? -Scott On 9/29/10 8:36 AM, LIC mesh licm...@gmail.com wrote: Is there any way to stop a resilver? We gotta stop this

Re: [zfs-discuss] Is there any way to stop a resilver?

2010-09-29 Thread Scott Meilicke
in) and restarts. Never gets past 0.00% completion, and K resilvered on any LUN. 64 LUNs, 32x5.44T, 32x10.88T in 8 vdevs. On Wed, Sep 29, 2010 at 11:40 AM, Scott Meilicke scott.meili...@craneaerospace.com wrote: Has it been running long? Initially the numbers are way off. After

Re: [zfs-discuss] Fwd: Is there any way to stop a resilver?

2010-09-29 Thread Scott Meilicke
, Scott Meilicke scott.meili...@craneaerospace.com wrote: OK, let me see if I have this right: 8 shelves, 1T disks, 24 disks per shelf = 192 disks 8 shelves, 2T disks, 24 disks per shelf = 192 disks Each raidz is six disks. 64 raidz vdevs Each iSCSI target is made up of 8 of these raidz vdevs (8

[zfs-discuss] Resliver making the system unresponsive

2010-09-29 Thread Scott Meilicke
This must be resliver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still reslivering, but for some reason I can access the file system now. Resliver speed has been beaten to death I

Re: [zfs-discuss] Resliver making the system unresponsive

2010-09-29 Thread Scott Meilicke
I should add I have 477 snapshots across all file systems. Most of them are hourly snaps (225 of them anyway). On Sep 29, 2010, at 3:16 PM, Scott Meilicke wrote: This must be resliver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS

Re: [zfs-discuss] When Zpool has no space left and no snapshots

2010-09-28 Thread Scott Meilicke

Re: [zfs-discuss] Kernel panic on ZFS import - how do I recover?

2010-09-27 Thread Scott Meilicke
I just realized that the email I sent to David and the list did not make the list (at least as jive can see it), so here is what I sent on the 23rd: Brilliant. I set those parameters via /etc/system, rebooted, and the pool imported with just the -f switch. I had seen this as an option earlier,
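
The message does not show which parameters were set; a commonly cited /etc/system pair for getting a damaged pool past an import panic in that era was the following (an assumption for illustration, not confirmed by this thread; remove the lines once the pool is repaired):
set zfs:zfs_recover = 1
set aok = 1
After a reboot, the import is then forced with 'zpool import -f poolname'.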

[zfs-discuss] My filesystem turned from a directory into a special character device

2010-09-27 Thread Scott Meilicke
I am running nexenta CE 3.0.3. I have a file system that at some point in the last week went from a directory per 'ls -l' to a special character device. This results in not being able to get into the file system. Here is my file system, scott2, along with a new file system I just created,

Re: [zfs-discuss] My filesystem turned from a directory into a special character device

2010-09-27 Thread Scott Meilicke
On 9/27/10 9:56 AM, Victor Latushkin victor.latush...@oracle.com wrote: On Sep 27, 2010, at 8:30 PM, Scott Meilicke wrote: I am running nexenta CE 3.0.3. I have a file system that at some point in the last week went from a directory per 'ls -l' to a special character device

Re: [zfs-discuss] Dedup relationship between pool and filesystem

2010-09-25 Thread Scott Meilicke
When I do the calculations, assuming 300 bytes per block to be conservative, with 128K blocks, I get 2.34G of cache (RAM, L2ARC) per terabyte of deduped data. But block size is dynamic, so you will need more than this. Scott
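
Roughly, the arithmetic behind that figure, assuming a uniform 128K block size and ~300 bytes of dedupe table entry per unique block:
1 TB / 128 KB per block = 8,388,608 blocks
8,388,608 blocks x 300 bytes = ~2.5 GB (2.34 GiB) of dedupe table per TB of unique data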

Re: [zfs-discuss] Data transfer taking a longer time than expected (Possibly dedup related)

2010-09-24 Thread Scott Meilicke
Can I disable dedup on the dataset while the transfer is going on? Yes. Only the blocks copied after disabling dedupe will not be deduped. The stuff you have already copied will be deduped. Can I simply Ctrl-C the process to stop it? Yes, you can do that to a mv process. Maybe stop the

Re: [zfs-discuss] Dedup relationship between pool and filesystem

2010-09-23 Thread Scott Meilicke
Hi Peter, dedupe is pool wide. File systems can opt in or out of dedupe. So if multiple file systems are set to dedupe, then they all benefit from using the same pool of deduped blocks. In this way, if two files share some of the same blocks, even if they are in different file systems, they

Re: [zfs-discuss] Configuration questions for Home File Server (CPU cores, dedup, checksum)?

2010-09-07 Thread Scott Meilicke
Craig, 3. I do not think you will get much dedupe on video, music and photos. I would not bother. If you really wanted to know at some later stage, you could create a new file system, enable dedupe, and copy your data (or a subset) into it just to see. In my experience there is a significant
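
A minimal sketch of that test (pool and dataset names are made up for illustration):
zfs create tank/dedupe-test
zfs set dedup=on tank/dedupe-test
cp -rp /tank/media/sample /tank/dedupe-test/
zpool get dedupratio tank    # pool-wide dedupe ratio after the copy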

Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-16 Thread Scott Meilicke
I had already begun the process of migrating my build 134 boxes over to Nexenta before Oracle's cunning plans became known. This just reaffirms my decision. Us too. :)

Re: [zfs-discuss] snapshot space - miscalculation?

2010-08-04 Thread Scott Meilicke
Are there other file systems underneath daten/backups that have snapshots?
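
One way to check, reusing the dataset name from the original post:
zfs list -r -t snapshot -o name,used daten/backups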

Re: [zfs-discuss] slog/L2ARC on a hard drive and not SSD?

2010-07-21 Thread Scott Meilicke
Another data point - I used three 15K disks striped using my RAID controller as a slog for the zil, and performance went down. I had three raidz SATA vdevs holding the data, and my load was VMs, i.e. a fair amount of small, random IO (60% random, 50% write, ~16k in size). Scott

Re: [zfs-discuss] Deleting large amounts of files

2010-07-19 Thread Scott Meilicke
If these files are deduped, and there is not a lot of RAM on the machine, it can take a long, long time to work through the dedupe portion. I don't know enough to know if that is what you are experiencing, but it could be the problem. How much RAM do you have? Scott

Re: [zfs-discuss] COMSTAR iSCSI and two Windows computers

2010-06-23 Thread Scott Meilicke
Look again at how XenServer does storage. I think you will find it already has a solution, both for iSCSI and NFS.

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Scott Meilicke
Reaching into the dusty regions of my brain, I seem to recall that because RAIDZ does not work like a traditional RAID 5, particularly because of its variably sized stripes, the data may not hit all of the disks, but they will always be redundant. I apologize for not having a reference for this

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-15 Thread Scott Meilicke
Price? I cannot find it.

Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread Scott Meilicke
You might bring over all of your old data and snaps, then clone that into a new volume. Bring your recent stuff into the clone. Since the clone only updates blocks that are different than the underlying snap, you may see a significant storage savings. Two clones could even be made - one for
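
A rough sketch of that approach; pool and dataset names are hypothetical:
zfs send -R oldpool/data@latest | zfs recv newpool/archive   # old data plus its snapshots
zfs clone newpool/archive@latest newpool/data                # the clone shares blocks with the snapshot
rsync -a /recent/stuff/ /newpool/data/                       # bring the recent material into the clone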

Re: [zfs-discuss] iScsi slow

2010-05-26 Thread Scott Meilicke
iSCSI writes require a sync to disk for every write. SMB writes get cached in memory and are therefore much faster. I am not sure why it is so slow for reads. Have you tried COMSTAR iSCSI? I have read in these forums that it is faster. -Scott

Re: [zfs-discuss] iSCSI confusion

2010-05-24 Thread Scott Meilicke
VMware will properly handle sharing a single iSCSI volume across multiple ESX hosts. We have six ESX hosts sharing the same iSCSI volumes - no problems. -Scott

Re: [zfs-discuss] ZFS for ISCSI ntfs backing store.

2010-04-23 Thread Scott Meilicke
At the time we had it set up as 3 x 5 disk raidz, plus a hot spare. These 16 disks were in a SAS cabinet, and the slog was on the server itself. We are now running 2 x 7 raidz2 plus a hot spare and slog, all inside the cabinet. Since the disks are 1.5T, I was concerned about resliver times
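
For reference, a layout like that would be built along these lines (device names are hypothetical):
zpool create data \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
  raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
  spare c1t14d0 log c1t15d0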

Re: [zfs-discuss] Benchmarking Methodologies

2010-04-23 Thread Scott Meilicke
My use case for opensolaris is as a storage server for a VM environment (we also use EqualLogic, and soon an EMC CX4-120). To that end, I use iometer within a VM, simulating my VM IO activity, with some balance given to easy benchmarking. We have about 110 VMs across eight ESX hosts. Here is

Re: [zfs-discuss] ZFS for ISCSI ntfs backing store.

2010-04-16 Thread Scott Meilicke
I have used build 124 in this capacity, although I did zero tuning. I had about 4T of data on a single 5T iSCSI volume over gigabit. The windows server was a VM, and the opensolaris box is on a Dell 2950, 16G of RAM, x25e for the zil, no l2arc cache device. I used comstar. It was being used

Re: [zfs-discuss] Rethinking my zpool

2010-03-19 Thread Scott Meilicke
You will get much better random IO with mirrors, and better reliability when a disk fails with raidz2. Six sets of mirrors are fine for a pool. From what I have read, a hot spare can be shared across pools. I think the correct term would be load balanced mirrors, vs RAID 10. What kind of

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-19 Thread Scott Meilicke
One of the reasons I am investigating solaris for this is sparse volumes and dedupe could really help here. Currently we use direct attached storage on the dom0s and allocate an LVM to the domU on creation. Just like your example above, we have lots of those 80G to start with please

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-18 Thread Scott Meilicke
It is hard, as you note, to recommend a box without knowing the load. How many linux boxes are you talking about? I think having a lot of space for your L2ARC is a great idea. Will you mirror your SLOG, or load balance them? I ask because perhaps one will be enough, IO wise. My box has one

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-18 Thread Scott Meilicke
I was planning to mirror them - mainly in the hope that I could hot swap a new one in the event that an existing one started to degrade. I suppose I could start with one of each and convert to a mirror later although the prospect of losing either disk fills me with dread. You do not need to

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread Scott Meilicke
Apple users have different expectations regarding data loss than Solaris and Linux users do. Come on, no Apple user bashing. Not true, not fair. Scott

Re: [zfs-discuss] Can we get some documentation on iSCSI sharing after comstar took over?

2010-03-16 Thread Scott Meilicke
This is what I used: http://wikis.sun.com/display/OpenSolarisInfo200906/How+to+Configure+iSCSI+Target+Ports I distilled that to: disable the old, enable the new (comstar) * sudo svcadm disable iscsitgt * sudo svcadm enable stmf Then four steps (using my zfs/zpool info - substitute for yours):
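
The four steps are cut off above; a typical COMSTAR sequence (not necessarily the exact commands from this message, and names/sizes are hypothetical) looks like:
zfs create -V 100g data01/san/lun0                 # zvol to export
sbdadm create-lu /dev/zvol/rdsk/data01/san/lun0    # prints the new LU's GUID
stmfadm add-view <GUID-from-create-lu>             # expose the LU (here: to all hosts)
itadm create-target                                # create an iSCSI target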

Re: [zfs-discuss] backup zpool to tape

2010-03-15 Thread Scott Meilicke
Greg, I am using NetBackup 6.5.3.1 (7.x is out) with fine results. Nice and fast. -Scott

Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What a

2010-03-04 Thread Scott Meilicke
To be clear, you can do what you want with the following items (besides your server):
(1) OpenSolaris LiveCD
(1) 8GB USB Flash drive
As many tapes as you need to store your data pools on.
Make sure the USB drive has a saved stream from your rpool. It should also have a downloaded copy of
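
A hedged sketch of capturing that rpool stream (snapshot name and USB mount point invented here):
zfs snapshot -r rpool@backup
zfs send -R rpool@backup | gzip > /media/usb/rpool-backup.zfs.gz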

Re: [zfs-discuss] raidz2 array FAULTED with only 1 drive down

2010-02-25 Thread Scott Meilicke
You might have to force the import with -f. Scott

Re: [zfs-discuss] SSD and ZFS

2010-02-12 Thread Scott Meilicke
I don't think adding an SSD mirror to an existing pool will do much for performance. Some of your data will surely go to those SSDs, but I don't think Solaris will know they are SSDs and move blocks in and out according to usage patterns to give you an all around boost. They will just be

Re: [zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-08 Thread Scott Meilicke
Thanks Dan. When I try the clone then import:
pfexec zfs clone data01/san/gallardo/g...@zfs-auto-snap:monthly-2009-12-01-00:00 data01/san/gallardo/g-testandlab
pfexec sbdadm import-lu /dev/zvol/rdsk/data01/san/gallardo/g-testandlab
The sbdadm import-lu gives me: sbdadm: guid in use which

Re: [zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-08 Thread Scott Meilicke
Sure, but that will put me back into the original situation. -Scott -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-08 Thread Scott Meilicke
That is likely it. I created the volume using 2009.06, then later upgraded to 124. I just now created a new zvol, connected it to my Windows server, formatted, and added some data. Then I snapped the zvol, cloned the snap, and used 'pfexec sbdadm create-lu'. When presented to the Windows server,
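
In other words, the workaround is to give the clone a fresh GUID with create-lu rather than import-lu; roughly (zvol and snapshot names stand in for the real ones):
zfs clone data01/san/gallardo/<zvol>@<snapshot> data01/san/gallardo/g-testandlab
pfexec sbdadm create-lu /dev/zvol/rdsk/data01/san/gallardo/g-testandlab   # assigns a new GUID
stmfadm add-view <GUID printed by create-lu>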

Re: [zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-08 Thread Scott Meilicke
I plan on filing a support request with Sun, and will try to post back with any results. Scott -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

[zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-04 Thread Scott Meilicke
I have a single zfs volume, shared out using COMSTAR and connected to a Windows VM. I am taking snapshots of the volume regularly. I now want to mount a previous snapshot, but when I go through the process, Windows sees the new volume, but thinks it is blank and wants to initialize it. Any

Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-29 Thread Scott Meilicke
Link aggregation can use different algorithms to load balance. Using L4 (IP plus originating port, I think), a single client computer using the same protocol (NFS) but different origination ports has allowed me to saturate both NICs in my LAG. So yes, you just need more than one
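
For reference, the hashing policy is chosen when the aggregation is created (link names hypothetical; older builds use -d instead of -l):
dladm create-aggr -P L4 -l e1000g0 -l e1000g1 aggr1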

Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Scott Meilicke
It looks like there is not a free slot for a hot spare? If that is the case, then it is one more factor to push towards raidz2, as you will need time to remove the failed disk and insert a new one. During that time you don't want to be left unprotected.

Re: [zfs-discuss] ZIL to disk

2010-01-15 Thread Scott Meilicke
I think Y is such a variable and complex number it would be difficult to give a rule of thumb, other than to 'test with your workload'. My server, having three five-disk raidz vdevs (striped) and an Intel X25-E as a ZIL, can fill my two gigabit Ethernet pipes over NFS (~200MBps) during mostly sequential

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread Scott Meilicke
Yes, a coworker lost a second disk during a rebuild of a RAID 5 and lost all data. I have not had a failure; however, when migrating EqualLogic arrays in and out of pools, I lost a disk on an array. No data loss, but it concerns me because during the moves, you are essentially reading and writing

Re: [zfs-discuss] Using iSCSI on ZFS with non-native FS - How to backup.

2009-12-07 Thread Scott Meilicke
It does 'just work'; however, you may have some file and/or file system corruption if the snapshot was taken at the moment that your Mac is updating some files. So use the time slider function and take a lot of snaps. :)

Re: [zfs-discuss] mirroring ZIL device

2009-11-23 Thread Scott Meilicke
# 1. It may help to use 15k disks as the zil. When I tested using three 15k disks striped as my zil, it made my workload go slower, even though it seems like it should have been faster. My suggestion is to test it out, and see if it helps. #3. You may get good performance with an inexpensive

Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Scott Meilicke
If the 7310s can meet your performance expectations, they sound much better than a pair of x4540s. Auto-fail over, SSD performance (although these can be added to the 4540s), ease of management, and a great front end. I haven't seen if you can use your backup software with the 7310s, but from

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-18 Thread Scott Meilicke
I second the use of zilstat - very useful, especially if you don't want to mess around with adding a log device and then having to destroy the pool if you don't want the log device any longer. On Nov 18, 2009, at 2:20 AM, Dushyanth wrote: Just to clarify : Does iSCSI traffic from a Solaris

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread Scott Meilicke
I am sorry that I don't have any links, but here is what I observe on my system. dd does not do sync writes, so the ZIL is not used. iSCSI traffic does sync writes (as of 2009.06, but not 2008.05), so if you repeat your test using an iSCSI target from your system, you should see log activity.

Re: [zfs-discuss] Difficulty testing an SSD as a ZIL

2009-10-30 Thread Scott Meilicke
Excellent! That worked just fine. Thank you Victor. -Scott

[zfs-discuss] Difficulty testing an SSD as a ZIL

2009-10-29 Thread Scott Meilicke
Hi all, I received my SSD, and wanted to test it out using fake zpools with files as backing stores before attaching it to my production pool. However, when I exported the test pool and imported it, I got an error. Here is what I did: I created a file to use as a backing store for my new pool:
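
The rest of the message is cut off; a sketch of that kind of file-backed test pool (the import error in this thread is typically avoided by pointing import at the directory holding the backing files with -d):
mkfile 1g /var/tmp/vdev1
zpool create testpool /var/tmp/vdev1
zpool export testpool
zpool import -d /var/tmp testpool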

Re: [zfs-discuss] File level cloning

2009-10-28 Thread Scott Meilicke
I don't think so. But, you can clone at the ZFS level, and then just use the vmdk(s) that you need. As long as you don't muck about with the other stuff in the clone, the space usage should be the same. -Scott

Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-28 Thread Scott Meilicke
Hi Jeremy, I had a loosely similar problem with my 2009.06 box. In my case (which may not be yours), working with support we found a bug that was causing my pool to hang. I also got erroneous errors when I did a scrub ( 3 x 5 disk raidz). I am using the same LSI controller. A sure fire way to

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Scott Meilicke
*sigh* Thanks Frédéric, that is a very interesting read. So my options as I see them now: 1. Keep the x25-e, and disable the cache. Performance should still be improved, but not by a *whole* lot, right? I will google for an expectation, but if anyone knows off the top of their head, I would

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Scott Meilicke
Ed, your comment: If solaris is able to install at all, I would have to acknowledge, I have to shutdown anytime I need to change the Perc configuration, including replacing failed disks. Replacing failed disks is easy when PERC is doing the RAID. Just remove the failed drive and replace with a

[zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-20 Thread Scott Meilicke
I have an Intel X25-E 32G in the mail (actually the Kingston version), and wanted to get a sanity check before I start. System: Dell 2950, 16G RAM, 16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no extra drive slots, a single zpool. snv_124, but with my zpool still running at

Re: [zfs-discuss] Incremental snapshot size

2009-09-30 Thread Scott Meilicke
It is more cost, but a WAN Accelerator (Cisco WAAS, Riverbed, etc.) would be a big help. Scott

Re: [zfs-discuss] poor man's Drobo on FreeNAS

2009-09-30 Thread Scott Meilicke
Requires a login...

[zfs-discuss] How to verify if the ZIL is disabled

2009-09-23 Thread Scott Meilicke
How can I verify if the ZIL has been disabled or not? I am trying to see how much benefit I might get by using an SSD as a ZIL. I disabled the ZIL via the ZFS Evil Tuning Guide: echo zil_disable/W0t1 | mdb -kw and then rebooted. However, I do not see any benefits for my NFS workload. Thanks,

Re: [zfs-discuss] How to verify if the ZIL is disabled

2009-09-23 Thread Scott Meilicke
Thank you both, much appreciated. I ended up having to put the flag into /etc/system. When I disabled the ZIL and umount/mounted without a reboot, my ESX host would not see the NFS export, nor could I create a new NFS connection from my ESX host. I could get into the file system from the host
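
For the record, the 'echo zil_disable/W0t1 | mdb -kw' approach only patches the running kernel and does not survive a reboot; the persistent form referred to here is a line in /etc/system:
set zfs:zil_disable = 1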

Re: [zfs-discuss] How to verify if the ZIL is disabled

2009-09-23 Thread Scott Meilicke
zfs share -a Ah-ha! Thanks. FYI, I got between 2.5x and 10x improvement in performance, depending on the test. So tempting :) -Scott

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-16 Thread Scott Meilicke
I think in theory the ZIL/L2ARC should make things nice and fast if your workload includes sync requests (database, iSCSI, NFS, etc.), regardless of the backend disks. But the only sure way to know is to test with your workload. -Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-08 Thread Scott Meilicke
True, this setup is not designed for high random I/O, but rather lots of storage with fair performance. This box is for our dev/test backend storage. Our production VI runs in the 500-700 IOPS range (80+ VMs, production plus dev/test) on average, so for our development VI, we are expecting half of

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
Roch Bourbonnais wrote: 100% random writes produce around 200 IOPS with a 4-6 second pause around every 10 seconds. This indicates that the bandwidth you're able to transfer through the protocol is about 50% greater than the bandwidth the pool can offer to ZFS. Since this is not sustainable,

Re: [zfs-discuss] Understanding when (and how) ZFS will use spare disks

2009-09-04 Thread Scott Meilicke
This sounds like the same behavior as opensolaris 2009.06. I had several disks recently go UNAVAIL, and the spares did not take over. But as soon as I physically removed a disk, the spare started replacing the removed disk. It seems UNAVAIL is not the same as the disk not being there. I wish

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
So what happens during the txg commit? For example, if the ZIL is a separate device, SSD for this example, does it not work like:
1. A sync operation commits the data to the SSD
2. A txg commit happens, and the data from the SSD are written to the spinning disk
So this is two writes, correct?

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
Doh! I knew that, but then forgot... So, for the case of no separate device for the ZIL, the ZIL lives on the disk pool. In which case, the data are written to the pool twice during a sync:
1. To the ZIL (on disk)
2. From RAM to disk during the txg commit
If this is correct (and my history in this

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
So, I just re-read the thread, and you can forget my last post. I had thought the argument was that the data were not being written to disk twice (assuming no separate device for the ZIL), but it was just explaining to me that the data are not read from the ZIL to disk, but rather from memory

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
Yes, I was getting confused. Thanks to you (and everyone else) for clarifying. Sync or async, I see the txg flushing to disk starve read IO. Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
I only see the blocking while load testing, not during regular usage, so I am not so worried. I will try the kernel settings to see if that helps if/when I see the issue in production. For what it is worth, here is the pattern I see when load testing NFS (iometer, 60% random, 65% read, 8k

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Scott Meilicke
You are completely off your rocker :) No, just kidding. Assuming the virtual front-end servers are running on different hosts, and you are doing some sort of raid, you should be fine. Performance may be poor due to the inexpensive targets on the back end, but you probably know that. A while

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-08-31 Thread Scott Meilicke
As I understand it, when you expand a pool, the data do not automatically migrate to the other disks. You will have to rewrite the data somehow, usually a backup/restore. -Scott

Re: [zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-27 Thread Scott Meilicke
Roman, are you saying you want to install OpenSolaris on your old servers, or make the servers look like an external JBOD array that another server will then connect to?

Re: [zfs-discuss] How to find poor performing disks

2009-08-26 Thread Scott Meilicke
You can try: zpool iostat -v pool_name 1 This will show you IO on each vdev at one second intervals. Perhaps you will see different IO behavior on any suspect drive. -Scott

Re: [zfs-discuss] NFS load balancing / was: ZFS, ESX , and NFS. oh my!

2009-08-12 Thread Scott Meilicke
Yes! That would be icing on the cake.

Re: [zfs-discuss] Live resize/grow of iscsi shared ZVOL

2009-08-12 Thread Scott Meilicke
My EqualLogic arrays do not disconnect when resizing volumes. When I need to resize, on the Windows side I open the iSCSI control panel, and get ready to click the 'logon' button. I then resize the volume on the OpenSolaris box, and immediately after that is complete, on the Windows side,
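
On the OpenSolaris side the grow itself is just a property change (size and volume name hypothetical); if the zvol is exported through COMSTAR, the logical unit size may also need to be bumped afterwards with modify-lu:
zfs set volsize=200g data01/san/winvol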

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-07 Thread Scott Meilicke
Note - this has a mini PCIe interface, not PCIe. I had the 64GB version in a Dell Mini 9. While it was great for its small size, low power and low heat characteristics (no fan on the Mini 9!), it was only faster than the striped SATA drives in my Mac Pro when it came to random reads.

Re: [zfs-discuss] zfs fragmentation

2009-08-07 Thread Scott Meilicke
ZFS absolutely observes synchronous write requests (e.g. by NFS or a database). The synchronous write requests do not benefit from the long write aggregation delay, so the result may not be written as ideally as ordinary write requests. Recently ZFS has added support for using an SSD as a

Re: [zfs-discuss] Can I setting 'zil_disable' to increase ZFS/iscsi performance ?

2009-08-06 Thread Scott Meilicke
You can use a separate SSD ZIL.
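
Adding one is a single command (device names hypothetical); note that on builds of this vintage a log device could not be removed again without destroying the pool:
zpool add tank log c2t0d0        # or mirrored: zpool add tank log mirror c2t0d0 c2t1d0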

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-20 Thread Scott Meilicke
which gap? 'RAID-Z should mind the gap on writes'? I believe this is in reference to the RAID 5 write hole, described here: http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance RAIDZ should avoid this via its copy-on-write model:

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Scott Meilicke
For what it is worth, I too have seen this behavior when load testing our zfs box. I used iometer and the RealLife profile (1 worker, 1 target, 65% reads, 60% random, 8k, 32 IOs in the queue). When writes are being dumped, reads drop close to zero, from 600-700 read IOPS to 15-30 read IOPS.

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Scott Meilicke
On Tue, 30 Jun 2009, Bob Friesenhahn wrote: Note that this issue does not apply at all to NFS service, database service, or any other usage which does synchronous writes. I see read starvation with NFS. I was using iometer on a Windows VM, connecting to an NFS mount on a 2008.11 physical

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-26 Thread Scott Meilicke
I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI), and got nearly identical results to having the disks on iSCSI:
iSCSI: IOPS 1003.8, MB/s 7.8, Avg Latency (s) 27.9
NFS: IOPS 1005.9, MB/s 7.9, Avg Latency (s) 29.7
Interesting! Here is how the pool was behaving during the

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Scott Meilicke
Hi, When you have a lot of random read/writes, raidz/raidz2 can be fairly slow. http://blogs.sun.com/roch/entry/when_to_and_not_to The recommendation is to break the disks into smaller raidz/z2 stripes, thereby improving IO. From the ZFS Best Practices Guide:

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-25 Thread Scott Meilicke
if those servers are on physical boxes right now i'd do some perfmon caps and add up the iops. Using perfmon to get a sense of what is required is a good idea. Use the 95th percentile to be conservative. The counters I have used are in the Physical disk object. Don't ignore the latency counters

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-25 Thread Scott Meilicke
Isn't that section of the evil tuning guide you're quoting actually about checking if the NVRAM/driver connection is working right or not? Miles, yes, you are correct. I just thought it was interesting reading about how syncs and such work within ZFS. Regarding my NFS test, you remind me that

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-24 Thread Scott Meilicke
See this thread for information on load testing for VMware: http://communities.vmware.com/thread/73745?tstart=0&start=0 Within the thread there are instructions for using iometer to load test your storage. You should test out your solution before going live, and compare what you get with what

Re: [zfs-discuss] SAN server

2009-06-23 Thread Scott Meilicke
For ~100 people, I like Bob's answer. RAID 10 will get you lots of speed. Perhaps RAID 50 would be just fine for you as well and give you more space, but without measuring, you won't be sure. Don't forget a hot spare (or two)! Your MySQL database - will that generate a lot of IO? Also, to

Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!

2009-06-19 Thread Scott Meilicke
So how are folks getting around the NFS speed hit? Using SSD or battery backed RAM ZILs? Regarding limited NFS mounts, underneath a single NFS mount, would it work to:
* Create a new VM
* Remove the VM from inventory
* Create a new ZFS file system underneath the original
* Copy the VM to that

Re: [zfs-discuss] Is the PROPERTY compression will increase the ZFS I/O th

2009-06-19 Thread Scott Meilicke
Generally, yes. Test it with your workload and see how it works out for you. -Scott
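
A minimal example for a dataset (name hypothetical; on these builds the default compression algorithm is lzjb):
zfs set compression=on tank/data
zfs get compressratio tank/data    # check how well it is compressing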

Re: [zfs-discuss] 7110 questions

2009-06-18 Thread Scott Meilicke
Both iSCSI and NFS are slow? I would expect NFS to be slow, but in my iSCSI testing with OpenSolaris 2008.11, performance was reasonable, about 2x NFS. Setup: Dell 2950 with a SAS HBA and SATA 3x5 raidz (15 disks, no separate ZIL), iSCSI using VMware ESXi 3.5 software initiator. Scott

Re: [zfs-discuss] Asymmetric mirroring

2009-06-10 Thread Scott Meilicke
The SATA drive will be your bottleneck, and you will lose any speed advantages of the SAS drives, especially using 3 vdevs on a single SATA disk. I am with Richard, figure out what performance you need, and build accordingly.
