Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-17 Thread Ross Walker
On Jun 17, 2011, at 7:06 AM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: I will only say, that regardless of whether or not that is or ever was true, I believe it's entirely irrelevant. Because your system performs read and write caching and buffering in ram,

Re: [zfs-discuss] question about COW and snapshots

2011-06-17 Thread Ross Walker
On Jun 16, 2011, at 7:23 PM, Erik Trimble erik.trim...@oracle.com wrote: On 6/16/2011 1:32 PM, Paul Kraus wrote: On Thu, Jun 16, 2011 at 4:20 PM, Richard Elling richard.ell...@gmail.com wrote: You can run OpenVMS :-) Since *you* brought it up (I was not going to :-), how does VMS'

Re: [zfs-discuss] dual protocal on one file system?

2011-03-16 Thread Ross Walker
On Mar 16, 2011, at 8:13 AM, Paul Kraus p...@kraus-haus.org wrote: On Tue, Mar 15, 2011 at 11:00 PM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: BTW, what is the advantage of the kernel cifs server as opposed to samba? It seems, years ago, somebody must have

Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-25 Thread Ross Walker
On Dec 24, 2010, at 1:21 PM, Richard Elling richard.ell...@gmail.com wrote: Latency is what matters most. While there is a loose relationship between IOPS and latency, you really want low latency. For 15krpm drives, the average latency is 2ms for zero seeks. A decent SSD will beat that
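
That 2 ms figure is simple rotational arithmetic, worth spelling out:

    15,000 rpm / 60            = 250 revolutions per second
    1 / 250 s                  = 4 ms per full revolution
    average rotational latency = half a revolution = ~2 ms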

Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-15 Thread Ross Walker
On Dec 15, 2010, at 6:48 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Wed, 15 Dec 2010, Linder, Doug wrote: But it sure would be nice if they spared everyone a lot of effort and annoyance and just GPL'd ZFS. I think the goodwill generated Why do you want them to GPL ZFS?

Re: [zfs-discuss] [OpenIndiana-discuss] iops...

2010-12-08 Thread Ross Walker
On Dec 7, 2010, at 9:49 PM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: From: Ross Walker [mailto:rswwal...@gmail.com] Well besides databases there are VM datastores, busy email servers, busy ldap servers, busy web servers, and I'm sure the list goes

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-12-08 Thread Ross Walker
On Dec 8, 2010, at 11:41 PM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: For anyone who cares: I created an ESXi machine. Installed two guest (centos) machines and vmware-tools. Connected them to each other via only a virtual switch. Used rsh to transfer

Re: [zfs-discuss] [OpenIndiana-discuss] iops...

2010-12-07 Thread Ross Walker
On Dec 7, 2010, at 12:46 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote: Bear a few things in mind: iops is not iops. [snip] I am totally aware of these differences, but it seems some people think RAIDz is nonsense unless you don't need speed at all. My testing shows (so far) that

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-17 Thread Ross Walker
On Wed, Nov 17, 2010 at 3:00 PM, Pasi Kärkkäinen pa...@iki.fi wrote: On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote: Hi all, Let me tell you all that the MC/S *does* make a difference... I had a windows fileserver using an iSCSI connection to a host running snv_134

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-16 Thread Ross Walker
On Nov 16, 2010, at 4:04 PM, Tim Cook t...@cook.ms wrote: On Wed, Nov 17, 2010 at 7:56 AM, Miles Nordin car...@ivy.net wrote: tc == Tim Cook t...@cook.ms writes: tc Channeling Ethernet will not make it any faster. Each tc individual connection will be limited to 1gbit. iSCSI

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-16 Thread Ross Walker
On Nov 16, 2010, at 7:49 PM, Jim Dunham james.dun...@oracle.com wrote: On Nov 16, 2010, at 6:37 PM, Ross Walker wrote: On Nov 16, 2010, at 4:04 PM, Tim Cook t...@cook.ms wrote: AFAIK, esx/i doesn't support L4 hash, so that's a non-starter. For iSCSI one just needs to have a second (third

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-11-01 Thread Ross Walker
On Nov 1, 2010, at 5:09 PM, Ian D rewar...@hotmail.com wrote: Maybe you are experiencing this: http://opensolaris.org/jive/thread.jspa?threadID=11942 It does look like this... Is this really the expected behaviour? That's just unacceptable. It is so bad it sometimes drops connection and

Re: [zfs-discuss] Excruciatingly slow resilvering on X4540 (build 134)

2010-11-01 Thread Ross Walker
On Nov 1, 2010, at 3:33 PM, Mark Sandrock mark.sandr...@oracle.com wrote: Hello, I'm working with someone who replaced a failed 1TB drive (50% utilized), on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported: scrub:

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-19 Thread Ross Walker
On Oct 19, 2010, at 4:33 PM, Tuomas Leikola tuomas.leik...@gmail.com wrote: On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden sbre...@gmail.com wrote: So are we all agreed then, that a vdev failure will cause pool loss ? -- unless you use copies=2 or 3, in which case your data is still safe
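
For reference, the copies property mentioned above is set per dataset; a minimal sketch (pool and dataset names are placeholders):

    # keep two copies of every block in this dataset, spread across the
    # pool's vdevs where possible; only affects newly written data
    zfs set copies=2 tank/important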

Re: [zfs-discuss] Finding corrupted files

2010-10-15 Thread Ross Walker
On Oct 15, 2010, at 9:18 AM, Stephan Budach stephan.bud...@jvm.de wrote: On 14.10.10 17:48, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Toby Thain I don't want to heat up the discussion about ZFS managed

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-15 Thread Ross Walker
On Oct 15, 2010, at 5:34 PM, Ian D rewar...@hotmail.com wrote: Has anyone suggested either removing L2ARC/SLOG entirely or relocating them so that all devices are coming off the same controller? You've swapped the external controller but the H700 with the internal drives could be the real

Re: [zfs-discuss] Finding corrupted files

2010-10-12 Thread Ross Walker
On Oct 12, 2010, at 8:21 AM, Edward Ned Harvey sh...@nedharvey.com wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Stephan Budach c3t211378AC0253d0 ONLINE 0 0 0 How many disks are there inside of

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-09 Thread Ross Walker
On Sep 9, 2010, at 8:27 AM, Fei Xu twinse...@hotmail.com wrote: Service times here are crap. Disks are malfunctioning in some way. If your source disks can take seconds (or 10+ seconds) to reply, then of course your copy will be slow. Disk is probably having a hard time reading the data

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ross Walker
On Aug 27, 2010, at 1:04 AM, Mark markwo...@yahoo.com wrote: We are using a 7210, 44 disks I believe, 11 stripes of RAIDz sets. When I installed I selected the best bang for the buck on the speed vs capacity chart. We run about 30 VM's on it, across 3 ESX 4 servers. Right now, it's all

[zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
I'm planning on setting up an NFS server for our ESXi hosts and plan on using a virtualized Solaris or Nexenta host to serve ZFS over NFS. The storage I have available is provided by Equallogic boxes over 10GbE iSCSI. I am trying to figure out the best way to provide both performance and

Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld bill.sommerf...@oracle.com wrote: On 08/21/10 10:14, Ross Walker wrote: I am trying to figure out the best way to provide both performance and resiliency given the Equallogic provides the redundancy. (I have no specific experience

Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
On Aug 21, 2010, at 4:40 PM, Richard Elling rich...@nexenta.com wrote: On Aug 21, 2010, at 10:14 AM, Ross Walker wrote: I'm planning on setting up an NFS server for our ESXi hosts and plan on using a virtualized Solaris or Nexenta host to serve ZFS over NFS. Please follow the joint EMC

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Ross Walker
On Aug 18, 2010, at 10:43 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Wed, 18 Aug 2010, Joerg Schilling wrote: Linus is right with his primary decision, but this also applies for static linking. See Lawrence Rosen for more information, the GPL does not distinguish between

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Ross Walker
On Aug 16, 2010, at 11:17 PM, Frank Cusack frank+lists/z...@linetwo.net wrote: On 8/16/10 9:57 AM -0400 Ross Walker wrote: No, the only real issue is the license and I highly doubt Oracle will re-release ZFS under GPL to dilute its competitive advantage. You're saying Oracle wants to keep

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ross Walker
On Aug 16, 2010, at 9:06 AM, Edward Ned Harvey sh...@nedharvey.com wrote: ZFS does raid, and mirroring, and resilvering, and partitioning, and NFS, and CIFS, and iSCSI, and device management via vdev's, and so on. So ZFS steps on a lot of linux peoples' toes. They already have code to do

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ross Walker
On Aug 15, 2010, at 9:44 PM, Peter Jeremy peter.jer...@alcatel-lucent.com wrote: Given that both provide similar features, it's difficult to see why Oracle would continue to invest in both. Given that ZFS is the more mature product, it would seem more logical to transfer all the effort to

Re: [zfs-discuss] ZFS and VMware

2010-08-14 Thread Ross Walker
On Aug 14, 2010, at 8:26 AM, Edward Ned Harvey sh...@nedharvey.com wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey #3 I previously believed that vmfs3 was able to handle sparse files amazingly well, like, when

Re: [zfs-discuss] iScsi slow

2010-08-05 Thread Ross Walker
On Aug 5, 2010, at 11:10 AM, Roch roch.bourbonn...@sun.com wrote: Ross Walker writes: On Aug 4, 2010, at 12:04 PM, Roch roch.bourbonn...@sun.com wrote: Ross Walker writes: On Aug 4, 2010, at 9:20 AM, Roch roch.bourbonn...@sun.com wrote: Ross Asks: So on that note, ZFS should

Re: [zfs-discuss] iScsi slow

2010-08-05 Thread Ross Walker
On Aug 5, 2010, at 2:24 PM, Roch Bourbonnais roch.bourbonn...@sun.com wrote: On 5 August 2010 at 19:49, Ross Walker wrote: On Aug 5, 2010, at 11:10 AM, Roch roch.bourbonn...@sun.com wrote: Ross Walker writes: On Aug 4, 2010, at 12:04 PM, Roch roch.bourbonn...@sun.com wrote: Ross

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 3:52 AM, Roch roch.bourbonn...@sun.com wrote: Ross Walker writes: On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais roch.bourbonn...@sun.com wrote: On 27 May 2010 at 07:03, Brent Jones wrote: On Wed, May 26, 2010 at 5:08 AM, Matt Connolly matt.connolly

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 9:20 AM, Roch roch.bourbonn...@sun.com wrote: Ross Asks: So on that note, ZFS should disable the disks' write cache, not enable them despite ZFS's COW properties because it should be resilient. No, because ZFS builds resiliency on top of unreliable parts. it's

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 12:04 PM, Roch roch.bourbonn...@sun.com wrote: Ross Walker writes: On Aug 4, 2010, at 9:20 AM, Roch roch.bourbonn...@sun.com wrote: Ross Asks: So on that note, ZFS should disable the disks' write cache, not enable them despite ZFS's COW properties because

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Ross Walker
On Aug 3, 2010, at 5:56 PM, Robert Milkowski mi...@task.gda.pl wrote: On 03/08/2010 22:49, Ross Walker wrote: On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais roch.bourbonn...@sun.com wrote: On 27 May 2010 at 07:03, Brent Jones wrote: On Wed, May 26, 2010 at 5:08 AM, Matt

Re: [zfs-discuss] Mirrored raidz

2010-07-26 Thread Ross Walker
On Jul 26, 2010, at 2:51 PM, Dav Banks davba...@virginia.edu wrote: I wanted to test it as a backup solution. Maybe that's crazy in itself but I want to try it. Basically, once a week detach the 'backup' pool from the mirror, replace the drives, add the new raidz to the mirror and let it

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-25 Thread Ross Walker
On Jul 23, 2010, at 10:14 PM, Edward Ned Harvey sh...@nedharvey.com wrote: From: Arne Jansen [mailto:sensi...@gmx.net] Can anyone else confirm or deny the correctness of this statement? As I understand it that's the whole point of raidz. Each block is its own stripe. Nope, that

Re: [zfs-discuss] File cloning

2010-07-22 Thread Ross Walker
On Jul 22, 2010, at 2:41 PM, Miles Nordin car...@ivy.net wrote: sw == Saxon, Will will.sa...@sage.com writes: sw 'clone' vs. a 'copy' would be very easy since we have sw deduplication now dedup doesn't replace the snapshot/clone feature for the NFS-share-full-of-vmdk use case

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-20 Thread Ross Walker
On Jul 20, 2010, at 6:12 AM, v victor_zh...@hotmail.com wrote: Hi, for zfs raidz1, I know that for random io the iops of a raidz1 vdev equal one physical disk's iops; since raidz1 is like raid5, does raid5 have the same performance as raidz1? i.e. random iops equal to one physical disk's iops. On
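
A rough worked example of that rule of thumb (the per-disk figure is an assumption for illustration, not a benchmark):

    assume ~150 random read IOPS per 15k rpm disk
    1 x raidz1 of 5 disks   -> ~150 IOPS (one disk's worth per vdev)
    2 x raidz1 of 5 disks   -> ~300 IOPS (random IOPS scale with vdev count)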

Re: [zfs-discuss] Need ZFS master!

2010-07-13 Thread Ross Walker
The whole disk layout should be copied from disk 1 to 2, then the slice on disk 2 that corresponds to the slice on disk 1 should be attached to the rpool which forms an rpool mirror (attached not added). Then you need to add the grub bootloader to disk 2. When it finishes resilvering then you
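
A minimal sketch of that procedure on Solaris/OpenSolaris, assuming SMI-labeled disks; the device names c0t0d0 (existing) and c0t1d0 (new) are placeholders:

    # copy the slice layout from disk 1 to disk 2
    prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
    # attach (not add!) the matching slice, forming a two-way rpool mirror
    zpool attach rpool c0t0d0s0 c0t1d0s0
    # make the second disk bootable
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
    # wait for the resilver to complete before trusting the mirror
    zpool status rpool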

Re: [zfs-discuss] Encryption?

2010-07-11 Thread Ross Walker
On Jul 11, 2010, at 5:11 PM, Freddie Cash fjwc...@gmail.com wrote: ZFS-FUSE is horribly unstable, although that's more an indication of the stability of the storage stack on Linux. Not really, more an indication of the pseudo-VFS layer implemented in fuse. Remember fuse provides its own VFS

Re: [zfs-discuss] Should i enable Write-Cache ?

2010-07-10 Thread Ross Walker
On Jul 10, 2010, at 5:46 AM, Erik Trimble erik.trim...@oracle.com wrote: On 7/10/2010 1:14 AM, Graham McArdle wrote: Instead, create Single Disk arrays for each disk. I have a question related to this but with a different controller: If I'm using a RAID controller to provide non-RAID

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Ross Walker
On Jun 24, 2010, at 5:40 AM, Robert Milkowski mi...@task.gda.pl wrote: On 23/06/2010 18:50, Adam Leventhal wrote: Does it mean that for dataset used for databases and similar environments where basically all blocks have fixed size and there is no other data all parity information will

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Ross Walker
On Jun 24, 2010, at 10:42 AM, Robert Milkowski mi...@task.gda.pl wrote: On 24/06/2010 14:32, Ross Walker wrote: On Jun 24, 2010, at 5:40 AM, Robert Milkowskimi...@task.gda.pl wrote: On 23/06/2010 18:50, Adam Leventhal wrote: Does it mean that for dataset used for databases

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Ross Walker
On Jun 23, 2010, at 1:48 PM, Robert Milkowski mi...@task.gda.pl wrote: 128GB. Does it mean that for dataset used for databases and similar environments where basically all blocks have fixed size and there is no other data all parity information will end-up on one (z1) or two (z2)

Re: [zfs-discuss] SLOG striping? (Bob Friesenhahn)

2010-06-22 Thread Ross Walker
On Jun 22, 2010, at 8:40 AM, Jeff Bacon ba...@walleyesoftware.com wrote: The term 'stripe' has been so outrageously severely abused in this forum that it is impossible to know what someone is talking about when they use the term. Seemingly intelligent people continue to use wrong terminology

Re: [zfs-discuss] Dedup... still in beta status

2010-06-16 Thread Ross Walker
On Jun 16, 2010, at 9:02 AM, Carlos Varela carlos.var...@cibc.ca wrote: Does the machine respond to ping? Yes If there is a gui does the mouse pointer move? There is no GUI (nexentastor) Does the keyboard numlock key respond at all ? Yes I just find it very hard to believe

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving ba

2010-06-14 Thread Ross Walker
On Jun 13, 2010, at 2:14 PM, Jan Hellevik opensola...@janhellevik.com wrote: Well, for me it was a cure. Nothing else I tried got the pool back. As far as I can tell, the way to get it back should be to use symlinks to the fdisk partitions on my SSD, but that did not work for me. Using

Re: [zfs-discuss] Please trim posts

2010-06-11 Thread Ross Walker
On Jun 11, 2010, at 2:07 AM, Dave Koelmeyer davekoelme...@me.com wrote: I trimmed, and then got complained at by a mailing list user that the context of what I was replying to was missing. Can't win :P If at a minimum one trims the disclaimers, footers and signatures, that's better than

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Ross Walker
On Jun 10, 2010, at 5:54 PM, Richard Elling richard.ell...@gmail.com wrote: On Jun 10, 2010, at 1:24 PM, Arne Jansen wrote: Andrey Kuzmin wrote: Well, I'm more accustomed to sequential vs. random, but YMMV. As to 67000 512 byte writes (this sounds suspiciously close to 32Mb fitting into

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Ross Walker
On Jun 8, 2010, at 1:33 PM, besson3c j...@netmusician.org wrote: Sure! The pool consists of 6 SATA drives configured as RAID-Z. There are no special read or write cache drives. This pool is shared to several VMs via NFS, these VMs manage email, web, and a Quickbooks server running on

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Ross Walker
On Jun 7, 2010, at 2:10 AM, Erik Trimble erik.trim...@oracle.com wrote: Comments in-line. On 6/6/2010 9:16 PM, Ken wrote: I'm looking at VMWare, ESXi 4, but I'll take any advice offered. On Sun, Jun 6, 2010 at 19:40, Erik Trimble erik.trim...@oracle.com wrote: On 6/6/2010 6:22 PM, Ken

Re: [zfs-discuss] Migrating to ZFS

2010-06-02 Thread Ross Walker
On Jun 2, 2010, at 12:03 PM, zfsnoob4 zfsnoob...@hotmail.co.uk wrote: Wow thank you very much for the clear instructions. And Yes, I have another 120GB drive for the OS, separate from A, B and C. I will repartition the drive and install Solaris. Then maybe at some point I'll delete the

Re: [zfs-discuss] New SSD options

2010-05-21 Thread Ross Walker
On May 20, 2010, at 7:17 PM, Ragnar Sundblad ra...@csc.kth.se wrote: On 21 May 2010, at 00.53, Ross Walker wrote: On May 20, 2010, at 6:25 PM, Travis Tabbal tra...@tabbal.net wrote: use a slog at all if it's not durable? You should disable the ZIL instead. This is basically where I

Re: [zfs-discuss] New SSD options

2010-05-20 Thread Ross Walker
On May 20, 2010, at 6:25 PM, Travis Tabbal tra...@tabbal.net wrote: use a slog at all if it's not durable? You should disable the ZIL instead. This is basically where I was going. There only seems to be one SSD that is considered working, the Zeus IOPS. Even if I had the money, I can't
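
On builds of that era the ZIL was disabled system-wide through an /etc/system tunable (later builds replaced this with the per-dataset sync property); a sketch, with the usual caveat that this drops synchronous write guarantees for every pool on the host:

    * /etc/system entry -- disables the ZIL globally; unsafe for NFS,
    * databases, or anything else relying on synchronous semantics
    set zfs:zil_disable = 1
    * a reboot is required for the change to take effect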

Re: [zfs-discuss] ZFS High Availability

2010-05-13 Thread Ross Walker
On May 12, 2010, at 7:12 PM, Richard Elling richard.ell...@gmail.com wrote: On May 11, 2010, at 10:17 PM, schickb wrote: I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access to a

Re: [zfs-discuss] ZFS High Availability

2010-05-12 Thread Ross Walker
On May 12, 2010, at 1:17 AM, schickb schi...@gmail.com wrote: I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access to a shared pool that is imported during a failover. The
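
The failover step itself reduces to an export on one node and a (possibly forced) import on the other; a sketch with a placeholder pool name:

    # on the primary, for a clean planned failover:
    zpool export tank
    # on the standby; -f is needed if the primary died holding the pool.
    # Forcing an import while the primary is still writing destroys the
    # pool, which is why HA frameworks fence the failed node first.
    zpool import -f tank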

Re: [zfs-discuss] ZFS High Availability

2010-05-12 Thread Ross Walker
On May 12, 2010, at 3:06 PM, Manoj Joseph manoj.p.jos...@oracle.com wrote: Ross Walker wrote: On May 12, 2010, at 1:17 AM, schickb schi...@gmail.com wrote: I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have

Re: [zfs-discuss] Performance of the ZIL

2010-05-06 Thread Ross Walker
On May 6, 2010, at 8:34 AM, Edward Ned Harvey solar...@nedharvey.com wrote: From: Pasi Kärkkäinen [mailto:pa...@iki.fi] In neither case do you have data or filesystem corruption. ZFS probably is still OK, since it's designed to handle this (?), but the data can't be OK if you lose 30

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-23 Thread Ross Walker
On Apr 22, 2010, at 11:03 AM, Geoff Nordli geo...@grokworx.com wrote: From: Ross Walker [mailto:rswwal...@gmail.com] Sent: Thursday, April 22, 2010 6:34 AM On Apr 20, 2010, at 4:44 PM, Geoff Nordli geo...@grokworx.com wrote: If you combine the hypervisor and storage server and have

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-22 Thread Ross Walker
On Apr 20, 2010, at 4:44 PM, Geoff Nordli geo...@grokworx.com wrote: From: matthew patton [mailto:patto...@yahoo.com] Sent: Tuesday, April 20, 2010 12:54 PM Geoff Nordli geo...@grokworx.com wrote: With our particular use case we are going to do a save state on their virtual machines, which

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-21 Thread Ross Walker
On Apr 20, 2010, at 12:13 AM, Sunil funt...@yahoo.com wrote: Hi, I have a strange requirement. My pool consists of 2 500GB disks in stripe which I am trying to convert into a RAIDZ setup without data loss but I have only two additional disks: 750GB and 1TB. So, here is what I thought:

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Ross Walker
On Apr 19, 2010, at 12:50 PM, Don d...@blacksun.org wrote: Now I'm simply confused. Do you mean one cachefile shared between the two nodes for this zpool? How, may I ask, would this work? The rpool should be in /etc/zfs/zpool.cache. The shared pool should be in /etc/cluster/zpool.cache
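
A sketch of that split using the cachefile pool property (pool name is a placeholder):

    # rpool stays in the default /etc/zfs/zpool.cache and imports at boot;
    # the shared pool lives in a non-default cachefile so neither node
    # auto-imports it -- the cluster framework imports it explicitly
    zpool set cachefile=/etc/cluster/zpool.cache sharedpool
    # at failover time, the importing node records it in the same file:
    zpool import -o cachefile=/etc/cluster/zpool.cache sharedpool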

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Ross Walker
On Fri, Apr 2, 2010 at 8:03 AM, Edward Ned Harvey solar...@nedharvey.com wrote: Seriously, all disks configured WriteThrough (spindle and SSD disks alike) using the dedicated ZIL SSD device, very noticeably faster than enabling the WriteBack. What do you get with both SSD ZIL and

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Mar 31, 2010, at 11:51 PM, Edward Ned Harvey solar...@nedharvey.com wrote: A MegaRAID card with write-back cache? It should also be cheaper than the F20. I haven't posted results yet, but I just finished a few weeks of extensive benchmarking various configurations. I can say this:

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Mar 31, 2010, at 11:58 PM, Edward Ned Harvey solar...@nedharvey.com wrote: We ran into something similar with these drives in an X4170 that turned out to be an issue of the preconfigured logical volumes on the drives. Once we made sure all of our Sun PCI HBAs were running the exact

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Apr 1, 2010, at 8:42 AM, casper@sun.com wrote: Is that what sync means in Linux? A sync write is one in which the application blocks until the OS acks that the write has been committed to disk. An async write is given to the OS, and the OS is permitted to buffer the write to
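
A crude way to see the two behaviours from a shell, using GNU dd (oflag is a GNU dd option, not Solaris /usr/bin/dd; the target path is a placeholder):

    # async: returns as soon as the data is in the page cache
    dd if=/dev/zero of=/tank/testfile bs=8k count=10000
    # sync: each write blocks until the OS reports it on stable storage
    dd if=/dev/zero of=/tank/testfile bs=8k count=10000 oflag=sync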

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Thu, Apr 1, 2010 at 10:03 AM, Darren J Moffat darr...@opensolaris.org wrote: On 01/04/2010 14:49, Ross Walker wrote: We're talking about the sync for NFS exports in Linux; what do they mean with sync NFS exports? See section A1 in the FAQ: http://nfs.sourceforge.net/ I think B4

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Ross Walker
On Mar 31, 2010, at 5:39 AM, Robert Milkowski mi...@task.gda.pl wrote: On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss wrote: Use something other than Open/Solaris with ZFS as an NFS server? :) I don't think you'll find the performance you paid for with ZFS and Solaris at this time. I've been

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Ross Walker
On Mar 31, 2010, at 10:25 PM, Richard Elling richard.ell...@gmail.com wrote: On Mar 31, 2010, at 7:11 PM, Ross Walker wrote: On Mar 31, 2010, at 5:39 AM, Robert Milkowski mi...@task.gda.pl wrote: On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss wrote: Use something other than Open/Solaris

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Ross Walker
On Mar 20, 2010, at 10:18 AM, vikkr psi...@gmail.com wrote: Hi, sorry for bad English and picture :). Is such a setup possible? 3 openfiler servers each give their drives, 2 x 1 TB, over iSCSI to an OpenSolaris server. On OpenSolaris they are assembled into a RAID-Z with double parity, and the OpenSolaris server provides NFS access to

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Ross Walker
On Mar 20, 2010, at 11:48 AM, vikkr psi...@gmail.com wrote: THX Ross, I plan on exporting each drive individually over iSCSI. In this case the writes, as well as reads, will go to all 6 disks at once, right? The only question - how to calculate fault tolerance of such a system if the discs
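
For the layout described earlier in the thread (3 servers each exporting 2 x 1 TB iSCSI disks, raidz2 across all 6), the fault-tolerance arithmetic works out to:

    raidz2 tolerates any 2 of the 6 disks failing
    one server down  = 2 disks lost -> pool degraded but online
    two servers down = 4 disks lost -> pool unavailable
    usable capacity  = (6 - 2) x 1 TB = 4 TB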

Re: [zfs-discuss] Can we get some documentation on iSCSI sharing after comstar took over?

2010-03-17 Thread Ross Walker
On Mar 17, 2010, at 2:30 AM, Erik Ableson eable...@mac.com wrote: On 17 March 2010, at 00:25, Svein Skogen sv...@stillbilde.net wrote: On 16.03.2010 22:31, erik.ableson wrote: On 16 March 2010, at 21:00, Marc Nicholas wrote: On Tue, Mar 16,
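
For reference, the basic COMSTAR workflow that replaced the old shareiscsi property looks roughly like this (zvol name and size are placeholders; the view GUID comes from the create-lu output):

    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default
    zfs create -V 50g tank/lun0
    sbdadm create-lu /dev/zvol/rdsk/tank/lun0   # prints the LU GUID
    stmfadm add-view 600144f0...                # placeholder GUID from above
    itadm create-target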

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon gbul...@sonicle.com wrote: Hello, I'd like to check for any guidance about using zfs on iscsi storage appliances. Recently I had an unlucky situation with an unlucky storage machine freezing. Once the storage was up again (rebooted) all other

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 12:19 PM, Ware Adams rwali...@washdcmail.com wrote: On Mar 15, 2010, at 12:13 PM, Gabriele Bulfon wrote: Well, I actually don't know what implementation is inside this legacy machine. This machine is an AMI StoreTrends ITX, but maybe it has been built around IET,

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 7:11 PM, Tonmaus sequoiamo...@gmx.net wrote: Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this single iscsi target. ZFS best practices tell me that to be safe in case of corruption,

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 11:10 PM, Tim Cook t...@cook.ms wrote: On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker rswwal...@gmail.com wrote: On Mar 15, 2010, at 7:11 PM, Tonmaus sequoiamo...@gmx.net wrote: Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host

Re: [zfs-discuss] ZFS - VMware ESX -- vSphere Upgrade : Zpool Faulted

2010-03-11 Thread Ross Walker
On Mar 11, 2010, at 8:27 AM, Andrew acmcomput...@hotmail.com wrote: Ok, The fault appears to have occurred regardless of the attempts to move to vSphere as we've now moved the host back to ESX 3.5 from whence it came and the problem still exists. Looks to me like the fault occurred as a

Re: [zfs-discuss] ZFS - VMware ESX -- vSphere Upgrade : Zpool Faulted

2010-03-11 Thread Ross Walker
On Mar 11, 2010, at 12:31 PM, Andrew acmcomput...@hotmail.com wrote: Hi Ross, Ok - as a Solaris newbie.. i'm going to need your help. Format produces the following:- c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126) / p...@0,0/pci15ad,1...@10/s...@4,0 what dd command do I
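
Rather than raw dd, the usual tool for checking whether the ZFS labels on such a device survived is zdb; a sketch against the disk named above (the slice may differ):

    # dump the four vdev labels; intact labels print the pool name,
    # GUIDs and vdev tree
    zdb -l /dev/rdsk/c8t4d0s0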

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
On Mar 8, 2010, at 11:46 PM, ольга крыжановская olga.kryzhanov...@gmail.com wrote: tmpfs lacks features like quota and NFSv4 ACL support. May not be the best choice if such features are required. True, but if the OP is looking for those features they are more than unlikely looking for an

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais roch.bourbonn...@sun.com wrote: I think This is highlighting that there is extra CPU requirement to manage small blocks in ZFS. The table would probably turn over if you go to 16K zfs records and 16K reads/writes from the application. Next

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-25 Thread Ross Walker
On Feb 25, 2010, at 9:11 AM, Giovanni Tirloni gtirl...@sysdroid.com wrote: On Thu, Feb 25, 2010 at 9:47 AM, Jacob Ritorto jacob.rito...@gmail.com wrote: It's a kind gesture to say it'll continue to exist and all, but without commercial support from the manufacturer, it's relegated to

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ross Walker
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: ... Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still unsafe (i.e. if my iSCSI

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced

2010-02-10 Thread Ross Walker
On Feb 9, 2010, at 1:55 PM, matthew patton patto...@yahoo.com wrote: The cheapest solution out there that isn't a Supermicro-like server chassis, is DAS in the form of HP or Dell MD-series which top out at 15 or 16 3.5" drives. I can only chain 3 units per SAS port off an HBA in either case.

Re: [zfs-discuss] NFS access by OSX clients (was Cores vs. Speed?)

2010-02-09 Thread Ross Walker
On Feb 8, 2010, at 4:58 PM, Edward Ned Harvey macenterpr...@nedharvey.com wrote: How are you managing UID's on the NFS server? If user eharvey connects to server from client Mac A, or Mac B, or Windows 1, or Windows 2, or any of the linux machines ... the server has to know it's eharvey,

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Ross Walker
On Feb 5, 2010, at 10:49 AM, Robert Milkowski mi...@task.gda.pl wrote: Actually, there is. One difference is that when writing to a raid-z{1|2} pool compared to raid-10 pool you should get better throughput if at least 4 drives are used. Basically it is due to the fact that in RAID-10 the
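
A worked example of that claim, assuming four identical disks that each stream ~100 MB/s (an illustrative figure):

    raid-10 (2 mirror pairs): every byte is written twice
        -> sequential write throughput ~ 2 x 100 = 200 MB/s of user data
    raidz1 (3 data + 1 parity): each full stripe carries 3 disks' worth of data
        -> sequential write throughput ~ 3 x 100 = 300 MB/s of user data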

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Ross Walker
this manually, using basic file system functions offered by the OS. I scan every byte in every file manually and it [...] On February 3, 2010 10:11:01 AM -0500 Ross Walker rswwal...@gmail.com wrote: Not a ZFS method, but you could use rsync
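
The rsync approach compares two snapshots through the hidden .zfs directory without touching the live data; a sketch with placeholder dataset and snapshot names:

    # -r recurse, -i itemize each difference, -n dry run (nothing is copied);
    # --delete also reports files removed between the two snapshots
    rsync -rin --delete /tank/fs/.zfs/snapshot/new/ \
                        /tank/fs/.zfs/snapshot/old/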

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 9:53 AM, Henu henrik.he...@tut.fi wrote: Okay, so first of all, it's true that send is always fast and 100% reliable because it uses blocks to see differences. Good, and thanks for this information. If everything else fails, I can parse the information I want from send

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 12:35 PM, Frank Cusack frank+lists/z...@linetwo.net wrote: On February 3, 2010 12:19:50 PM -0500 Frank Cusack frank+lists/z...@linetwo.net wrote: If you do need to know about deleted files, the find method still may be faster depending on how ddiff determines whether or

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 8:59 PM, Frank Cusack frank+lists/z...@linetwo.net wrote: On February 3, 2010 6:46:57 PM -0500 Ross Walker rswwal...@gmail.com wrote: So was there a final consensus on the best way to find the difference between two snapshots (files/directories added, files/directories

Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-30 Thread Ross Walker
On Jan 30, 2010, at 2:53 PM, Mark white...@gmail.com wrote: I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB SATA drives. When I install opensolaris, I assume it will want to use all or part of one of those drives for the install. That leaves me with the

Re: [zfs-discuss] 2gig file limit on ZFS?

2010-01-21 Thread Ross Walker
On Jan 21, 2010, at 6:47 PM, Daniel Carosone d...@geek.com.au wrote: On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote: + support file systems larger than 2GiB include 32-bit UIDs and GIDs file systems, but what about individual files within? I think the original author meant

Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Ross Walker
On Jan 14, 2010, at 10:44 AM, Mr. T Doodle tpsdoo...@gmail.com wrote: Hello, I have played with ZFS but not deployed any production systems using ZFS and would like some opinions. I have a T-series box with 4 internal drives and would like to deploy ZFS with availability and

Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread Ross Walker
On Jan 11, 2010, at 2:23 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Mon, 11 Jan 2010, bank kus wrote: Are we still trying to solve the starvation problem? I would argue the disk I/O model is fundamentally broken on Solaris if there is no fair I/O scheduling between

Re: [zfs-discuss] zvol (slow) vs file (fast) performance snv_130

2010-01-04 Thread Ross Walker
On Sun, Jan 3, 2010 at 1:59 AM, Brent Jones br...@servuhome.net wrote: On Wed, Dec 30, 2009 at 9:35 PM, Ross Walker rswwal...@gmail.com wrote: On Dec 30, 2009, at 11:55 PM, Steffen Plotner swplot...@amherst.edu wrote: Hello, I was doing performance testing, validating zvol performance

Re: [zfs-discuss] repost - high read iops

2009-12-30 Thread Ross Walker
On Wed, Dec 30, 2009 at 12:35 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 29 Dec 2009, Ross Walker wrote: Some important points to consider are that every write to a raidz vdev must be synchronous.  In other words, the write needs to complete on all the drives

Re: [zfs-discuss] zvol (slow) vs file (fast) performance snv_130

2009-12-30 Thread Ross Walker
On Dec 30, 2009, at 11:55 PM, Steffen Plotner swplot...@amherst.edu wrote: Hello, I was doing performance testing, validating zvol performance in particularly, and found that zvol write performance to be slow ~35-44MB/s at 1MB blocksize writes. I then tested the underlying zfs file

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Ross Walker
On Dec 29, 2009, at 7:55 AM, Brad bene...@yahoo.com wrote: Thanks for the suggestion! I have heard mirrored vdevs configuration are preferred for Oracle but whats the difference between a raidz mirrored vdev vs a raid10 setup? A mirrored raidz provides redundancy at a steep cost to

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Ross Walker
On Dec 29, 2009, at 12:36 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 29 Dec 2009, Ross Walker wrote: A mirrored raidz provides redundancy at a steep cost to performance and might I add a high monetary cost. I am not sure what a mirrored raidz is. I have never heard

Re: [zfs-discuss] Benchmarks results for ZFS + NFS, using SSD's as slog devices (ZIL)

2009-12-25 Thread Ross Walker
On Dec 25, 2009, at 6:01 PM, Jeroen Roodhart j.r.roodh...@uva.nl wrote: Hi Freddie, list, Option 4 is to re-do your pool, using fewer disks per raidz2 vdev, giving more vdevs to the pool, and thus increasing the IOps for the whole pool.
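
A worked example of option 4, for a hypothetical 24-disk shelf (the per-disk IOPS figure is illustrative):

    2 x 12-disk raidz2 -> 2 vdevs, ~2 disks' worth of random IOPS, 20 data disks
    4 x  6-disk raidz2 -> 4 vdevs, ~4 disks' worth of random IOPS, 16 data disks
    i.e. roughly double the random IOPS for the price of 4 disks of capacity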

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Ross Walker
On Dec 21, 2009, at 11:56 PM, Roman Naumenko ro...@naumenko.ca wrote: On Dec 21, 2009, at 4:09 PM, Michael Herf mbh...@gmail.com wrote: Anyone who's lost data this way: were you doing weekly scrubs, or did you find out about the simultaneous failures after not touching the bits for
