Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Thomas Garner
These are the same as the acard devices we've discussed here previously; earlier hyperdrive models were their own design.  Very interesting, and my personal favourite, but I don't know of anyone actually reporting results yet with them as ZIL. Here's one report:

Re: [zfs-discuss] Aggregate Pool I/O

2009-01-17 Thread Thomas Garner
Are you looking for something like: kstat -c disk sd::: Someone can correct me if I'm wrong, but I think the documentation for the above should be at: http://src.opensolaris.org/source/xref/zfs-crypto/gate/usr/src/uts/common/avs/ns/sdbc/cache_kstats_readme.txt I'm not sure about the file i/o
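The `kstat` invocation quoted above can be expanded slightly; a minimal sketch of Solaris-only commands (the `-p` example and the statistic name `nread` are illustrative of the kstat module:instance:name:statistic syntax, not taken from the thread):

```shell
# Dump all per-instance I/O kstats for the sd (SCSI disk) driver class
kstat -c disk sd:::

# Parseable output narrowed to a single statistic, e.g. bytes read
# on sd instance 0 (names vary by system)
kstat -p sd:0:sd0:nread
```

These commands only exist on Solaris-derived systems, so they are shown here as a transcript rather than a runnable script.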

Re: [zfs-discuss] How to diagnose zfs - iscsi - nfs hang

2008-11-10 Thread Thomas Garner
Are these machines 32-bit by chance? I ran into similar seemingly unexplainable hangs, which Marc correctly diagnosed and have since not reappeared: http://mail.opensolaris.org/pipermail/zfs-discuss/2008-August/049994.html Thomas

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Thomas Garner
For what it's worth, I see this as well on 32-bit Xeons, 1GB RAM, and dual AOC-SAT2-MV8 (large amounts of I/O sometimes resulting in a lockup requiring a reboot, though my setup is Nexenta b85). Nothing in the logging, nor loadavg increasing significantly. It could be the regular Marvell driver

[zfs-discuss] Unbalanced write patterns

2008-07-30 Thread Thomas Garner
If I have 2 raidz's, 5x400G and a later added 5x1T, should I expect that streaming writes would go primarily to only 1 of the raidz sets? Or is this some side effect of my non-ideal hardware setup? I thought that adding additional capacity to a pool automatically would then balance writes to both
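For context on the question above: ZFS of that era allocated new writes across top-level vdevs roughly in proportion to each vdev's free space, so a freshly added (and mostly empty) 5x1T raidz will absorb the bulk of streaming writes; existing data is not rebalanced. The distribution can be observed per vdev; a hedged sketch, assuming a hypothetical pool named `tank`:

```shell
# Watch I/O distribution across the two raidz vdevs, updated every 5s
zpool iostat -v tank 5

# Compare overall pool capacity and health (per-vdev detail and column
# layout vary by release)
zpool list tank
zpool status tank
```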

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-25 Thread Thomas Garner
Thanks, Roch! Much appreciated knowing what the problem is and that a fix is in a forthcoming release. Thomas On 6/25/07, Roch - PAE [EMAIL PROTECTED] wrote: Sorry about that; looks like you've hit this: 6546683 marvell88sx driver misses wakeup for mv_empty_cv

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-24 Thread Thomas Garner
We have seen this behavior, but it appears to be entirely hardware-related: the Intel IPMI firmware swallows the NFS traffic on port 623 directly at the network hardware, so it never reaches the host. http://blogs.sun.com/shepler/entry/port_623_or_the_mount Unfortunately, this nfs hangs

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-23 Thread Thomas Garner
So it is expected behavior on my Nexenta alpha 7 server for Sun's nfsd to stop responding after 2 hours of running a bittorrent client over nfs4 from a linux client, causing zfs snapshots to hang and requiring a hard reboot to get the world back in order? Thomas There is no NFS over ZFS issue

[zfs-discuss] pool resilver oddity

2007-04-08 Thread Thomas Garner
Perhaps someone on this mailing list can shed some light onto some odd zfs circumstances I encountered this weekend. I have an array of 5 400GB drives in a raidz, running on Nexenta. One of these drives showed a SMART error (HARDWARE IMPENDING FAILURE GENERAL HARD DRIVE FAILURE [asc=5d,
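For a drive flagged by SMART as impending-failure, the usual course of action looks like the following sketch (device names `c1t3d0`/`c1t5d0` are hypothetical, and `tank` is a placeholder pool name; they do not come from the thread):

```shell
# Swap the failing disk for a new one; ZFS resilvers automatically
zpool replace tank c1t3d0 c1t5d0

# Track resilver progress and pool state
zpool status tank

# Once the resilver completes, verify all data checksums
zpool scrub tank
```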

Re: [zfs-discuss] Re: .zfs snapshot directory in all directories

2007-02-26 Thread Thomas Garner
for what purpose ? Darren's correct, it's a simple case of ease of use. Not show-stopping by any means but would be nice to have. Thomas

[zfs-discuss] .zfs snapshot directory in all directories

2007-02-25 Thread Thomas Garner
Since I have been unable to find the answer online, I thought I would ask here. Is there a knob to turn on a zfs filesystem to put the .zfs snapshot directory into all of the child directories of the filesystem, like the .snapshot directories of NetApp systems, instead of just the root of the
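The closest knob is the `snapdir` property, which controls whether `.zfs` is visible (rather than hidden) at the root of each filesystem; there is no property that projects `.zfs` into every plain subdirectory the way NetApp's `.snapshot` does. A sketch, with `tank/home` as a hypothetical dataset name:

```shell
# Make .zfs visible (default is "hidden") at the filesystem root
zfs set snapdir=visible tank/home

# Confirm the setting
zfs get snapdir tank/home
```

Each child *filesystem* gets its own `.zfs` at its root, so one workaround is to make heavily used subdirectories their own datasets.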

Re: [zfs-discuss] raidz DEGRADED state

2006-12-05 Thread Thomas Garner
So there is no current way to specify the creation of a 3 disk raid-z array with a known missing disk? On 12/5/06, David Bustos [EMAIL PROTECTED] wrote: Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500: I currently have a 400GB disk that is full of data on a linux system. If I buy

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Thomas Garner
In the same vein... I currently have a 400GB disk that is full of data on a linux system. If I buy 2 more disks and put them into a raid-z'ed zfs under solaris, is there a generally accepted way to build a degraded array with the 2 disks, copy the data to the new filesystem, and then move the
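The workaround usually suggested for this is the sparse-file trick: stand in a sparse file for the missing third disk, build the raidz, offline the file so the pool runs degraded, then `zpool replace` it with the real disk once the data has been copied off it. A sketch with hypothetical device names (`c0t1d0` etc.); the ZFS commands are commented out since they require a live Solaris system (older Solaris would use `mkfile -n 400g` instead of `truncate`):

```shell
# Sparse stand-in for the not-yet-available third disk; consumes
# almost no real space
truncate -s 400G /tmp/fakedisk.img

# Build the 3-way raidz with the file as the third member
# zpool create tank raidz c0t1d0 c0t2d0 /tmp/fakedisk.img

# Offline the file so nothing is ever written to it; the pool is now
# DEGRADED but usable
# zpool offline tank /tmp/fakedisk.img

# ...copy the data off the old 400GB disk into tank, then hand that
# disk to the pool and let it resilver
# zpool replace tank /tmp/fakedisk.img c0t3d0
```

Note the risk: while degraded, the pool has no redundancy, so a single disk failure during the copy loses everything.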