Re: [zfs-discuss] ZFS on Ubuntu

2010-06-28 Thread Joe Little
All true. I just saw too many needing Ubuntu and ZFS and thought to state the obvious, in case the patch set for Nexenta happens to differ enough to provide a working set. I've had Nexenta succeed where OpenSolaris quarterly releases failed, and vice versa. On Jun 27, 2010, at 9:54 PM, Erik Trimble

Re: [zfs-discuss] ZFS on Ubuntu

2010-06-27 Thread Joe Little
Of course, Nexenta OS is a build of Ubuntu on an OpenSolaris kernel. On Jun 26, 2010, at 12:27 AM, Freddie Cash fjwc...@gmail.com wrote: On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles merloc...@hotmail.com wrote: What supporting applications are there on Ubuntu for RAIDZ? None. Ubuntu

Re: [zfs-discuss] Extremely bad performance - hw failure?

2009-12-27 Thread Joe Little
I've had this happen to me too. I found some dtrace scripts at the time that showed the file system was spending too much time finding available 128k blocks or the like, as each disk was nearly full, even though combined I still had 140GB left of my 3TB pool. The SPA code I believe it was
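A couple of quick checks that fit this situation (a sketch; exact column names vary by release, and the pool name 'tank' is illustrative):

    zpool list tank          # overall capacity and percent full
    zpool iostat -v tank     # per-vdev split of used vs. available space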

Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Joe Little
On Thu, Jun 5, 2008 at 9:26 PM, Tim [EMAIL PROTECTED] wrote: On Thu, Jun 5, 2008 at 11:12 PM, Joe Little [EMAIL PROTECTED] wrote: On Thu, Jun 5, 2008 at 8:16 PM, Tim [EMAIL PROTECTED] wrote: On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh [EMAIL PROTECTED] wrote: Hey guys, please

Re: [zfs-discuss] SATA controller suggestion

2008-06-05 Thread Joe Little
On Thu, Jun 5, 2008 at 8:16 PM, Tim [EMAIL PROTECTED] wrote: On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh [EMAIL PROTECTED] wrote: Hey guys, please excuse me in advance if I say or ask anything stupid :) Anyway, Solaris newbie here. I've built for myself a new file server to use at

Re: [zfs-discuss] cannot delete file when fs 100% full

2008-05-30 Thread Joe Little
On Fri, May 30, 2008 at 7:43 AM, Paul Raines [EMAIL PROTECTED] wrote: It seems that when a zfs filesystem with a reservation/quota is 100% full, users can no longer even delete files to fix the situation, getting errors like these: $ rm rh.pm6895.medial.V2.tif rm: cannot remove `rh.pm6895.medial.V2.tif':
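Workarounds that have circulated for this situation, sketched below; the dataset and snapshot names are illustrative, and truncating in place only helps if the file's blocks are not also held by a snapshot:

    cat /dev/null > rh.pm6895.medial.V2.tif   # truncate in place to free its blocks
    rm rh.pm6895.medial.V2.tif                # the unlink usually succeeds now
    zfs destroy tank/fs@oldest-snapshot       # or free space by dropping a snapshot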

Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-30 Thread Joe Little
On Fri, May 30, 2008 at 6:30 AM, Jeb Campbell [EMAIL PROTECTED] wrote: Ok, here is where I'm at: My install of OS 2008.05 (snv_86?) will not even come up in single user. The OS 2008.05 live cd comes up fine, but I can't import my old pool b/c of the missing log (and I have to import to fix

Re: [zfs-discuss] [osol-help] 1TB ZFS thin provisioned partition prevents Opensolaris from booting.

2008-05-30 Thread Joe Little
On Fri, May 30, 2008 at 7:07 AM, Hugh Saunders [EMAIL PROTECTED] wrote: On Fri, May 30, 2008 at 10:37 AM, Akhilesh Mritunjai [EMAIL PROTECTED] wrote: I think it's right. You'd have to move to a 64 bit kernel. Any reasons to stick to a 32 bit kernel ? My reason would be lack of 64bit

Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-29 Thread Joe Little
On Thu, May 29, 2008 at 7:25 PM, Jeb Campbell [EMAIL PROTECTED] wrote: Meant to add that zpool import -f pool doesn't work b/c of the missing log vdev. All the other disks are there and show up with zpool import, but it won't import. Is there any way a util could clear the log device vdev
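Later ZFS releases added an import option for exactly this case; a sketch, assuming the pool is named 'tank' (the flag did not exist in the builds discussed here):

    zpool import -m tank     # import even though the log device is missing
    zpool status tank        # the absent slog shows up as unavailable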

Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-29 Thread Joe Little
On Thu, May 29, 2008 at 8:59 PM, Joe Little [EMAIL PROTECTED] wrote: On Thu, May 29, 2008 at 7:25 PM, Jeb Campbell [EMAIL PROTECTED] wrote: Meant to add that zpool import -f pool doesn't work b/c of the missing log vdev. All the other disks are there and show up with zpool import

[zfs-discuss] slog devices don't resilver correctly

2008-05-27 Thread Joe Little
This past weekend, my holiday was ruined due to a log device replacement gone awry. I posted all about it here: http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html In a nutshell, a resilver of a single log device with itself, due to the fact one can't remove a log device
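For reference, separate log device removal did eventually land in later ZFS releases; a sketch, with illustrative device names:

    zpool remove tank c3t0d0      # detach the slog; the ZIL falls back to the main pool
    zpool status tank             # confirm the 'logs' section is gone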

Re: [zfs-discuss] indiana as nfs server: crash due to zfs

2008-05-27 Thread Joe Little
On Mon, May 26, 2008 at 6:10 AM, Gerard Henry [EMAIL PROTECTED] wrote: hello all, i have indiana freshly installed on a sun ultra 20 machine. It only does nfs server. During one night, the kernel had crashed, and i got this messages: May 22 02:18:57 ultra20 unix: [ID 836849 kern.notice]

Re: [zfs-discuss] slog devices don't resilver correctly

2008-05-27 Thread Joe Little
against the log device, as you can't remove it, and the replacement as you see is touchy at best. I know the larger, more general vdev evacuation is ongoing, but if it is simple, log evacuation would make logs useful now instead of waiting. - Eric On Tue, May 27, 2008 at 01:13:47PM -0700, Joe Little

Re: [zfs-discuss] slog devices don't resilver correctly

2008-05-27 Thread Joe Little
point there were plans to do this as a separate piece of work (since the vdev changes are needed for the general case anyway), but I don't know whether this is still the case. - Eric On Tue, May 27, 2008 at 01:13:47PM -0700, Joe Little wrote: This past weekend, but holiday was ruined due

[zfs-discuss] zfs mount i/o error and workarounds

2008-04-17 Thread Joe Little
Hello list, We discovered a failed disk with checksum errors. Took out the disk and resilvered, which reported many errors. A few of my subvolumes in the pool won't mount anymore, with zpool import poolname reporting: cannot mount 'poolname/proj': I/O error. OK, we have a problem. I can
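A typical triage sequence for this state, sketched with the pool name from the message; it identifies the damaged objects but cannot repair data that has no surviving replica:

    zpool scrub poolname
    zpool status -v poolname      # lists datasets/files with persistent errors
    zfs mount poolname/proj       # retry the failing mount afterwards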

Re: [zfs-discuss] How many ZFS pools is it sensible to use on a single server?

2008-04-12 Thread Joe Little
On Tue, Apr 8, 2008 at 9:55 AM, [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote on 04/08/2008 11:22:53 AM: In our environment, the politically and administratively simplest approach to managing our storage is to give each separate group at least one ZFS pool of their own (into which

[zfs-discuss] odd slog behavior on B70

2007-11-26 Thread Joe Little
I was playing with a Gigabyte i-RAM card and found out it works great to improve overall performance when there are a lot of writes of small files over NFS to such a ZFS pool. However, I noted a frequent situation during periods of long writes of small files over NFS. Here's a snippet of iostat

Re: [zfs-discuss] odd slog behavior on B70

2007-11-26 Thread Joe Little
is that this is a ramdisk so to speak, albeit connected via SATA-I to the sil3124. Any way to isolate this further? Any way to limit i/o timeouts to a drive? This is just two sticks of RAM... ms would be fine :) -- richard Joe Little wrote: I was playing with a Gigabyte i-RAM card and found out it works

Re: [zfs-discuss] odd slog behavior on B70

2007-11-26 Thread Joe Little
On Nov 26, 2007 7:57 PM, Richard Elling [EMAIL PROTECTED] wrote: Joe Little wrote: On Nov 26, 2007 7:00 PM, Richard Elling [EMAIL PROTECTED] wrote: I would expect such iostat output from a device which can handle only a single queued I/O to the device (eg. IDE driver) and an I/O

Re: [zfs-discuss] raidz DEGRADED state

2007-11-20 Thread Joe Little
On Nov 20, 2007 6:34 AM, MC [EMAIL PROTECTED] wrote: So there is no current way to specify the creation of a 3 disk raid-z array with a known missing disk? Can someone answer that? Or does the zpool command NOT accommodate the creation of a degraded raidz array? can't started
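The workaround that has circulated for this is to stand a sparse file in for the missing disk and then offline it; a sketch with illustrative names and sizes:

    mkfile -n 750g /var/tmp/fakedisk            # sparse file sized like the real disks
    zpool create tank raidz c1t0d0 c1t1d0 /var/tmp/fakedisk
    zpool offline tank /var/tmp/fakedisk        # pool is now DEGRADED but usable
    zpool replace tank /var/tmp/fakedisk c1t2d0 # later, when the third disk arrives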

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-18 Thread Joe Little
On Nov 18, 2007 1:44 PM, Richard Elling [EMAIL PROTECTED] wrote: one more thing... Joe Little wrote: I have historically noticed that in ZFS, whenever there is a heavy writer to a pool via NFS, the reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, whereby

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-17 Thread Joe Little
On Nov 16, 2007 10:41 PM, Neil Perrin [EMAIL PROTECTED] wrote: Joe Little wrote: On Nov 16, 2007 9:13 PM, Neil Perrin [EMAIL PROTECTED] wrote: Joe, I don't think adding a slog helped in this case. In fact I believe it made performance worse. Previously the ZIL would be spread out

[zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-16 Thread Joe Little
I have historically noticed that in ZFS, whenever there is a heavy writer to a pool via NFS, the reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, whereby a directory of files, including some large ones 100+MB in size, being written can cause other clients over NFS to pause
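For context, the slog being tested in this thread is attached like this (a sketch; the device name is illustrative):

    zpool add tank log c3t0d0     # dedicate a separate intent log device
    zpool status tank             # it appears under a separate 'logs' section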

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-16 Thread Joe Little
keeps the ZIL pegged and prohibits reads, it's all for nought. Is there any way to tune/configure the ZFS/NFS combination to balance reads/writes so as not to starve one for the other? It's either feast or famine, or so tests have shown. Neil. Joe Little wrote: I have historically noticed that in ZFS

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-16 Thread Joe Little
On Nov 16, 2007 9:17 PM, Joe Little [EMAIL PROTECTED] wrote: On Nov 16, 2007 9:13 PM, Neil Perrin [EMAIL PROTECTED] wrote: Joe, I don't think adding a slog helped in this case. In fact I believe it made performance worse. Previously the ZIL would be spread out over all devices but now

Re: [zfs-discuss] first public offering of NexentaStor

2007-11-07 Thread Joe Little
Not for NexentaStor as yet to my knowledge. I'd like to caution that the target of the initial product release is digital archiving/tiering/etc and is not necessarily primary NAS usage, though it can be used as such for those so inclined. However, interested parties should contact them as they

Re: [zfs-discuss] first public offering of NexentaStor

2007-11-02 Thread Joe Little
On 11/2/07, MC [EMAIL PROTECTED] wrote: I consider myself an early adopter of ZFS and pushed it hard on this list and in real life with regard to iSCSI integration, zfs performance issues with latency thereof, and how best to use it with NFS. Well, I finally get to talk more about

Re: [zfs-discuss] Backport of vfs_zfsacl.c to samba 3.0.26a, [and NexentaStor]

2007-11-02 Thread Joe Little
On 11/2/07, Rob Logan [EMAIL PROTECTED] wrote: I'm confused by this and NexentaStor... wouldn't it be better to use b77? with: Heads Up: File system framework changes (supplement to CIFS' head's up) Heads Up: Flag Day (Addendum) (CIFS Service) Heads Up: Flag Day (CIFS Service)

[zfs-discuss] first public offering of NexentaStor

2007-11-01 Thread Joe Little
I consider myself an early adopter of ZFS and pushed it hard on this list and in real life with regard to iSCSI integration, zfs performance issues with latency thereof, and how best to use it with NFS. Well, I finally get to talk more about the ZFS-based product I've been beta testing for quite

Re: [zfs-discuss] Announcing NexentaCP(b65) with ZFS/Boot integrated installer

2007-06-07 Thread Joe Little
On 6/7/07, Al Hopper [EMAIL PROTECTED] wrote: On Wed, 6 Jun 2007, Erast Benson wrote: Announcing new direction of Open Source NexentaOS development: NexentaCP (Nexenta Core Platform). NexentaCP is Dapper/LTS-based core Operating System Platform distributed as a single-CD ISO, integrates

Re: [zfs-discuss] Re: Re: .zfs snapshot directory in all directories

2007-02-28 Thread Joe Little
On 2/27/07, Eric Haycraft [EMAIL PROTECTED] wrote: I am no scripting pro, but I would imagine it would be fairly simple to create a script and batch it to make symlinks in all subdirectories. I've done something similar using NFS aggregation products. The real problem is when you export,

Re: [zfs-discuss] zfs corruption -- odd inum?

2007-02-11 Thread Joe Little
build 57. Jeff On Sat, Feb 10, 2007 at 05:18:05PM -0800, Joe Little wrote: So, I'm attempting to find the inode from the result of a zpool status -v: errors: The following persistent errors have been detected: DATASET OBJECT RANGE cc 21e382 lvl=0 blkid=0 Well

[zfs-discuss] zfs corruption -- odd inum?

2007-02-10 Thread Joe Little
So, I'm attempting to find the inode from the result of a zpool status -v: errors: The following persistent errors have been detected: DATASET OBJECT RANGE cc 21e382 lvl=0 blkid=0 Well, 21e382 appears not to be a valid number for find . -inum blah Any suggestions?
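The OBJECT column from zpool status -v is printed in hex, while find -inum wants decimal, which would explain the "not a valid number" result; a sketch, assuming dataset cc is mounted at /cc:

    printf '%d\n' 0x21e382        # prints 2220930
    find /cc -inum 2220930 -print
    zdb -dddd cc 2220930          # or ask zdb to describe the object directly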

Re: Re[2]: [zfs-discuss] 118855-36 ZFS

2007-02-05 Thread Joe Little
On 2/5/07, Robert Milkowski [EMAIL PROTECTED] wrote: Hello Casper, Monday, February 5, 2007, 2:32:49 PM, you wrote: Hello zfs-discuss, I've patched a U2 system to 118855-36. Several ZFS-related bug IDs should be covered between -19 and -36, like HotSpare support. However despite -36 is

Re: [zfs-discuss] Re: What SATA controllers are people using for ZFS?

2007-02-01 Thread Joe Little
On 2/1/07, Al Hopper [EMAIL PROTECTED] wrote: On Thu, 1 Feb 2007, Tom Buskey wrote: [i] I got an Addonics eSata card. Sata 3.0. PCI *or* PCI-X. Works right off the bat w/ 10u3. No firmware update needed. It was $130. But I don't pull out my hair and I can use it if I upgrade my server for

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-24 Thread Joe Little
On 1/24/07, Jonathan Edwards [EMAIL PROTECTED] wrote: On Jan 24, 2007, at 09:25, Peter Eriksson wrote: too much of our future roadmap, suffice it to say that one should expect much, much more from Sun in this vein: innovative software and innovative hardware working together to deliver

Re: [zfs-discuss] What SATA controllers are people using for ZFS?

2006-12-21 Thread Joe Little
and specific models, and the driver used? Looks like there may be stability issues with the marvell, which appear to go unanswered.. On 12/21/06, Jason J. W. Williams [EMAIL PROTECTED] wrote: Hi Naveen, I believe the newer LSI cards work pretty well with Solaris. Best Regards, Jason On

Re: [zfs-discuss] What SATA controllers are people using for ZFS?

2006-12-21 Thread Joe Little
On 12/21/06, Al Hopper [EMAIL PROTECTED] wrote: On Thu, 21 Dec 2006, Joe Little wrote: and specific models, and the driver used? Looks like there may be stability issues with the marvell, which appear to go unanswered.. I've tested a box running two Marvell based 8-port controllers (which

[zfs-discuss] B54 and marvell cards

2006-12-20 Thread Joe Little
We just put together a new system for ZFS use at a company, and twice in one week we've had the system wedge. You can log on, but the zpools are hosed, and a reboot never occurs if requested since it can't unmount the zfs volumes. So, only a power cycle works. In both cases, we get this: Dec 20

[zfs-discuss] Re: B54 and marvell cards

2006-12-20 Thread Joe Little
On 12/20/06, Joe Little [EMAIL PROTECTED] wrote: We just put together a new system for ZFS use at a company, and twice in one week we've had the system wedge. You can log on, but the zpools are hosed, and a reboot never occurs if requested since it can't unmount the zfs volumes. So, only a power

[zfs-discuss] Re: B54 and marvell cards

2006-12-20 Thread Joe Little
Some further joy: http://bugs.opensolaris.org/view_bug.do?bug_id=6504404 On 12/20/06, Joe Little [EMAIL PROTECTED] wrote: On 12/20/06, Joe Little [EMAIL PROTECTED] wrote: We just put together a new system for ZFS use at a company, and twice in one week we've had the system wedge. You can log

Re: [zfs-discuss] poor NFS/ZFS performance

2006-11-22 Thread Joe Little
On 11/22/06, Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] wrote: On Nov 22, 2006, at 4:11 PM, Al Hopper wrote: No problem there! ZFS rocks. NFS/ZFS is a bad combination. Has anyone tried sharing a ZFS fs using samba or afs or something else besides nfs? Do we have the same issues?

Re: [zfs-discuss] Best version of Solaris 10 fro ZFS ?

2006-10-27 Thread Joe Little
The latest OpenSolaris release? Perhaps Nexenta in the end is the way to best deliver/maintain that. On 10/27/06, David Blacklock [EMAIL PROTECTED] wrote: What is the current recommended version of Solaris 10 for ZFS ? -thanks, -Dave ___ zfs-discuss

Re: Re: [zfs-discuss] marvel cards.. as recommended

2006-09-13 Thread Joe Little
On 9/12/06, James C. McPherson [EMAIL PROTECTED] wrote: Joe Little wrote: So, people here recommended the Marvell cards, and one even provided a link to acquire them for SATA jbod support. Well, this is what the latest bits (B47) say: Sep 12 13:51:54 vram marvell88sx: [ID 679681

Re: [zfs-discuss] Re: Re: marvel cards.. as recommended

2006-09-13 Thread Joe Little
Yeah. I got the message from a few others, and we are hoping to return it and buy the newer one. I'm sort of surprised by the limited set of SATA RAID or JBOD cards that one can actually use. Even the ones linked to on this list sometimes aren't supported :). I need to get up and running like

[zfs-discuss] marvel cards.. as recommended

2006-09-12 Thread Joe Little
So, people here recommended the Marvell cards, and one even provided a link to acquire them for SATA jbod support. Well, this is what the latest bits (B47) say: Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.warning] WARNING: marvell88sx0: Could not attach, unsupported chip stepping or unable

[zfs-discuss] unaccounted for daily growth in ZFS disk space usage

2006-08-24 Thread Joe Little
We finally flipped the switch on one of our ZFS-based servers, with approximately 1TB of 2.8TB in use (3 stripes of 950GB or so, each of which is a RAID5 volume on the Adaptec card). We have snapshots every 4 hours for the first few days. If you add up the snapshot references, it appears somewhat high
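One way to see where the space is going, sketched below; the pool name is illustrative, and the usedby* properties only arrived in later ZFS releases, so just the first command applies to the build discussed here:

    zfs list -r -t snapshot -o name,used,referenced tank   # 'used' is space unique to each snapshot
    zfs get -r usedbysnapshots tank                         # later releases: per-dataset snapshot total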

Re: Re: [zfs-discuss] unaccounted for daily growth in ZFS disk space usage

2006-08-24 Thread Joe Little
On 8/24/06, Matthew Ahrens [EMAIL PROTECTED] wrote: On Thu, Aug 24, 2006 at 07:07:45AM -0700, Joe Little wrote: We finally flipped the switch on one of our ZFS-based servers, with approximately 1TB of 2.8TB in use (3 stripes of 950GB or so, each of which is a RAID5 volume on the Adaptec card). We

Re: Re: [zfs-discuss] multi-layer ZFS filesystems and exporting: my stupid question for the day

2006-08-16 Thread Joe Little
On 8/16/06, Frank Cusack [EMAIL PROTECTED] wrote: On August 16, 2006 10:25:18 AM -0700 Joe Little [EMAIL PROTECTED] wrote: Is there a way to allow simple export commands to traverse multiple ZFS filesystems for exporting? I'd hate to have to have hundreds of mounts required for every point
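For reference, sharenfs is inherited, so a single property setting shares the whole tree, but with NFSv3 each child filesystem is still a separate export the client must mount; NFSv4 mirror mounts later let clients cross those boundaries automatically. A sketch with illustrative names:

    zfs set sharenfs=rw tank/export       # children inherit the share setting
    zfs get -r sharenfs tank/export       # but each descendant is its own NFS export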

Re: Re: Re: [zfs-discuss] multi-layer ZFS filesystems and exporting: my stupid question for the day

2006-08-16 Thread Joe Little
On 8/16/06, Frank Cusack [EMAIL PROTECTED] wrote: On August 16, 2006 10:34:31 AM -0700 Joe Little [EMAIL PROTECTED] wrote: On 8/16/06, Frank Cusack [EMAIL PROTECTED] wrote: On August 16, 2006 10:25:18 AM -0700 Joe Little [EMAIL PROTECTED] wrote: Is there a way to allow simple export commands

Re: Re: [zfs-discuss] ZFS vs. Apple XRaid

2006-08-01 Thread Joe Little
and some major penalties for streaming writes of various sizes with the NFS implementation and its fsync happiness (3 fsyncs per write from an NFS client). It's all very true that it's stable/safe, but it's also very slow in various use cases! On 8/1/06, eric kustarz [EMAIL PROTECTED] wrote: Joe Little

Re: Re: [zfs-discuss] ZFS vs. Apple XRaid

2006-07-31 Thread Joe Little
On 7/31/06, Dale Ghent [EMAIL PROTECTED] wrote: On Jul 31, 2006, at 8:07 PM, eric kustarz wrote: The 2.6.x Linux client is much nicer... one thing fixed was the client doing too many commits (which translates to fsyncs on the server). I would still recommend the Solaris client but i'm sure

Re: [zfs-discuss] The ZFS Read / Write roundabout

2006-07-01 Thread Joe Little
I've always seen this curve in my tests (local disk or iSCSI) and just think it's ZFS as designed. I haven't seen much parallelism when I have multiple I/O jobs going; the filesystem seems to go mostly into one or the other mode. Perhaps per vdev (in iSCSI I'm only exposing one or two), there is

Re: Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Joe Little
On 6/27/06, Erik Trimble [EMAIL PROTECTED] wrote: Darren J Moffat wrote: Peter Rival wrote: storage arrays with the same arguments over and over without providing an answer to the customer problem doesn't do anyone any good. So. I'll restate the question. I have a 10TB database that's

Re: Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-24 Thread Joe Little
To clarify what has just been stated: with the ZIL disabled I got 4MB/sec; with the ZIL enabled I get 1.25MB/sec. On 6/23/06, Tao Chen [EMAIL PROTECTED] wrote: On 6/23/06, Roch [EMAIL PROTECTED] wrote: On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote: On 6/22/06, Jeff Bonwick [EMAIL

Re: Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-23 Thread Joe Little
On 6/23/06, Roch [EMAIL PROTECTED] wrote: Joe Little writes: On 6/22/06, Bill Moore [EMAIL PROTECTED] wrote: Hey Joe. We're working on some ZFS changes in this area, and if you could run an experiment for us, that would be great. Just do this: echo 'zil_disable/W1' | mdb -kw
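The full experiment quoted above looks roughly like this (a sketch; zil_disable is a live kernel variable on those builds and is generally picked up when a dataset is mounted, so the test filesystem is remounted between changes, and the dataset name is illustrative):

    echo 'zil_disable/W1' | mdb -kw              # turn the ZIL off
    zfs umount tank/test && zfs mount tank/test  # remount so the change takes effect
    # ... rerun the NFS/tar workload and record throughput ...
    echo 'zil_disable/W0' | mdb -kw              # restore the default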

Re: [zfs-discuss] ZFS on 32bit x86

2006-06-22 Thread Joe Little
What if your 32-bit system is just a NAS -- ZFS and NFS, nothing else? I think it would still be ideal to allow tweaking of things at runtime to make 32-bit systems more viable. On 6/21/06, Mark Maybee [EMAIL PROTECTED] wrote: Yup, you're probably running up against the limitations of 32-bit
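One commonly cited tweak for memory-constrained 32-bit systems is capping the ARC in /etc/system; a sketch with an illustrative value (it does not remove the 32-bit kernel address space limits described in the reply):

    set zfs:zfs_arc_max = 0x20000000    # cap the ARC at 512MB; takes effect after reboot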

Re: Re: [zfs-discuss] ZFS on 32bit x86

2006-06-22 Thread Joe Little
On 6/22/06, Darren J Moffat [EMAIL PROTECTED] wrote: Rich Teer wrote: On Thu, 22 Jun 2006, Joe Little wrote: Please don't top post. What if your 32bit system is just a NAS -- ZFS and NFS, nothing else? I think it would still be ideal to allow tweaking of things at runtime to make 32-bit

Re: Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-22 Thread Joe Little
On 6/22/06, Jeff Bonwick [EMAIL PROTECTED] wrote: a test against the same iSCSI targets using Linux and XFS and the NFS server implementation there gave me 1.25MB/sec writes. I was about to throw in the towel and deem ZFS/NFS as unusable until B41 came along and at least gave me 1.25MB/sec.

Re: [zfs-discuss] ZFS on 32bit x86

2006-06-22 Thread Joe Little
I guess the only hope is to find pin-compatible Xeons that are 64-bit to replace what is a large chassis with 24 slots of disks that has a specific motherboard form factor, etc. We have 6 of these things from a government grant that must be used for the stated purpose. So, yes, we can buy product,

Re: Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-22 Thread Joe Little
don't have time, no big deal. --Bill On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote: On 6/22/06, Jeff Bonwick [EMAIL PROTECTED] wrote: a test against the same iscsi targets using linux and XFS and the NFS server implementation there gave me 1.25MB/sec writes. I was about

[zfs-discuss] zfs going out to lunch

2006-06-02 Thread Joe Little
I've been writing via tar to a pool some stuff from backup, around 500GB. It's taken quite a while, as the tar is being read from NFS. My ZFS partition in this case is a RAIDZ 3-disk job using 3 400GB SATA drives (sil3124 card). Every once in a while, a df stalls, and during that time my I/Os go

Re: [zfs-discuss] ZFS performance metric/cookbook/whitepaper

2006-06-01 Thread Joe Little
Please add to the list the differences between locally and remotely attached vdevs: FC, SCSI/SATA, or iSCSI. This is the part that is troubling me most, as there are wildly different performance characteristics when you use NFS with any of these backends with the various configs of ZFS. Another thing is

Re: Re[2]: [zfs-discuss] cluster features

2006-05-31 Thread Joe Little
Well, here's my previous summary off-list to various Solaris folk (regarding NFS serving via ZFS and iSCSI): I want to use ZFS as a NAS with no bounds on the backing hardware (not restricted to one box's capacity). Thus, there are two options: FC SAN or iSCSI. In my case, I have multi-building

Re: [zfs-discuss] cluster features

2006-05-30 Thread Joe Little
Well, I would caution at this point against the iSCSI backend if you are planning on using NFS. We took a long-winded conversation off-list and have yet to return to this list, but the gist of it is that the latency of iSCSI along with the tendency for NFS to fsync 3 times per write causes

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-11 Thread Joe Little
well, here's my first pass result: [EMAIL PROTECTED] loges1]# time tar xf /root/linux-2.2.26.tar real 114m6.662s user 0m0.049s sys 0m1.354s On 5/11/06, Roch Bourbonnais - Performance Engineering [EMAIL PROTECTED] wrote: Joe Little writes: How did you get the average time

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-08 Thread Joe Little
, XFS, JFS, etc which I've tested before) On 5/8/06, Nicolas Williams [EMAIL PROTECTED] wrote: On Fri, May 05, 2006 at 11:55:17PM -0500, Spencer Shepler wrote: On Fri, Joe Little wrote: Thanks. I'm playing with it now, trying to get the most succinct test. This is one thing that bothers me

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-05 Thread Joe Little
Thanks for the tip. In the local case, I could send to the iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time of 50 seconds (17 seconds better than UFS). However, I didn't even bother finishing the NFS client test, since it was taking a few seconds between multiple 27K files. So,

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-05 Thread Joe Little
such a different code path? On 5/5/06, Lisa Week [EMAIL PROTECTED] wrote: These may help: http://opensolaris.org/os/community/dtrace/scripts/ Check out iosnoop.d http://www.solarisinternals.com/si/dtrace/index.php Check out iotrace.d - Lisa Joe Little wrote On 05/05/06 18:59

Re: [zfs-discuss] Poor directory traversal or small file performance?

2006-05-04 Thread Joe Little
This was fixed in snv_39. Joe Little wrote on 05/04/06 15:47: I've been writing to the Solaris NFS list since I was getting some bad performance copying a large set of small files via NFS (noticeably there). We have various source trees, including a tree with many linux versions that I