Re: [zfs-discuss] ZFS extended ACL

2009-01-30 Thread Joerg Schilling
Volker A. Brandt v...@bb-c.de wrote: Given the massive success of GNU-based systems (Linux, OS X, *BSD) Ouch! Neither OS X nor *BSD are GNU-based. They do ship with GNU-related things, but that's been a long and hard battle. While that is true, this isn't going to help. Let me try to

Re: [zfs-discuss] ZFS extended ACL

2009-01-30 Thread Volker A. Brandt
Ouch! Neither OS X nor *BSD are GNU-based. They do ship with GNU-related things, but that's been a long and hard battle. While that is true, this isn't going to help. I agree. I see three possible types of Linux users that should be discussed. 1) The really dumb Linux users. These

[zfs-discuss] RFE: parsable iostat and zpool layout

2009-01-30 Thread Pål Baltzersen
I would like zpool iostat to take a -p option to output parsable statistics with absolute counters/figures that could, for example, be fed to MRTG, RRD, et al. The zpool iostat [-v] POOL 60 [N] form is great for humans but not very API-friendly; N=2 is a bit overkill and unreliable. Is this info
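
For illustration, the N=2 workaround looks roughly like this (a sketch only; the pool name tank is assumed). The first sample reports since-boot averages and must be skipped, and the values still carry human suffixes like K and M, which is exactly why a -p flag would help:

  zpool iostat tank 60 2 | awk '
      $1 == "tank" {            # data lines only; header lines filtered out
          if (++n == 2)         # skip sample 1 (since-boot averages)
              printf "used=%s avail=%s rops=%s wops=%s rbw=%s wbw=%s\n",
                     $2, $3, $4, $5, $6, $7
      }'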

Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting

2009-01-30 Thread Pål Baltzersen
Take the new disk out as well... a foreign/bad non-zero disk label may cause trouble too. I've experienced tool core dumps with a foreign disk (partition) label, which might be the case if it is a recycled replacement disk (In my case fixed by plugging the disk into a Linux desktop and blanking

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
A Linux NFS file server, with a few terabytes of fibre-attached disk, using XFS. I'm trying to get these Thors to perform at least as well as the current setup. A performance hit is very hard to explain to our users. Perhaps I missed something, but what was your previous setup? I.e. what did

Re: [zfs-discuss] ZFS extended ACL

2009-01-30 Thread Tim
On Fri, Jan 30, 2009 at 3:55 AM, Volker A. Brandt v...@bb-c.de wrote: Hmmm... I don't think a Linux user can be really dumb. He/she would not run Linux, but a certain other system. :-) My mother just ordered a netbook that came with Ubuntu. She can barely handle turning a system on. So

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Tim
On Fri, Jan 30, 2009 at 8:24 AM, Greg Mason gma...@msu.edu wrote: A Linux NFS file server, with a few terabytes of fibre-attached disk, using XFS. I'm trying to get these Thors to perform at least as well as the current setup. A performance hit is very hard to explain to our users. What

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
I should also add that this "creating many small files" issue is the ONLY case where the Thors are performing poorly, which is why I'm focusing on it. Greg Mason wrote: A Linux NFS file server, with a few terabytes of fibre-attached disk, using XFS. I'm trying to get these Thors to perform
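
For anyone wanting to reproduce the pattern Greg describes, a crude sketch run from an NFS client (mount point and file count invented for illustration); each create is a synchronous round trip, so total time divided by the file count approximates per-file latency:

  time sh -c 'i=0
  while [ $i -lt 1000 ]; do
      touch /mnt/thor/f.$i
      i=`expr $i + 1`
  done'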

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Jim Mauro
This problem only manifests itself when dealing with many small files over NFS. There is no throughput problem with the network. But there could be a _latency_ issue with the network. [snip] I've done my homework on this issue, I've ruled out the network as an issue, as well as the NFS

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
Jim Mauro wrote: This problem only manifests itself when dealing with many small files over NFS. There is no throughput problem with the network. But there could be a _latency_ issue with the network. If there was a latency issue, we would see such a problem with our existing file server

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
If there was a latency issue, we would see such a problem with our existing file server as well, which we do not. We'd also have much greater problems than just file server performance. So, like I've said, we've ruled out the network as an issue. I should also add that I've tested these

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Jim Mauro
You have SSDs for the ZIL (logzilla) enabled, and ZIL IO is what is hurting your performance... Hmmm. I'll ask the stupid question (just to get it out of the way) - is it possible that the logzilla is undersized? Did you gather data using Richard Elling's zilstat (included below)? Thanks,

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
I'll give this script a shot a little bit later today. For ZIL sizing, I'm using either 1 or 2 32G Intel X25-E SSDs in my tests, which, according to what I've read, is 2-4 times larger than the maximum that ZFS can possibly use. We've got 32G of system memory in these Thors, and (if I'm not
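
For reference, the sizing rule Greg cites appears to be the era's common guidance that a log device never needs to be larger than about half of physical memory, since that bounds the in-flight data the ZIL can hold: 32 GB RAM / 2 = 16 GB, so one 32 GB X25-E is 2x that bound and two are 4x, matching his 2-4 times figure. (Guidance from contemporary best-practices material, not from this thread.)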

Re: [zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-30 Thread Jim Mauro
So granted, tank is about 77% full (not to split hairs ;^), but in this case, 23% is 640GB of free space. I mean, it's not like 15 years ago when a file system was 2GB total, and 23% free meant a measly 460MB to allocate from. 640GB is a lot of space, and our largest writes are less than 5MB.

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Bob Friesenhahn
On Fri, 30 Jan 2009, Greg Mason wrote: A Linux NFS file server, with a few terabytes of fibre-attached disk, using XFS. I'm trying to get these Thors to perform at least as well as the current setup. A performance hit is very hard to explain to our users. I have heard that Linux NFS service

[zfs-discuss] Zpool vdev retains old name

2009-01-30 Thread Uncle bob
Hello All, I recently upgraded a test system that had a zpool (test_pool) from S10u5 to S10U6-zfsroot by simply replacing the root disks. I exported the zpool before I init 5'ed the system. On S10u5, the zpool vdevs were on c2t#d#. On S10U6-zfsroot, the zpool vdevs were on c4t#d#. I ran
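
If the pool still reports the old c2 names after the move, a common (if blunt) way to make ZFS re-derive device paths is an export followed by an import pointed at the device directory; a sketch only, since the message is truncated here:

  zpool export test_pool
  zpool import -d /dev/dsk test_pool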

[zfs-discuss] can't import pool after forced reboot

2009-01-30 Thread Frank Cusack
so ... i hate USB as well. i guess i'll have to get a SAS or fibre enclosure. (even though i only need USB2 performance.) i hot plugged a drive into my USB2 enclosure. i was adding and removing drives earlier just fine, but this time all (both) disks in the enclosure became unavailable. i

[zfs-discuss] set mountpoint but don't mount?

2009-01-30 Thread Frank Cusack
i made a mistake and created my zpool on a partition (c2t0d0p0). i can't attach another identical whole drive (c3t0d0) to this pool, i get an error that the new drive is too small (i'd have thought it would be bigger!) the mount point of the top dataset is 'none', and various datasets in the

[zfs-discuss] can't create new pool: another disk has a zpool active?

2009-01-30 Thread Frank Cusack
# rmformat Looking for devices... 1. Logical Node: /dev/rdsk/c3t0d0p0 Physical Node: /p...@0,0/pci108e,c...@2,1/stor...@1/d...@0,0 Connected Device: Ext Hard Disk Device Type: Removable 2. Logical Node: /dev/rdsk/c2t0d0p0 Physical Node:

Re: [zfs-discuss] set mountpoint but don't mount?

2009-01-30 Thread Mark J Musante
On Fri, 30 Jan 2009, Frank Cusack wrote: so, is there a way to tell zfs not to perform the mounts for data2? or another way i can replicate the pool on the same host, without exporting the original pool? There is not a way to do that currently, but I know it's coming down the road.

Re: [zfs-discuss] can't import pool after forced reboot

2009-01-30 Thread Frank Cusack
On January 30, 2009 9:52:59 AM -0800 Frank Cusack fcus...@fcusack.com wrote: pool: data2 state: UNAVAIL status: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning. action: Attach the missing device and online it using

Re: [zfs-discuss] ZFS extended ACL

2009-01-30 Thread Miles Nordin
fm == Fredrich Maney fredrichma...@gmail.com writes: fm changing the default toolset (without notification) I wouldn't wish for notification all the time and tell people they cannot move unless they notify everyone, or you will get a bunch of CYA disclaimers and still have no input. And

Re: [zfs-discuss] can't import pool after forced reboot

2009-01-30 Thread Richard Elling
Frank Cusack wrote: On January 30, 2009 9:52:59 AM -0800 Frank Cusack fcus...@fcusack.com wrote: pool: data2 state: UNAVAIL status: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning. action: Attach the missing

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Richard Elling
Jim Mauro wrote: You have SSDs for the ZIL (logzilla) enabled, and ZIL IO is what is hurting your performance... Hmmm. I'll ask the stupid question (just to get it out of the way) - is it possible that the logzilla is undersized? Did you gather data using Richard Elling's zilstat

Re: [zfs-discuss] ZFS benchmarking

2009-01-30 Thread Richard Elling
Ruslan Valiyev wrote: Hi all, I have couple of questions regarding a ZFS setup I have at home. It's six SATA disks, set up as two groups with three disks with raidz1 in each one. Here are some graphs I've made: http://job.valiyev.net/gnuplot/zfs/ The client is a Mac, I'm using NFS with

[zfs-discuss] j4200 drive carriers

2009-01-30 Thread Frank Cusack
apparently if you don't order a J4200 with drives, you just get filler sleds that won't accept a hard drive. (had to look at a parts breakdown on sunsolve to figure this out -- the docs should simply make this clear.) it looks like the sled that will accept a drive is part #570-1182. anyone know

Re: [zfs-discuss] j4200 drive carriers

2009-01-30 Thread Sean Sprague
Frank, apparently if you don't order a J4200 with drives, you just get filler sleds that won't accept a hard drive. (had to look at a parts breakdown on sunsolve to figure this out -- the docs should simply make this clear.) it looks like the sled that will accept a drive is part #570-1182.

Re: [zfs-discuss] ZFS with Rational (ClearCase VOB) Supported ???

2009-01-30 Thread Tom Buskey
I'm running ClearCase on a Solaris 10u4 system. Views & vobs. I lock the vob, snapshot /var/adm/rational, vobs, & views, then unlock the vobs. We've been able to copy the snapshot to another server & restore. I believe ClearCase is supported by Rational on ZFS also. We would not have done it

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Roch Bourbonnais
Sounds like the device is not ignoring the cache flush requests sent down by ZFS/zil commit. If the SSD is able to drain its internal buffer to flash on a power outage, then it needs to ignore the cache flush. You can do this on a per-device basis. It's kludgy tuning, but hope the
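
The per-device sd.conf knob Roch is describing varies by release, so shown here is only the coarser global switch from the same era's tuning guides. It disables cache flushes for every device, so it is safe only when all caches behind the pool are non-volatile:

  # in /etc/system (takes effect at next boot):
  set zfs:zfs_nocacheflush = 1

  # or live, on a running system:
  echo zfs_nocacheflush/W0t1 | mdb -kw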

Re: [zfs-discuss] j4200 drive carriers

2009-01-30 Thread Frank Cusack
On January 30, 2009 1:31:46 PM -0800 Frank Cusack fcus...@fcusack.com wrote: apparently if you don't order a J4200 with drives, you just get filler sleds that won't accept a hard drive. (had to look at a parts breakdown on sunsolve to figure this out -- the docs should simply make this

[zfs-discuss] ZFS root mirror / moving disks to machine with different hostid

2009-01-30 Thread Marcus Reid
Hello, My apologies if this has been discussed before or if this is the wrong place to discuss Solaris 10 U6 issues... I am investigating using ZFS as a possible replacement for SVM for root disk mirroring. So far, I have installed the system with the new ZFS option in the text installer of U6.
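
For what it's worth, the usual ZFS-root mirroring steps on x86 look roughly like this; device names are hypothetical, and the second disk needs an SMI label with a matching s0 slice:

  zpool attach rpool c1t0d0s0 c1t1d0s0
  zpool status rpool        # wait for the resilver to complete
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0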

Re: [zfs-discuss] RFE: parsable iostat and zpool layout

2009-01-30 Thread Mark J Musante
Hi Pål, CR 6420274 covers the -p part of your question. As far as kstats go, we only have them in the arc and the vdev read-ahead cache. Regards, markm

Re: [zfs-discuss] Hang on zfs import - build 107

2009-01-30 Thread Mark J Musante
On Fri, 30 Jan 2009, Ed Kaczmarek wrote: And/or step me thru the required mdb/kdb/whatever it's called stack trace dump command sequence after booting with -kd Dan Mick's got a good guide on his blog: http://blogs.sun.com/dmick/entry/diagnosing_kernel_hangs_panics_with Regards, markm

Re: [zfs-discuss] zpool status -x strangeness

2009-01-30 Thread Blake
Maybe ZFS hasn't seen an error in a long enough time that it considers the pool healthy? You could try clearing the pool and then observing. On Wed, Jan 28, 2009 at 9:40 AM, Ben Miller mil...@eecis.udel.edu wrote: # zpool status -xv all pools are healthy Ben What does 'zpool status -xv'
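
The clearing Blake suggests is a one-liner (pool name assumed):

  zpool clear tank
  zpool status -v tank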

[zfs-discuss] how to set mountpoint to default?

2009-01-30 Thread Frank Cusack
zfs set only seems to accept an absolute path, which, even if you set it to the name of the pool, isn't quite the same thing as the default. see my other thread, "set mountpoint but don't mount?".
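
If "the default" means the inherited value, zfs inherit should do it, resetting the property so the dataset derives its mountpoint from its parent again (dataset name invented):

  zfs inherit mountpoint data2/some/dataset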

Re: [zfs-discuss] can't create new pool: another disk has a zpool active?

2009-01-30 Thread Frank Cusack
On January 30, 2009 10:03:42 AM -0800 Frank Cusack fcus...@fcusack.com wrote: # zpool create data3 c3t0d1 invalid vdev specification use '-f' to override the following errors: /dev/dsk/c2t0d0s2 is part of active ZFS pool data. Please see zpool(1M). /dev/dsk/c2t0d0s8 is part of active ZFS pool

Re: [zfs-discuss] j4200 drive carriers

2009-01-30 Thread Richard Elling
Frank Cusack wrote: apparently if you don't order a J4200 with drives, you just get filler sleds that won't accept a hard drive. (had to look at a parts breakdown on sunsolve to figure this out -- the docs should simply make this clear.) it looks like the sled that will accept a drive is

[zfs-discuss] Introducing zilstat

2009-01-30 Thread Richard Elling
For those who didn't follow down the thread this afternoon, I have posted a tool called zilstat which will help you answer the question of whether a separate log might help your workload. Details start here: http://richardelling.blogspot.com/2009/01/zilstat.html Enjoy! -- richard
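
Typical usage, assuming the script takes the interval and count arguments most *stat tools do (run as root, since it is DTrace-based; see the blog post for the actual options):

  ./zilstat.ksh 10 6    # six 10-second samples of ZIL write activity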

Re: [zfs-discuss] can't create new pool: another disk has a zpool active?

2009-01-30 Thread Frank Cusack
On January 30, 2009 4:51:36 PM -0800 Frank Cusack fcus...@fcusack.com wrote: later on when i am done with the new pool (it's temporary space) i will destroy it and try to recreate it and see if i get the same error. yup. this time i couldn't attach. # zpool status | grep c.t.d.