Re: [zfs-discuss] firewire card?

2009-01-28 Thread Alan Perry
Which firewire card? Any firewire card that is OHCI compliant, which is almost any add-on firewire card that you would buy new these days. The bigger question is the firewire drive that you want to use or, more precisely, the 1394-to-ATA (or SATA) bridge used by the drive. Some work better

[zfs-discuss] ZFS concat pool

2009-01-28 Thread Peter van Gemert
I have a need to create a pool that only concatenates the LUNs assigned to it. The default for a pool is stripe and the other possibilities are mirror, raidz and raidz2. Is there any way I can create concat pools? The main reason is that the underlying LUNs are already striped and we do not want to

[zfs-discuss] Is Disabling ARC on Solaris U4 possible?

2009-01-28 Thread Rob Brown
Afternoon, In order to test my storage I want to stop the caching effect of the ARC on a ZFS filesystem. I can do something similar on UFS by mounting it with the directio flag. I saw the following two options on a nevada box which presumably control it: primarycache secondarycache But I'm running
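
For reference, those two properties do exist on later Nevada/OpenSolaris builds (they are not available on Solaris 10 U4, which is the sticking point here). A minimal sketch, assuming a hypothetical dataset tank/testfs:

   # zfs set primarycache=metadata tank/testfs   (ARC keeps metadata only)
   # zfs set primarycache=none tank/testfs       (data reads bypass the ARC)
   # zfs set secondarycache=none tank/testfs     (dataset stays out of any L2ARC)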

Re: [zfs-discuss] zpool status -x strangeness

2009-01-28 Thread Ben Miller
# zpool status -xv all pools are healthy Ben What does 'zpool status -xv' show? On Tue, Jan 27, 2009 at 8:01 AM, Ben Miller mil...@eecis.udel.edu wrote: I forgot the pool that's having problems was recreated recently so it's already at zfs version 3. I just did a 'zfs upgrade -a' for

Re: [zfs-discuss] mount race condition?

2009-01-28 Thread Bob Friesenhahn
On Tue, 27 Jan 2009, Frank Cusack wrote: i was wondering if you have a zfs filesystem that mounts in a subdir in another zfs filesystem, is there any problem with zfs finding them in the wrong order and then failing to mount correctly? I have not encountered that problem here and I do have a

Re: [zfs-discuss] ZFS concat pool

2009-01-28 Thread Will Murnane
On Wed, Jan 28, 2009 at 07:37, Peter van Gemert opensola...@petervg.nl wrote: Is there any way I can create concat pools. Not that I'm aware of. However, pools that are not redundant at the zpool level (i.e., mirror or raidz{,2}) are prone to becoming irrevocably faulted; creating non-redundant

Re: [zfs-discuss] ZFS concat pool

2009-01-28 Thread Bob Friesenhahn
On Wed, 28 Jan 2009, Peter van Gemert wrote: I have a need to create a pool that only concatenates the LUNs assigned to it. The default for a pool is stripe and the other possibilities are mirror, raidz and raidz2. Zfs does concatenate vdevs, and load-shares the writes across vdevs. If each
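
For context, a pool built from already-redundant LUNs is just a flat list of devices; ZFS load-shares writes across the top-level vdevs rather than strictly concatenating them. A minimal sketch, with hypothetical device names:

   # zpool create datapool c2t0d0 c2t1d0 c2t2d0
   # zpool add datapool c2t3d0    (growing the pool appends another top-level vdev)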

[zfs-discuss] Zpool export failure or interrupt messes up mount ordering?

2009-01-28 Thread Remco Lengers
Hi, I have the following setup that worked fine for a couple of months. (root disk) - zfs rootpool (build 100) (on 2 mirrored data disks:) - datapool/export - datapool/export/home - datapool/export/fotos - datapool/export/fotos/2008 When I tried to live upgrade from build 100 to 106, things got

Re: [zfs-discuss] mount race condition?

2009-01-28 Thread Frank Cusack
On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 27 Jan 2009, Frank Cusack wrote: i was wondering if you have a zfs filesystem that mounts in a subdir in another zfs filesystem, is there any problem with zfs finding them in the wrong order and

Re: [zfs-discuss] mount race condition?

2009-01-28 Thread Richard Elling
Frank Cusack wrote: i was wondering if you have a zfs filesystem that mounts in a subdir in another zfs filesystem, is there any problem with zfs finding them in the wrong order and then failing to mount correctly? say you have pool1/data which mounts on /data and pool2/foo which mounts on
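
One workaround for cross-pool nesting like this is to take the inner filesystem out of ZFS's automatic mount pass and order the mounts explicitly. A sketch using Frank's names, assuming legacy mounts are acceptable here:

   # zfs set mountpoint=legacy pool2/foo
   # mount -F zfs pool2/foo /data/subdir/foo   (run after pool1/data is mounted,
                                                e.g. via an /etc/vfstab entry)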

Re: [zfs-discuss] ZFS concat pool

2009-01-28 Thread Richard Elling
Peter van Gemert wrote: I have a need to create a pool that only concatenates the LUNs assigned to it. The default for a pool is stripe and the other possibilities are mirror, raidz and raidz2. Is there any way I can create concat pools? The main reason is that the underlying LUNs are already

Re: [zfs-discuss] mount race condition?

2009-01-28 Thread Frank Cusack
On January 28, 2009 9:24:21 AM -0800 Richard Elling richard.ell...@gmail.com wrote: Frank Cusack wrote: i was wondering if you have a zfs filesystem that mounts in a subdir in another zfs filesystem, is there any problem with zfs finding them in the wrong order and then failing to mount

Re: [zfs-discuss] Is Disabling ARC on Solaris U4 possible?

2009-01-28 Thread Richard Elling
Rob Brown wrote: Afternoon, In order to test my storage I want to stop the caching effect of the ARC on a ZFS filesystem. I can do something similar on UFS by mounting it with the directio flag. No, not really the same concept, which is why Roch wrote

[zfs-discuss] ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()

2009-01-28 Thread Will Murnane
We have been using ZFS for user home directories for a good while now. When we discovered the problem with full filesystems not allowing deletes over NFS, we became very anxious to fix this; our users fill their quotas on a fairly regular basis, so it's important that they have a simple recourse

Re: [zfs-discuss] mount race condition?

2009-01-28 Thread Nicolas Williams
On Wed, Jan 28, 2009 at 09:32:23AM -0800, Frank Cusack wrote: On January 28, 2009 9:24:21 AM -0800 Richard Elling richard.ell...@gmail.com wrote: Frank Cusack wrote: i was wondering if you have a zfs filesystem that mounts in a subdir in another zfs filesystem, is there any problem with

Re: [zfs-discuss] mount race condition?

2009-01-28 Thread Nicolas Williams
On Wed, Jan 28, 2009 at 09:07:06AM -0800, Frank Cusack wrote: On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 27 Jan 2009, Frank Cusack wrote: i was wondering if you have a zfs filesystem that mounts in a subdir in another zfs filesystem,

[zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-28 Thread Orvar Korvar
I understand Fishworks has an L2ARC cache, which as I have understood it, is an SSD drive as a cache? I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a similar vein? Would it be easy to do? What would be the impact? Has anyone tried this?

[zfs-discuss] mounting disks

2009-01-28 Thread Garima Tripathi
Can anyone help me figure this out: I am a new user of ZFS, and recently installed 2008.11 with ZFS. Unfortunately I messed up the system and had to boot using LiveCD. In the legacy systems, it was possible to get to the boot prompt, and then mount the disk containing the / on /mnt, and then

Re: [zfs-discuss] mounting disks

2009-01-28 Thread Ethan Quach
You've got to import the pool first: # zpool import (to see the names of pools available to import) The name of the pool is likely rpool, so # zpool import -f rpool Then you mount your root dataset via zfs, or use the beadm(1M) command to mount it: # beadm list (to see the
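
The preview cuts off, but the remaining steps presumably follow the same pattern; the boot environment name below is a hypothetical stand-in:

   # beadm list                     (to see the boot environments available)
   # beadm mount opensolaris /mnt   (to mount that BE's root dataset at /mnt)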

Re: [zfs-discuss] mounting disks

2009-01-28 Thread Garima Tripathi
Thanks a lot Ethan - that helped! -Garima Ethan Quach wrote: You've got to import the pool first: # zpool import (to see the names of pools available to import) The name of the pool is likely rpool, so # zpool import -f rpool Then you mount your root dataset via zfs, or use the

Re: [zfs-discuss] zfs send -R slow

2009-01-28 Thread BJ Quinn
What about when I pop in the drive to be resilvered, but right before I add it back to the mirror, will Solaris get upset that I have two drives both with the same pool name? No, you have to do a manual import. What you mean is that if Solaris/ZFS detects a drive with an identical pool name
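
To make the import step concrete: nothing is attached automatically, and a name collision can be sidestepped at import time by renaming, or by importing via the numeric pool ID that 'zpool import' lists. A sketch with hypothetical pool names:

   # zpool import              (lists importable pools with their names and IDs)
   # zpool import tank tank2   (imports the pool named tank under the new name tank2)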

Re: [zfs-discuss] zfs send -R slow

2009-01-28 Thread Chris Ridd
On 28 Jan 2009, at 19:40, BJ Quinn wrote: What about when I pop in the drive to be resilvered, but right before I add it back to the mirror, will Solaris get upset that I have two drives both with the same pool name? No, you have to do a manual import. What you mean is that if

[zfs-discuss] destroy means destroy, right?

2009-01-28 Thread Jacob Ritorto
Hi, I just said zfs destroy pool/fs, but meant to say zfs destroy pool/junk. Is 'fs' really gone? thx jake

[zfs-discuss] usb 2.0 card (was: firewire card?)

2009-01-28 Thread Frank Cusack
ok, how about a 4 port PCIe usb2.0 card that works?

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-28 Thread Richard Elling
Orvar Korvar wrote: I understand Fishworks has a L2ARC cache, which as I have understood it, is a SSD drive as a cache? Fishworks is an engineering team, I hear they have many L2ARCs in their lab :-) Yes, the Sun Storage 7000 series systems can have read-optimized SSDs for use as L2ARC

[zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-01-28 Thread BJ Quinn
I have two servers set up, with two drives each. The OS is stored on one drive, and the data on the second drive. I have SNDR replication set up between the two servers for the data drive only. I'm running out of space on my data drive, and I'd like to do a simple zpool attach command to add

Re: [zfs-discuss] Unable to destroy a pool

2009-01-28 Thread Ramesh Mudradi
bash-3.00# uname -a SunOS opf-01 5.10 Generic_13-01 sun4v sparc SUNW,T5140 It has dual port SAS HBA connected to a dual controller ST2530. Storage is connected to two 5140's. Tried exporting the pool to other node and tried destroying without any luck. thanks ramesh

Re: [zfs-discuss] firewire card?

2009-01-28 Thread Miles Nordin
ap == Alan Perry alan.pe...@sun.com writes: ap the firewire drive that you want to use or, more precisely, ap the 1394-to-ATA (or SATA) bridge for me Oxford 911 worked well, and PL-3507 crashed daily and needed a reboot of the case to come back. Prolific released new firmware, but it

Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-01-28 Thread Jim Dunham
BJ Quinn wrote: I have two servers set up, with two drives each. The OS is stored on one drive, and the data on the second drive. I have SNDR replication set up between the two servers for the data drive only. I'm running out of space on my data drive, and I'd like to do a simple

Re: [zfs-discuss] mount race condition?

2009-01-28 Thread Miles Nordin
fc == Frank Cusack fcus...@fcusack.com writes: fc say you have pool1/data which mounts on /data and pool2/foo fc which mounts on /data/subdir/foo, From the rest of the thread I guess the mounts aren't reordered across pool boundaries, but I have this problem even for mount-ordering

[zfs-discuss] Issue with drive replacement

2009-01-28 Thread Cuyler Dingwell
In the process of replacing a raidz1 of four 500GB drives with four 1.5TB drives on the third one I ran into an interesting issue. The process was to remove the old drive, put the new drive in and let it rebuild. The problem was the third drive I put in had a hardware fault. That caused both
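
For reference, the per-disk cycle in a rolling raidz1 replacement generally looks like the following; device names are hypothetical, and each resilver should finish before the next swap:

   # zpool replace tank c1t3d0   (new disk occupies the failed disk's slot)
   # zpool status tank           (watch the resilver run to completion)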

Re: [zfs-discuss] destroy means destroy, right?

2009-01-28 Thread bdebel...@intelesyscorp.com
Recovering Destroyed ZFS Storage Pools. You can use the zpool import -D command to recover a storage pool that has been destroyed. http://docs.sun.com/app/docs/doc/819-5461/gcfhw?a=view

Re: [zfs-discuss] destroy means destroy, right?

2009-01-28 Thread Nathan Kroenert
I'm no authority, but I believe it's gone. Some of the others on the list might have some funky thoughts, but I would suggest that if you have already done any other I/O's to the disk that you have likely rolled past the point of no return. Anyone else care to comment? As a side note, I had a

Re: [zfs-discuss] Is Disabling ARC on Solaris U4 possible?

2009-01-28 Thread Nathan Kroenert
Also - My experience with a very small ARC is that your performance will stink. ZFS is an advanced filesystem that IMO makes some assumptions about capability and capacity of current hardware. If you don't give it what it's expecting, your results may be equally unexpected. If you are keen to

Re: [zfs-discuss] destroy means destroy, right?

2009-01-28 Thread Nathan Kroenert
He's not trying to recover a pool - Just a filesystem... :) bdebel...@intelesyscorp.com wrote: Recovering Destroyed ZFS Storage Pools. You can use the zpool import -D command to recover a storage pool that has been destroyed. http://docs.sun.com/app/docs/doc/819-5461/gcfhw?a=view

Re: [zfs-discuss] destroy means destroy, right?

2009-01-28 Thread Nicolas Williams
On Wed, Jan 28, 2009 at 02:11:54PM -0800, bdebel...@intelesyscorp.com wrote: Recovering Destroyed ZFS Storage Pools. You can use the zpool import -D command to recover a storage pool that has been destroyed. http://docs.sun.com/app/docs/doc/819-5461/gcfhw?a=view But the OP destroyed a

Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-01-28 Thread BJ Quinn
The means to specify this is sndradm -nE ..., where 'E' is equal enabled. Got it. Nothing on the disk, nothing to replicate (yet). The manner in which SNDR can guarantee that two or more volumes are write-order consistent as they are replicated is to place them in the same I/O consistency group.
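
As an illustration only, an equal-enable of an added volume pair in the same group might look like this; the hosts, devices, bitmap volumes, and group name are all hypothetical, so check the set syntax against sndradm(1M):

   # sndradm -nE serverA /dev/rdsk/c1t2d0s0 /dev/rdsk/c1t3d0s0 \
       serverB /dev/rdsk/c1t2d0s0 /dev/rdsk/c1t3d0s0 ip sync g datagrp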

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-28 Thread Mark J Musante
On Wed, 28 Jan 2009, Richard Elling wrote: Orvar Korvar wrote: I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a similar vein? Would it be easy to do? Yes. To be specific, you use the 'cache' argument to zpool, as in: zpool create pool ... cache cache-device

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-28 Thread Cindy . Swearingen
Orvar, In an existing RAIDZ configuration, you would add the cache device like this: # zpool add pool-name cache device-name Currently, cache devices are only supported in the OpenSolaris and SXCE releases. The important thing is determining whether the cache device would improve your
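
Concretely, with a hypothetical SSD at c3t0d0, the add would be:

   # zpool add tank cache c3t0d0
   # zpool status tank   (the SSD shows up under a separate cache heading)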

Re: [zfs-discuss] mount race condition?

2009-01-28 Thread Kyle McDonald
On 1/28/2009 12:16 PM, Nicolas Williams wrote: On Wed, Jan 28, 2009 at 09:07:06AM -0800, Frank Cusack wrote: On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 27 Jan 2009, Frank Cusack wrote: i was wondering if you have a zfs

Re: [zfs-discuss] ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()

2009-01-28 Thread Chris Kirby
On Jan 28, 2009, at 11:49 AM, Will Murnane wrote: (on the client workstation) wil...@chasca:~$ dd if=/dev/urandom of=bigfile dd: closing output file `bigfile': Disk quota exceeded wil...@chasca:~$ rm bigfile rm: cannot remove `bigfile': Disk quota exceeded Will, I filed a CR on this
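
Until that is fixed, the workaround usually suggested is to truncate the file before unlinking it, since truncation frees blocks without needing quota headroom; a sketch from the client side:

   $ cat /dev/null > bigfile   (truncate in place, releasing its blocks)
   $ rm bigfile                (the unlink now succeeds)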

Re: [zfs-discuss] firewire card?

2009-01-28 Thread David Magda
On Jan 28, 2009, at 16:39, Miles Nordin wrote: Oxford 911 seems to describe a brand of chips, not a specific chip, but it's been a good brand, and it's a very old brand for firewire. As an added bonus this chipset allows multiple logins so it can be used to experiment with things like Oracle

Re: [zfs-discuss] destroy means destroy, right?

2009-01-28 Thread Christine Tran
On Wed, Jan 28, 2009 at 5:18 PM, Nathan Kroenert nathan.kroen...@sun.com wrote: As a side note, I had a look for anything that looked like a CR for zfs destroy / undestroy and could not find one. Anyone interested in me submitting an RFE to have something like a zfs undestroy pool/fs

Re: [zfs-discuss] ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()

2009-01-28 Thread Will Murnane
On Wed, Jan 28, 2009 at 19:04, Chris Kirby chris.ki...@sun.com wrote: On Jan 28, 2009, at 11:49 AM, Will Murnane wrote: (on the client workstation) wil...@chasca:~$ dd if=/dev/urandom of=bigfile dd: closing output file `bigfile': Disk quota exceeded wil...@chasca:~$ rm bigfile rm: cannot

Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting

2009-01-28 Thread Brad Hill
Yes. I have disconnected the bad disk and booted with nothing in the slot, and also with known good replacement disk in on the same sata port. Doesn't change anything. Running 2008.11 on the box and 2008.11 snv_101b_rc2 on the LiveCD. I'll give it a shot booting from the latest build and see

[zfs-discuss] ZFS extended ACL

2009-01-28 Thread Christine Tran
What is wrong with this? # chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache chmod: invalid mode: `A+user:webservd:add_file/write_data/execute:allow' Try `chmod --help' for more information. This works in a zone, works on S10u5, does not work on OpenSolaris2008.11. CT

Re: [zfs-discuss] ZFS extended ACL

2009-01-28 Thread Christine Tran
On Wed, Jan 28, 2009 at 11:07 PM, Christine Tran christine.t...@gmail.com wrote: What is wrong with this? # chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache chmod: invalid mode: `A+user:webservd:add_file/write_data/execute:allow' Try `chmod --help' for more information.
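
The error text (Try `chmod --help') is GNU chmod's, which suggests /usr/gnu/bin precedes /usr/bin in the PATH on 2008.11, and GNU chmod does not understand NFSv4 ACL syntax. Assuming that is what is happening here, invoking the native binary by full path works:

   # /usr/bin/chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache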

Re: [zfs-discuss] ZFS extended ACL

2009-01-28 Thread Ross
Hit this myself. I could be wrong, but from memory I think the paths are ok if you're a normal user, it's just root that's messed up.