Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-29 Thread Jorgen Lundman
We just picked up the fastest SSD we could find in the local Bic Camera, which turned out to be a CSSD-SM32NI, with a claimed 95MB/s write speed. I put it in place and moved the slog over: 0m49.173s 0m48.809s So, it is slower than the CF test. This is disappointing. Everyone else

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-29 Thread Ross
Everyone else should be using the Intel X25-E. There's a massive difference between the M and E models, and for a slog it's IOPS and low latency that you need. I've heard that Sun use X25-E's, but I'm sure that original reports had them using STEC. I have a feeling the 2nd generation

[zfs-discuss] zpool import hangs up forever...

2009-07-29 Thread Pavel Kovalenko
after several errors on a QLogic HBA the pool cache was damaged and ZFS cannot import the pool; there is no disk or CPU activity during the import... #uname -a SunOS orion 5.11 snv_111b i86pc i386 i86pc # zpool import pool: data1 id: 6305414271646982336 state: ONLINE status: The pool was last

Re: [zfs-discuss] zpool import hangs up forever...

2009-07-29 Thread Victor Latushkin
On 29.07.09 13:04, Pavel Kovalenko wrote: after several errors on QLogic HBA pool cache was damaged and zfs cannot import pool, there is no any disk or cpu activity during import... #uname -a SunOS orion 5.11 snv_111b i86pc i386 i86pc # zpool import pool: data1 id: 6305414271646982336

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-29 Thread Ross
Hi James, I'll not reply in line since the forum software is completely munging your post. On the X25-E I believe there is cache, and it's not backed up. While I haven't tested it, I would expect the X25-E to have the cache turned off while used as a ZIL. The 2nd generation X25-E announced

Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-07-29 Thread Constantin Gonzalez
Hi, thank you so much for this post. This is exactly what I was looking for. I've been eyeing the M3A76-CM board, but will now look at 78 and M4A as well. Actually, not that many Asus M3A, let alone M4A boards show up yet on the OpenSolaris HCL, so I'd like to encourage everyone to share their

Re: [zfs-discuss] zpool import hangs up forever...

2009-07-29 Thread Pavel Kovalenko
fortunately, after several hours the terminal came back -- # zdb -e data1 Uberblock magic = 00bab10c version = 6 txg = 2682808 guid_sum = 14250651627001887594 timestamp = 1247866318 UTC = Sat Jul 18 01:31:58 2009 Dataset mos [META], ID 0, cr_txg 4,

[zfs-discuss] resizing zpools by growing LUN

2009-07-29 Thread Jan
Hi all, I need to know if it is possible to expand the capacity of a zpool without loss of data by growing the LUN (2TB) presented from an HP EVA to a Solaris 10 host. I know that there is a possible way in Solaris Express Community Edition, b117 with the autoexpand property. But I still work
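The answer Jan is after can be sketched as a few commands. The pool name here is a hypothetical example, and note the caveat from the question itself: the autoexpand property only exists from build 117 onward, so on Solaris 10 the export/re-import step is the usual workaround.

```shell
# Grow a zpool after the backing LUN has been enlarged on the array.
# Pool name 'datapool' is a hypothetical example.

# On builds with the autoexpand property (snv_117 and later):
zpool set autoexpand=on datapool

# On Solaris 10 without autoexpand, a re-import typically picks up
# the new LUN size (relabel with format(1M) first if needed):
zpool export datapool
zpool import datapool

# Confirm the pool now reports the larger size:
zpool list datapool
```

No data is destroyed by export/import, but it does require taking the pool offline briefly, so schedule accordingly.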

Re: [zfs-discuss] Set New File/Folder ZFS ACLs Automatically through Samba?

2009-07-29 Thread Thomas Nau
Jeff, On Tue, 28 Jul 2009, Jeff Hulen wrote: Do any of you know how to set the default ZFS ACLs for newly created files and folders when those files and folders are created through Samba? I want to have all new files and folders only inherit extended (non-trivial) ACLs that are set on the
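One commonly suggested direction for this (a sketch under assumptions, not a confirmed recipe from the thread) is to set the inheritable ACL on the share's dataset and let ZFS ACL inheritance, rather than Samba, stamp new files. The dataset and group names below are hypothetical:

```shell
# Make non-trivial ACLs flow down to newly created files and directories.
# 'tank/share' and group 'staff' are hypothetical examples.
zfs set aclinherit=passthrough tank/share
zfs set aclmode=passthrough tank/share    # aclmode is present on Solaris 10

# Grant the group full access, inherited by new files (f) and dirs (d):
chmod A+group:staff:full_set:fd:allow /tank/share
```

On the Samba side, `inherit acls = yes` in smb.conf is often paired with this so Samba does not override the inherited entries; verify the behaviour against your Samba version.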

Re: [zfs-discuss] zpool import hangs up forever...

2009-07-29 Thread Victor Latushkin
On 29.07.09 14:42, Pavel Kovalenko wrote: fortunately, after several hours terminal went back -- # zdb -e data1 Uberblock magic = 00bab10c version = 6 txg = 2682808 guid_sum = 14250651627001887594 timestamp = 1247866318 UTC = Sat Jul 18 01:31:58

Re: [zfs-discuss] zpool import hangs up forever...

2009-07-29 Thread Markus Kovero
I recently noticed that importing larger pools occupied by large amounts of data can keep zpool import running for several hours, with zpool iostat only showing some random reads now and then and iostat -xen showing quite busy disk usage. It's almost as if it goes through every bit in the pool before it goes

Re: [zfs-discuss] zpool import hangs up forever...

2009-07-29 Thread Pavel Kovalenko
Victor, after # ps -ef | grep zdb | grep -v grep root 3281 1683 1 14:22:09 pts/2 8:57 zdb -e -t 2682807 data1 I've inserted the pid after 0t: # echo "0t3281::pid2proc|::walk thread|::findstack -v" | mdb -k and got a couple of records: stack pointer for thread

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Mark J Musante
On Tue, 28 Jul 2009, Glen Gunselman wrote: # zpool list NAME SIZE USED AVAILCAP HEALTH ALTROOT zpool1 40.8T 176K 40.8T 0% ONLINE - # zfs list NAME USED AVAIL REFER MOUNTPOINT zpool1 364K 32.1T 28.8K /zpool1 This is normal, and

Re: [zfs-discuss] [indiana-discuss] zfs issues?

2009-07-29 Thread James Lever
On 29/07/2009, at 12:00 AM, James Lever wrote: CR 6865661 *HOT* Created, P1 opensolaris/triage-queue zfs scrub rpool causes zpool hang This bug I logged has been marked as related to CR 6843235 which is fixed in snv 119. cheers, James

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Glen Gunselman
IIRC zpool list includes the parity drives in the disk space calculation and zfs list doesn't. Terabyte drives are more likely 900-something GB drives thanks to that base-2 vs. base-10 confusion HD manufacturers introduced. Using that 900GB figure I get to both 40TB and 32TB for with and

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Glen Gunselman
Here is the output from my J4500 with 48 x 1 TB disks. It is almost the exact same configuration as yours. This is used for Netbackup. As Mario just pointed out, zpool list includes the parity drive in the space calculation whereas zfs list doesn't. [r...@xxx /]# zpool status Scoot,

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Glen Gunselman
This is normal, and admittedly somewhat confusing (see CR 6308817). Even if you had not created the additional zfs datasets, it still would have listed 40T and 32T. Mark, Thanks for the examples. Where would I see CR 6308817? My usual search tools aren't finding it. Glen -- This

[zfs-discuss] feature proposal

2009-07-29 Thread Andriy Gapon
What do you think about the following feature? Subdirectory is automatically a new filesystem property - an administrator turns on this magic property of a filesystem, after that every mkdir *in the root* of that filesystem creates a new filesystem. The new filesystems have default/inherited
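For contrast, this is the manual sequence the proposed property would automate today (pool and user names are hypothetical):

```shell
# Today each child filesystem must be created explicitly by an admin;
# the proposal would make a plain mkdir in the filesystem root do this.
zfs create tank/home            # parent whose children should be datasets
zfs create tank/home/alice      # instead of: mkdir /tank/home/alice
zfs list -r tank/home           # each child appears as its own dataset,
                                # with default/inherited properties
```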

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen
On Wed, 29 Jul 2009, Andriy Gapon wrote: Subdirectory is automatically a new filesystem property - an administrator turns on this magic property of a filesystem, after that every mkdir *in the root* of that filesystem creates a new filesystem. The new filesystems have default/inherited

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Mark J Musante
On Wed, 29 Jul 2009, Glen Gunselman wrote: Where would I see CR 6308817? My usual search tools aren't finding it. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6308817 Regards, markm

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Darren J Moffat
Andriy Gapon wrote: What do you think about the following feature? Subdirectory is automatically a new filesystem property - an administrator turns on this magic property of a filesystem, after that every mkdir *in the root* of that filesystem creates a new filesystem. The new filesystems have

Re: [zfs-discuss] feature proposal

2009-07-29 Thread David Magda
On Wed, July 29, 2009 10:24, Andre van Eyssen wrote: It'd either require major surgery to userland tools, including every single program that might want to create a directory, or major surgery to the kernel. The former is unworkable, the latter .. scary. How about: add a flag (-Z?) to

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Darren J Moffat
David Magda wrote: On Wed, July 29, 2009 10:24, Andre van Eyssen wrote: It'd either require major surgery to userland tools, including every single program that might want to create a directory, or major surgery to the kernel. The former is unworkable, the latter .. scary. How about: add a

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andriy Gapon
on 29/07/2009 17:24 Andre van Eyssen said the following: On Wed, 29 Jul 2009, Andriy Gapon wrote: Subdirectory is automatically a new filesystem property - an administrator turns on this magic property of a filesystem, after that every mkdir *in the root* of that filesystem creates a new

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen
On Wed, 29 Jul 2009, David Magda wrote: Which makes me wonder: is there a programmatic way to determine if a path is on ZFS? statvfs(2) -- Andre van Eyssen.

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Mark J Musante
On Wed, 29 Jul 2009, David Magda wrote: Which makes me wonder: is there a programmatic way to determine if a path is on ZFS? Yes, if it's local. Just use df -n $path and it'll spit out the filesystem type. If it's mounted over NFS, it'll just say something like nfs or autofs, though.
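Spelled out, the check Mark describes is a one-liner on Solaris; the path is a hypothetical example and the exact output format is worth verifying on your release:

```shell
# Print the filesystem type backing a path (Solaris df).
# For a ZFS-backed path this prints something like "/export/home : zfs";
# over NFS it reports nfs or autofs rather than the server-side type.
df -n /export/home
```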

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen
On Wed, 29 Jul 2009, Andriy Gapon wrote: Well, I specifically stated that this property should not be recursive, i.e. it should work only in a root of a filesystem. When setting this property on a filesystem an administrator should carefully set permissions to make sure that only trusted

[zfs-discuss] Strange errors in zpool scrub, Solaris 10u6 x86_64

2009-07-29 Thread Jim Klimov
I did a zpool scrub recently, and while it was running it reported errors and warned about restoring from backup. When the scrub is complete, it reports finishing with 0 errors though. On the next scrub some other errors are reported in different files. iostat -xne does report a few errors (1

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen
On Wed, 29 Jul 2009, Mark J Musante wrote: Yes, if it's local. Just use df -n $path and it'll spit out the filesystem type. If it's mounted over NFS, it'll just say something like nfs or autofs, though. $ df -n /opt Filesystem kbytes used avail capacity Mounted on

[zfs-discuss] LVM and ZFS

2009-07-29 Thread Peter Eriksson
I'm curious about if there are any potential problems with using LVM metadevices as ZFS zpool targets. I have a couple of situations where using a device directly by ZFS causes errors on the console about Bus and lots of stalled I/O. But as soon as I wrap that device inside an LVM metadevice

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Victor Latushkin
On 29.07.09 16:59, Mark J Musante wrote: On Tue, 28 Jul 2009, Glen Gunselman wrote: # zpool list NAME SIZE USED AVAILCAP HEALTH ALTROOT zpool1 40.8T 176K 40.8T 0% ONLINE - # zfs list NAME USED AVAIL REFER MOUNTPOINT zpool1 364K 32.1T

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Kyle McDonald
Andriy Gapon wrote: What do you think about the following feature? Subdirectory is automatically a new filesystem property - an administrator turns on this magic property of a filesystem, after that every mkdir *in the root* of that filesystem creates a new filesystem. The new filesystems have

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Darren J Moffat
Andre van Eyssen wrote: On Wed, 29 Jul 2009, Andriy Gapon wrote: Well, I specifically stated that this property should not be recursive, i.e. it should work only in a root of a filesystem. When setting this property on a filesystem an administrator should carefully set permissions to make

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Darren J Moffat
Kyle McDonald wrote: Andriy Gapon wrote: What do you think about the following feature? Subdirectory is automatically a new filesystem property - an administrator turns on this magic property of a filesystem, after that every mkdir *in the root* of that filesystem creates a new filesystem.

Re: [zfs-discuss] zpool export taking hours

2009-07-29 Thread George Wilson
fyleow wrote: I have a raidz1 tank of 5x 640 GB hard drives on my newly installed OpenSolaris 2009.06 system. I did a zpool export tank and the process has been running for 3 hours now taking up 100% CPU usage. When I do a zfs list tank it's still shown as mounted. What's going

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Nicolas Williams
On Wed, Jul 29, 2009 at 03:35:06PM +0100, Darren J Moffat wrote: Andriy Gapon wrote: What do you think about the following feature? Subdirectory is automatically a new filesystem property - an administrator turns on this magic property of a filesystem, after that every mkdir *in the

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Kyle McDonald
Darren J Moffat wrote: Kyle McDonald wrote: Andriy Gapon wrote: What do you think about the following feature? Subdirectory is automatically a new filesystem property - an administrator turns on this magic property of a filesystem, after that every mkdir *in the root* of that filesystem

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Ross
I can think of a different feature where this would be useful - storing virtual machines. With an automatic filesystem per folder, each virtual machine would be stored in its own filesystem, allowing for rapid snapshots, and instant restores of any machine. One big limitation for me of zfs is that

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-29 Thread Bob Friesenhahn
On Wed, 29 Jul 2009, Jorgen Lundman wrote: So, it is slower than the CF test. This is disappointing. Everyone else seems to use Intel X25-M, which have a write-speed of 170MB/s (2nd generation) so perhaps that is why it works better for them. It is curious that it is slower than the CF card.

Re: [zfs-discuss] zfs send/recv syntax

2009-07-29 Thread Joseph L. Casale
I apologize for replying in the middle of this thread, but I never saw the initial snapshot syntax of mypool2, which needs to be recursive (zfs snapshot -r mypool2@snap) to snapshot all the datasets in mypool2. Then, use zfs send -R to pick up and restore all the dataset properties. What was the
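The reply's suggestion, spelled out as commands (the destination pool and the `-d` flag are assumptions for illustration; the snapshot name follows the thread):

```shell
# Recursively snapshot every dataset in mypool2, then replicate the
# whole tree with its properties intact. 'mypool' as the receiving
# pool is an assumed example; adjust to your destination.
zfs snapshot -r mypool2@snap
zfs send -R mypool2@snap | zfs receive -d mypool
```

`-R` on the send side carries descendant datasets, snapshots, and properties; `-d` on the receive side recreates the source path under the destination pool.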

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Michael Schuster
On 29.07.09 07:56, Andre van Eyssen wrote: On Wed, 29 Jul 2009, Mark J Musante wrote: Yes, if it's local. Just use df -n $path and it'll spit out the filesystem type. If it's mounted over NFS, it'll just say something like nfs or autofs, though. $ df -n /opt Filesystem kbytes

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-29 Thread Richard Elling
On Jul 28, 2009, at 6:34 PM, Eric D. Mudama wrote: On Mon, Jul 27 at 13:50, Richard Elling wrote: On Jul 27, 2009, at 10:27 AM, Eric D. Mudama wrote: Can *someone* please name a single drive+firmware or RAID controller+firmware that ignores FLUSH CACHE / FLUSH CACHE EXT commands? Or worse,

[zfs-discuss] Tunable iSCSI timeouts - ZFS over iSCSI fix

2009-07-29 Thread Dave
Anyone (Ross?) creating ZFS pools over iSCSI connections will want to pay attention to snv_121 which fixes the 3 minute hang after iSCSI disk problems: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=649 Yay!

Re: [zfs-discuss] Tunable iSCSI timeouts - ZFS over iSCSI fix

2009-07-29 Thread Ross Smith
Yup, somebody pointed that out to me last week and I can't wait :-) On Wed, Jul 29, 2009 at 7:48 PM, Dave dave-...@dubkat.com wrote: Anyone (Ross?) creating ZFS pools over iSCSI connections will want to pay attention to snv_121 which fixes the 3 minute hang after iSCSI disk problems:

Re: [zfs-discuss] zfs send/recv syntax

2009-07-29 Thread Ian Collins
Joseph L. Casale wrote: I apologize for replying in the middle of this thread, but I never saw the initial snapshot syntax of mypool2, which needs to be recursive (zfs snapshot -r mypo...@snap) to snapshot all the datasets in mypool2. Then, use zfs send -R to pick up and restore all the dataset

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Scott Lawson
Glen Gunselman wrote: Here is the output from my J4500 with 48 x 1 TB disks. It is almost the exact same configuration as yours. This is used for Netbackup. As Mario just pointed out, zpool list includes the parity drive in the space calculation whereas zfs list doesn't. [r...@xxx /]#

[zfs-discuss] Install and boot from USB stick?

2009-07-29 Thread tore
Hello, I've tried to find any hard information on how to install, and boot, OpenSolaris from a USB stick. I've seen a few people write successful stories about this, but I can't seem to get it to work. The procedure: Boot from LiveCD, insert USB drive, find it using `format', start

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andriy Gapon
on 29/07/2009 17:52 Andre van Eyssen said the following: On Wed, 29 Jul 2009, Andriy Gapon wrote: Well, I specifically stated that this property should not be recursive, i.e. it should work only in a root of a filesystem. When setting this property on a filesystem an administrator should

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Roman V Shaposhnik
On Wed, 2009-07-29 at 15:06 +0300, Andriy Gapon wrote: What do you think about the following feature? Subdirectory is automatically a new filesystem property - an administrator turns on this magic property of a filesystem, after that every mkdir *in the root* of that filesystem creates a

[zfs-discuss] cleaning up cloned zones

2009-07-29 Thread Anil
I create a couple of zones. I have a zone path like this: r...@vps1:~# zfs list -r zones/cars NAME USED AVAIL REFER MOUNTPOINT zones/fans 1.22G 3.78G22K /zones/fans zones/fans/ROOT 1.22G 3.78G19K legacy zones/fans/ROOT/zbe 1.22G 3.78G 1.22G

Re: [zfs-discuss] cleaning up cloned zones

2009-07-29 Thread Edward Pilatowicz
hey anil, given that things work, i'd recommend leaving them alone. if you really want to insist on cleaning things up aesthetically then you need to do multiple zfs operation and you'll need to shutdown the zones. assuming you haven't cloned any zones (because if you did that complicates

[zfs-discuss] Solaris10+ and Online Media services

2009-07-29 Thread Brandon Barker
It seems like a lot of media services are starting to catch on about ZFS. I knew Last.fm makes use of it, and I also found out grooveshark (see this blog: http://www.facebook.com/notes.php?id=7354446700&start=200&hash=fb219332a992a64f12d200435b3d24f2 ). Grooveshark looks nice for end users as

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Pawel Jakub Dawidek
On Wed, Jul 29, 2009 at 05:34:53PM -0700, Roman V Shaposhnik wrote: On Wed, 2009-07-29 at 15:06 +0300, Andriy Gapon wrote: What do you think about the following feature? Subdirectory is automatically a new filesystem property - an administrator turns on this magic property of a