[zfs-discuss] zvol and zpool

2010-02-25 Thread Milan Shah
Hi, This is what we are trying to understand. LUNs are presented to the CDOM and then we create the zpool on them. On top of the zpool a zvol is created, and then it is presented to the GDOM. zpool list | grep testpool testpool 12G 114K 12.0G 0% ONLINE - zfs list | grep testpool
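A minimal sketch of the layering described above, assuming hypothetical pool, zvol, and domain names (the LDoms service and guest names are illustrative, not from the original message):

```shell
# In the control domain (CDOM): build the pool on the presented LUNs,
# then carve a zvol out of it (device names are examples).
zpool create testpool c2t0d0 c2t1d0
zfs create -V 10g testpool/gdom-disk

# Export the zvol to the guest domain (GDOM) as a virtual disk.
ldm add-vdsdev /dev/zvol/dsk/testpool/gdom-disk gdom-disk@primary-vds0
ldm add-vdisk vdisk0 gdom-disk@primary-vds0 guest1
```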

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-25 Thread Michael Schuster
perhaps this helps: http://www.eweek.com/c/a/Linux-and-Open-Source/Oracle-Explains-Unclear-Message-About-OpenSolaris-444787/ Michael On 02/24/10 20:02, Troy Campbell wrote: http://www.oracle.com/technology/community/sun-oracle-community-continuity.html Half way down it says: Will Oracle

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-25 Thread Peter Tribble
On Thu, Feb 25, 2010 at 8:56 AM, Michael Schuster michael.schus...@sun.com wrote: perhaps this helps: http://www.eweek.com/c/a/Linux-and-Open-Source/Oracle-Explains-Unclear-Message-About-OpenSolaris-444787/ Not really. It doesn't explain that the page in question was an explanation of how the

Re: [zfs-discuss] Oops, ran zfs destroy after renaming a folder and deleted my file system.

2010-02-25 Thread Kjetil Torgrim Homme
tomwaters tomwat...@chadmail.com writes: I created a zfs file system, cloud/movies, and shared it. I then filled it with movies and music. I then decided to rename it, so I used rename in Gnome to change the folder name to media, i.e. cloud/media. MISTAKE I then noticed the zfs share

Re: [zfs-discuss] Oops, ran zfs destroy after renaming a folder and deleted my file system.

2010-02-25 Thread tomwaters
Yes, I am glad that I learned this lesson now, rather than in 6 months when I have re-purposed the existing drives...makes me all the more committed to maintaining an up-to-date remote backup. The reality is that I cannot afford to mirror the 8TB in the zpool, so I'll balance the risk and just

Re: [zfs-discuss] zvol and zpool

2010-02-25 Thread Milan Shah
Hi James, thanks for the reply. Stating this, is there a way I can restrict the size of the zvol? So if I have the zvol of 10 GB on the CDOM, which is presented to the LDOM as the disk, and then we create the UFS file system on that, but this grows with time and we even see the situation where it
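A zvol's logical size is capped by its volsize property, so a file system inside the guest cannot grow past it; what can still run away is pool space underneath a sparse zvol. A hedged sketch, with example names:

```shell
# Thick zvol: 10 GB logical size, and the space is reserved in the
# pool up front (refreservation), so the guest cannot outgrow it
# and the pool cannot be starved by it.
zfs create -V 10g testpool/guestdisk
zfs get volsize,refreservation testpool/guestdisk

# Sparse zvol (-s): same 10 GB cap for the guest, but no reservation,
# so other datasets can consume the pool out from under it.
zfs create -s -V 10g testpool/sparsedisk
```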

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-25 Thread Jacob Ritorto
It's a kind gesture to say it'll continue to exist and all, but without commercial support from the manufacturer, it's relegated to hobbyist-curiosity status for us. If I even mentioned using an unsupported operating system to the higher-ups here, it'd be considered absurd. I like free stuff to

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-25 Thread Robert Milkowski
On 17/02/2010 09:55, Robert Milkowski wrote: On 16/02/2010 23:59, Christo Kutrovsky wrote: On ZVOLs it appears the setting kicks in live. I've tested this by turning it off/on and testing with iometer on an exported iSCSI device (iscsitgtd not comstar). I haven't looked at zvol's code

Re: [zfs-discuss] Oops, ran zfs destroy after renaming a folder and deleted my file system.

2010-02-25 Thread Francois Dion
On Thu, Feb 25, 2010 at 6:59 AM, tomwaters tomwat...@chadmail.com wrote: Yes, I am glad that I learned this lesson now, rather than in 6 months when I have re-purposed the existing drives...makes me all the more committed to maintaining an up-to-date remote backup. The reality is that I can

Re: [zfs-discuss] Adding a zfs mirror drive to rpool - new drive formats to one cylinder less

2010-02-25 Thread tomwaters
Just an update on this: I was seeing high CPU utilisation (100% on all 4 cores) for ~10 seconds every 20 seconds when transferring files to the server using Samba under b133. So I rebooted and selected b111 and I no longer have the issue. Interestingly, the rpool is still in place..as it should

Re: [zfs-discuss] future of OpenSolaris

2010-02-25 Thread Sean Sprague
Bob, On Tue, 23 Feb 2010, Joerg Schilling wrote: and what uname -s reports. It will surely report OrkOS. For OpenSolaris, OracOS - surely there must be Blake's 7 fans in Oracle Corp.? I am glad to be able to contribute positively and constructively to this discussion. Me too ;-) ...

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-25 Thread Robert Milkowski
On 25/02/2010 12:48, Robert Milkowski wrote: On 17/02/2010 09:55, Robert Milkowski wrote: On 16/02/2010 23:59, Christo Kutrovsky wrote: On ZVOLs it appears the setting kicks in life. I've tested this by turning it off/on and testing with iometer on an exported iSCSI device (iscsitgtd not

Re: [zfs-discuss] future of OpenSolaris

2010-02-25 Thread Chris Ridd
On 25 Feb 2010, at 14:28, Sean Sprague wrote: Bob, On Tue, 23 Feb 2010, Joerg Schilling wrote: and what uname -s reports. It will surely report OrkOS. For OpenSolaris, OracOS - surely there must be Blake's 7 fans in Oracle Corp.? You can see all the working bits courtesy of

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-25 Thread Ross Walker
On Feb 25, 2010, at 9:11 AM, Giovanni Tirloni gtirl...@sysdroid.com wrote: On Thu, Feb 25, 2010 at 9:47 AM, Jacob Ritorto jacob.rito...@gmail.com wrote: It's a kind gesture to say it'll continue to exist and all, but without commercial support from the manufacturer, it's relegated to

[zfs-discuss] ZFS replace - many to one

2010-02-25 Thread Chad
I'm looking to migrate a pool from using multiple smaller LUNs to one larger LUN. I don't see a way to do a zpool replace for multiple to one. Anybody know how to do this? It needs to be non disruptive. -- This message posted from opensolaris.org ___

Re: [zfs-discuss] ZFS replace - many to one

2010-02-25 Thread Darren J Moffat
On 25/02/2010 15:44, Chad wrote: I'm looking to migrate a pool from using multiple smaller LUNs to one larger LUN. I don't see a way to do a zpool replace for multiple to one. Anybody know how to do this? It needs to be non disruptive. You can't do that just now, this needs device removal

Re: [zfs-discuss] raidz2 array FAULTED with only 1 drive down

2010-02-25 Thread Scott Meilicke
You might have to force the import with -f. Scott

Re: [zfs-discuss] ZFS replace - many to one

2010-02-25 Thread Casper . Dik
I'm looking to migrate a pool from using multiple smaller LUNs to one larger LUN. I don't see a way to do a zpool replace for multiple to one. Anybody know how to do this? It needs to be non disruptive. Depends on the zpool's layout and the source of the old and the new files; you can only

Re: [zfs-discuss] ZFS replace - many to one

2010-02-25 Thread Giovanni Tirloni
On Thu, Feb 25, 2010 at 12:44 PM, Chad chad.har...@allstate.com wrote: I'm looking to migrate a pool from using multiple smaller LUNs to one larger LUN. I don't see a way to do a zpool replace for multiple to one. Anybody know how to do this? It needs to be non disruptive. As others have
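Since devices could not be removed from a pool at the time, the usual workaround the replies point toward is a second pool plus zfs send/receive. A hedged sketch with hypothetical pool and device names; note this is a cutover, not the non-disruptive replace the original poster asked for:

```shell
# New pool on the single large LUN (names are examples).
zpool create newpool c3t0d0

# Recursive snapshot of the old pool and a full replicated send.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F -d newpool

# After a final incremental pass and cutover of clients:
zpool destroy oldpool
```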

Re: [zfs-discuss] Adding a zfs mirror drive to rpool - new drive formats to one cylinder less

2010-02-25 Thread David Dyer-Bennet
On Thu, February 25, 2010 08:25, tomwaters wrote: So I rebooted and selected 111b and I no longer have the issue. Interestingly, the rpool is still in place..as it should be. So I have now set this 111b as my default BE ...and removed /dev from the update package list using ... $pfexec pkg

Re: [zfs-discuss] Moving dataset to another zpool but same mount?

2010-02-25 Thread Mark J Musante
On Wed, 24 Feb 2010, Gregory Gee wrote: files files/home files/mail files/VM I want to move the files/VM to another zpool, but keep the same mount point. What would be the right steps to create the new zpool, move the data and mount in the same spot? Create the new pool, take a snapshot

Re: [zfs-discuss] Large scale ZFS deployments out there (200 disks)

2010-02-25 Thread Alastair Neil
I don't think I have seen this addressed in the follow-ups to your message. One issue we have is with deploying large numbers of file systems per pool - not necessarily large numbers of disks. There are major scaling issues with the sharing of large numbers of file systems; in my configuration I

Re: [zfs-discuss] Whoops, accidentally created a new slog instead of mirroring

2010-02-25 Thread Cindy Swearingen
Ray, Log removal integrated into build 125, so yes, if you upgraded to at least OpenSolaris build 125 you could fix this problem. See the syntax below on my b133 system. In this particular case, importing the pool from b125 or later media and attempting to remove the log device could not fix
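The removal syntax Cindy refers to is short; a sketch with example pool and device names (not her actual b133 output):

```shell
# On build 125 or later, an accidentally added slog can be taken out
# with "zpool remove" (pool/device names are illustrative).
zpool remove tank c4t0d0
zpool status tank     # confirm the log device is gone
```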

Re: [zfs-discuss] Whoops, accidentally created a new slog instead of mirroring

2010-02-25 Thread Ray Van Dolson
Well, it doesn't seem like this is possible -- I was hoping there was some hacky way to do it via zdb or something. Sun support pointed me to a document[1] that leads me to believe this might have worked on OpenSolaris. Anyone out there in Sun-land care to comment? To recap, I accidentally

Re: [zfs-discuss] ZFS Volume Destroy Halts I/O

2010-02-25 Thread Nick
One other question - I'm seeing the same sort of behavior when I try to do something like zfs set sharenfs=off storage/fs - is there a reason that turning off NFS sharing should halt I/O?

Re: [zfs-discuss] Whoops, accidentally created a new slog instead of mirroring

2010-02-25 Thread Ray Van Dolson
On Thu, Feb 25, 2010 at 11:55:35AM -0800, Cindy Swearingen wrote: Ray, Log removal integrated into build 125, so yes, if you upgraded to at least OpenSolaris build 125 you could fix this problem. See the syntax below on my b133 system. In this particular case, importing the pool from b125

Re: [zfs-discuss] Whoops, accidentally created a new slog instead of mirroring

2010-02-25 Thread Bryan Allen
On 2010-02-25 12:05:03, Ray Van Dolson wrote: Thanks Cindy. I need to stay on Solaris 10 for the time being, so I'm guessing I'd have to Live boot into an OpenSolaris build, fix my pool then hope it

[zfs-discuss] Who is using ZFS ACL's in production?

2010-02-25 Thread Paul B. Henson
I've been surveying various forums looking for other places using ZFS ACL's in production to compare notes and see how, if at all, they've handled some of the issues we've found deploying them. So far, I haven't found anybody using them in any substantial way, let alone trying to leverage them to
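For context, the NFSv4-style ACLs in question are set with Solaris chmod's A syntax. A minimal sketch, assuming a hypothetical group and path:

```shell
# Grant a group read/execute, inherited by new files and directories
# (group "webteam" and the path are examples, not from the thread).
chmod A+group:webteam:read_data/execute:file_inherit/dir_inherit:allow /export/proj

# ls -V lists the ACL entries on Solaris.
ls -V /export/proj
```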

Re: [zfs-discuss] Recommendations required for home file server config

2010-02-25 Thread Daniel Carosone
On Wed, Feb 24, 2010 at 10:57:08AM +, li...@di.cx wrote: 2 x SuperMicro AOC-SAT2-MV8 SATA controllers (so 16 ports in total, plus 6 on the motherboard) What about case space for the disks? Disks: 3x40GB rpool mirror and spare on shelf. 3 way mirror if you really want and have the

Re: [zfs-discuss] How to know the recordsize of a file

2010-02-25 Thread Jesus Cea
On 02/24/2010 11:42 PM, Robert Milkowski wrote: mi...@r600:~# ls -li /bin/bash 1713998 -r-xr-xr-x 1 root bin 799040 2009-10-30 00:41 /bin/bash mi...@r600:~# zdb -v rpool/ROOT/osol-916 1713998 Dataset rpool/ROOT/osol-916 [ZPL], ID 302, cr_txg
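The recipe Robert demonstrates generalizes: find the file's object number with ls -i, then dump that object with zdb and read the data block size off the output. A sketch using his example file:

```shell
# Object number of the file (same number as the inode shown by ls -i).
ls -i /bin/bash

# Dump the object; more -d flags give more detail, and the "dblk"
# column shows the block size the file is actually using.
zdb -dddd rpool/ROOT/osol-916 1713998
```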

Re: [zfs-discuss] Large scale ZFS deployments out there (200 disks)

2010-02-25 Thread Bob Friesenhahn
On Thu, 25 Feb 2010, Alastair Neil wrote: I don't think I have seen this addressed in the follow-ups to your message. One issue we have is with deploying large numbers of file systems per pool - not necessarily large numbers of disks. There are major scaling issues with the sharing of large

Re: [zfs-discuss] application writes are blocked near the end of spa_sync

2010-02-25 Thread Bob Friesenhahn
On Thu, 25 Feb 2010, Shane Cox wrote: I'm new to ZFS and looking for some assistance with a performance problem: At the interval of zfs_txg_timeout (I'm using the default of 30), I observe 100-200ms pauses in my application. Based on my application log files, it appears that the write()
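One common mitigation for this symptom is shortening the txg interval so each sync flushes less data. A hedged sketch of how the tunable is usually changed on Solaris (the value 5 is illustrative; test before relying on it):

```shell
# Live change via the kernel debugger: set zfs_txg_timeout to 5 seconds.
echo "zfs_txg_timeout/W 0t5" | mdb -kw

# Persistent equivalent in /etc/system (takes effect on reboot):
#   set zfs:zfs_txg_timeout = 5
```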

Re: [zfs-discuss] Large scale ZFS deployments out there (200 disks)

2010-02-25 Thread Alastair Neil
I do not know and I don't think anyone would deploy a system in that way with UFS. This is the model that is imposed in order to take full advantage of zfs advanced features such as snapshots, encryption and compression and I know many universities in particular are eager to adopt it for just

Re: [zfs-discuss] Large scale ZFS deployments out there (200 disks)

2010-02-25 Thread Bob Friesenhahn
On Thu, 25 Feb 2010, Alastair Neil wrote: I do not know and I don't think anyone would deploy a system in that way with UFS. This is the model that is imposed in order to take full advantage of zfs advanced features such as snapshots, encryption and compression and I know many universities

Re: [zfs-discuss] Moving dataset to another zpool but same mount?

2010-02-25 Thread Gregory Gee
So just to verify, from what you said and searching based on what you said, the following are the commands I would use? # zpool create newpool mirror c8d0 c9d0 # zfs create newpool/VM # zfs snapshot files/v...@beforemigration # zfs send files/v...@beforemigration | zfs receive newpool/VM # zfs
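A hedged completion of the sequence above, assuming the archive-munged snapshot name is files/VM@beforemigration and that the original mount point should carry over (both are assumptions, not confirmed by the thread):

```shell
zpool create newpool mirror c8d0 c9d0
zfs snapshot files/VM@beforemigration
# receive creates newpool/VM itself, so no separate "zfs create" is needed
zfs send files/VM@beforemigration | zfs receive newpool/VM

# Once the copy is verified, retire the old dataset and reuse its mount point.
zfs destroy -r files/VM
zfs set mountpoint=/files/VM newpool/VM
```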

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-02-25 Thread Paul B. Henson
On Thu, 25 Feb 2010, Marion Hakanson wrote: It's not easy to get them right, and usually the hardest task is in figuring out what the users want, so we don't use them unless the users' needs cannot be met using traditional Unix/POSIX permissions. We've got a web GUI that hides the complexity

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-02-25 Thread David Dyer-Bennet
On Thu, 25 Feb 2010, Marion Hakanson wrote: It's not easy to get them right, and usually the hardest task is in figuring out what the users want, so we don't use them unless the users' needs cannot be met using traditional Unix/POSIX permissions. Yeah, I've had nothing but horror from