Re: [zfs-discuss] Enabling compression/encryption on a populated filesystem

2006-07-17 Thread Darren J Moffat
Jeff Victor wrote: Why? Is the 'data is encrypted' flag only stored in filesystem metadata, or is that flag stored in each data block? Like compression and the choice of checksum algorithm, it will be stored in every dmu object. If the latter is true, it would be possible (though potentially

Re: [zfs-discuss] Enabling compression/encryption on a populated filesystem

2006-07-17 Thread Darren J Moffat
Bill Sommerfeld wrote: On Fri, 2006-07-14 at 07:03, Darren J Moffat wrote: The current plan is that encryption must be turned on when the file system is created and can't be turned on later. This means that the zfs-crypto work depends on the RFE to set properties at file system creation

[zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Mikael Kjerrman
Hi, so it happened... I have a 10 disk raidz pool running Solaris 10 U2, and after a reboot the whole pool became unavailable after apparently losing a disk drive. (The drive is seemingly ok as far as I can tell from other commands) --- bootlog --- Jul 17 09:57:38 expprd fmd: [ID 441519

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Jeff Bonwick
I have a 10 disk raidz pool running Solaris 10 U2, and after a reboot the whole pool became unavailable after apparently losing a disk drive. [...]

    NAME      STATE     READ WRITE CKSUM
    data      UNAVAIL      0     0     0  insufficient replicas
      c1t0d0  ONLINE

[zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?

2006-07-17 Thread Jonathan Wheeler
Hi All, I've just built an 8 disk zfs storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promises of ZFS alone (yes I'm that

Re: [zfs-discuss] Re: zvol Performance

2006-07-17 Thread Neil Perrin
This is change request: 6428639 large writes to zvol synchs too much, better cut down a little which I have a fix for, but it hasn't been put back. Neil. Jürgen Keil wrote on 07/17/06 04:18: Further testing revealed that it wasn't an iSCSI performance issue but a zvol issue. Testing on a

Re: [zfs-discuss] Large device support

2006-07-17 Thread Robert Milkowski
Hello J.P., Monday, July 17, 2006, 2:15:56 PM, you wrote: JPK Possibly not the right list, but the only appropriate one I knew about. JPK I have a Solaris box (just reinstalled to Sol 10 6/06) with a 3.19TB device JPK hanging off it, attached by fibre. JPK Solaris refuses to see this device

Re: [zfs-discuss] Enabling compression/encryption on a populated filesystem

2006-07-17 Thread Luke Scharf
Darren J Moffat wrote: But the real issue is how do you tell the admin "it's done, now the filesystem is safe". With compression you don't generally care if some old stuff didn't compress (and with the current implementation it has to compress a certain amount or it gets written uncompressed

Re: [zfs-discuss] Large device support

2006-07-17 Thread J.P. King
Well, if in fact sd/ssd with EFI labels still have a limit of 2TB, then create an SMI label with one slice representing the whole disk and then put zfs on that slice. Then manually turn on the write cache. How do you suggest that I create a slice representing the whole disk? format (with or without
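The workaround being quoted amounts to something like the following (device name hypothetical; `format -e` is interactive, so its prompts are shown only as comments in this sketch):

```shell
# Relabel the disk with an SMI (VTOC) label instead of EFI;
# on large disks, format -e offers the label-type choice.
format -e c2t1d0
#   format> label
#   [0] SMI Label
#   [1] EFI Label
#   Specify Label type[1]: 0
#   format> partition    (make slice 0 span the whole disk)

# Build the pool on the slice rather than the whole disk.
zpool create bigpool c2t1d0s0

# Because ZFS was handed a slice, not a whole disk, it will not
# enable the drive's write cache itself -- hence the advice to
# turn it on manually (format -e: cache > write_cache > enable).
format -e c2t1d0
```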

Re: [zfs-discuss] Large device support

2006-07-17 Thread J.P. King
Well, if in fact sd/ssd with EFI labels still have a limit of 2TB, then create an SMI label with one slice representing the whole disk and then put zfs on that slice. Then manually turn on the write cache. Well, in fact it turned out that the firmware on the device needed upgrading to support the

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Al Hopper
On Mon, 17 Jul 2006, Darren J Moffat wrote: Jeff Bonwick wrote: zpool create data unreplicated A B C The extra typing would be annoying, but would make it almost impossible to get the wrong behavior by accident. I think that is a very good idea from a usability viewpoint. It is
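The proposal under discussion is to make a non-redundant layout explicit. The `unreplicated` keyword is hypothetical (it is what the thread proposes, not an existing option); today the same command without any keyword silently creates a simple stripe:

```shell
# Proposed: refuse a plain stripe unless the admin asks for one.
zpool create data unreplicated c1t0d0 c1t1d0 c1t2d0

# Today's equivalent -- accepted without any warning, even though
# losing any single disk loses the whole pool:
zpool create data c1t0d0 c1t1d0 c1t2d0
```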

Re: [zfs-discuss] Large device support

2006-07-17 Thread J.P. King
On Mon, 17 Jul 2006, Cindy Swearingen wrote: Hi Julian, Can you send me the documentation pointer that says 2 TB isn't supported on the Solaris 10 6/06 release? As per my original post: http://docs.sun.com/app/docs/doc/817-5093/6mkisoq1k?a=view#disksconcepts-17 This doesn't say which

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Richard Elling
I too have seen this recently, due to a partially failed drive. When I physically removed the drive, ZFS figured everything out and I was back up and running. Alas, I have been unable to recreate. There is a bug lurking here, if someone has a more clever way to test, we might be able to nail it

Re: [zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?

2006-07-17 Thread Al Hopper
On Mon, 17 Jul 2006, Roch wrote: Sorry to plug my own blog but have you had a look at these ? http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to (raidz) http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs Also, my thinking is that raid-z is probably more

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Bart Smaalders
Matthew Ahrens wrote: On Mon, Jul 17, 2006 at 09:44:28AM -0700, Bart Smaalders wrote: Mark Shellenbaum wrote: PERMISSION GRANTING zfs allow -c ability[,ability...] dataset -c Create means that the permission will be granted (Locally) to the creator on any newly-created descendant

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Mark Shellenbaum
Bart Smaalders wrote: Matthew Ahrens wrote: On Mon, Jul 17, 2006 at 09:44:28AM -0700, Bart Smaalders wrote: Mark Shellenbaum wrote: PERMISSION GRANTING zfs allow -c ability[,ability...] dataset -c Create means that the permission will be granted (Locally) to the creator on any

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Matthew Ahrens
On Mon, Jul 17, 2006 at 10:00:44AM -0700, Bart Smaalders wrote: So as administrator what do I need to do to set /export/home up for users to be able to create their own snapshots, create dependent filesystems (but still mounted underneath their /export/home/usrname)? In other words, is

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Bart Smaalders
Matthew Ahrens wrote: On Mon, Jul 17, 2006 at 10:00:44AM -0700, Bart Smaalders wrote: So as administrator what do I need to do to set /export/home up for users to be able to create their own snapshots, create dependent filesystems (but still mounted underneath their /export/home/usrname)? In

Re: [zfs-discuss] Large device support

2006-07-17 Thread Torrey McMahon
Or if you have the right patches ... http://blogs.sun.com/roller/page/torrey?entry=really_big_luns Cindy Swearingen wrote: Hi Julian, Can you send me the documentation pointer that says 2 TB isn't supported on the Solaris 10 6/06 release? The 2 TB limit was lifted in the Solaris 10 1/06

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Nicolas Williams
On Mon, Jul 17, 2006 at 10:11:35AM -0700, Matthew Ahrens wrote: I want root to create a new filesystem for a new user under the /export/home filesystem, but then have that user get the right privs via inheritance rather than requiring root to run a set of zfs commands. In that case, how

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread James Dickens
On 7/17/06, Mark Shellenbaum [EMAIL PROTECTED] wrote: The following is the delegated admin model that Matt and I have been working on. At this point we are ready for your feedback on the proposed model. -Mark PERMISSION GRANTING zfs allow [-l] [-d] everyone|user|group
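A concrete use of the proposed syntax might look like this sketch (pool/dataset names and the exact ability names are assumptions based on the grammar quoted above):

```shell
# Grant alice the listed abilities on her home dataset,
# both locally (-l) and on all descendants (-d):
zfs allow -l -d alice create,mount,snapshot tank/export/home/alice

# -c instead grants the abilities (locally) to whoever creates
# a newly created descendant dataset, on that dataset:
zfs allow -c snapshot,mount tank/export/home
```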

Re: [zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?

2006-07-17 Thread James Dickens
On 7/17/06, Jonathan Wheeler [EMAIL PROTECTED] wrote: Hi All, I've just built an 8 disk zfs storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Mark Shellenbaum
Glenn Skinner wrote: The following is a nit-level comment, so I've directed it only to you, rather than to the entire list. Date: Mon, 17 Jul 2006 09:57:35 -0600 From: Mark Shellenbaum [EMAIL PROTECTED] Subject: [zfs-discuss] Proposal: delegated administration The following is

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Mark Shellenbaum
James Dickens wrote: On 7/17/06, Mark Shellenbaum [EMAIL PROTECTED] wrote: The following is the delegated admin model that Matt and I have been working on. At this point we are ready for your feedback on the proposed model. -Mark PERMISSION GRANTING zfs allow [-l] [-d]

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-17 Thread Gregory Shaw
To maximize the throughput, I'd go with 8 5-disk raid-z{2} luns. Using that configuration, a full-width stripe write should be a single operation for each controller. In production, the application needs would probably dictate the resulting disk layout. If the application doesn't need tons of
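For a 40-disk JBOD, Greg's layout would be created roughly as follows (pool and device names hypothetical):

```shell
# Eight 5-wide raidz top-level vdevs in one pool; ZFS stripes
# writes dynamically across all eight, so a full-width stripe
# write is a single operation per controller.
zpool create jbod \
  raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
  raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
  raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
  raidz c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
  raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
  raidz c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0 \
  raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 \
  raidz c4t5d0 c4t6d0 c4t7d0 c4t8d0 c4t9d0
# For the raid-z2 variant, substitute 'raidz2' for 'raidz'
# in each vdev (double-parity, two disks lost per vdev survivable).
```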

Re[2]: [zfs-discuss] Large device support

2006-07-17 Thread J.P. King
I take it you already have solved the problem. Yes, my problems went away once my device supported the extended SCSI instruction set. Julian -- Julian King Computer Officer, University of Cambridge, Unix Support

Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-17 Thread Matthew Ahrens
On Fri, Jul 07, 2006 at 04:00:38PM -0400, Dale Ghent wrote: Add an option to zpool(1M) to dump the pool config as well as the configuration of the volumes within it to an XML file. This file could then be sucked in to zpool at a later date to recreate/ replicate the pool and its volume

[zfs-discuss] Fun with ZFS and iscsi volumes

2006-07-17 Thread Jason Hoffman
Hi Everyone, I thought I'd share some benchmarking and playing around that we had done with making zpools from disks that were iscsi volumes. The numbers are representative of 6 benchmarking rounds per. The interesting finding at least for us was the filebench varmail (50:50

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-17 Thread Jim Mauro
I agree with Greg - For ZFS, I'd recommend a larger number of raidz luns, with a smaller number of disks per LUN, up to 6 disks per raidz lun. This will more closely align with performance best practices, so it would be cool to find common ground in terms of a sweet-spot for performance and

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Nathan Kroenert
Jeff - That sounds like a great idea... Another idea might be to have zpool create announce the 'availability' of any given configuration, and output the single points of failure. # zpool create mypool a b c NOTICE: This pool has no redundancy. Without hardware

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Eric Schrock
On Tue, Jul 18, 2006 at 10:10:33AM +1000, Nathan Kroenert wrote: Jeff - That sounds like a great idea... Another idea might to be have a zpool create announce the 'availability' of any given configuration, and output the Single points of failure. # zpool create mypool a b c

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-17 Thread Richard Elling
[stirring the pot a little...] Jim Mauro wrote: I agree with Greg - For ZFS, I'd recommend a larger number of raidz luns, with a smaller number of disks per LUN, up to 6 disks per raidz lun. For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z or RAID-Z2. For 3-5 disks,
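For six disks, the two layouts Richard compares would be created as follows (device names hypothetical):

```shell
# 3x 2-way mirrors (RAID-1+0): survives one disk failure per
# mirror pair -- up to three failures if they land in
# different pairs.
zpool create tank \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0

# Single 6-wide raidz: exactly one disk failure tolerated
# pool-wide (raidz2 would tolerate two, at the cost of a
# second parity disk's worth of capacity).
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
```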

Re: [zfs-discuss] metadata inconsistency?

2006-07-17 Thread Matthew Ahrens
On Thu, Jul 06, 2006 at 12:46:57AM -0700, Patrick Mauritz wrote: Hi, after some unscheduled reboots (to put it lightly), I've got an interesting setup on my notebook's zfs partition: setup: simple zpool, no raid or mirror, a couple of zfs partitions, one zvol for swap. /foo is one such