Re: [zfs-discuss] RFE filesystem ownership
Roland Mainz wrote:
> Darren J Moffat wrote:
>> James Dickens wrote:
>>> I think ZFS should add the concept of ownership to a ZFS filesystem.
>>> If I create a filesystem for joe, he should be able to use his space
>>> however he sees fit: if he wants to turn on compression or take 5000
>>> snapshots, it's his filesystem, let him. If he wants to destroy
>>> snapshots he created, that should be allowed, but he should not be
>>> allowed to do the same with carol's filesystem. The current
>>> filesystem management is not fine-grained enough to deal with this.
>>> Of course, if we don't assign an owner, the filesystem should behave
>>> much as it does today.
>>
>> Yes, we do need something like this. This is already covered by CRs
>> 6280676 and 6421209.
>
> That could be done if zfs were based on ksh93... you could simply run
> it as a profile shell (pfksh93) and make a profile for that user + ZFS
> filesystem...

We already have an RBAC profile, "ZFS File System Management", but that
allows the user given that profile to manage ALL ZFS file systems. What
this is really about is having the zfs kernel module check an ACL on the
dataset to determine whether the user can create/snapshot/clone/destroy/
etc.; certain properties may also need to be locked.

I've given a lot of thought to this, as has Mark Shellenbaum, and trust
me: RBAC is not the answer here, and a ksh93-based zfs is not going to
help one way or the other, since this is all kernel-based policy.

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
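[Editor's note] The kind of per-dataset ACL check Darren describes could be sketched in shell as below. Everything here (the `snapshot_allow` list, the `may_snapshot` function) is a made-up illustration of the idea, not any real ZFS interface:

```shell
# Hypothetical sketch of a per-dataset delegation check: the dataset
# carries a list of users allowed to perform a given operation, and the
# policy code consults it before acting. None of these names are real
# ZFS interfaces.
snapshot_allow="joe carol"   # users allowed to snapshot this dataset

may_snapshot() {
  # $1 = user requesting the snapshot
  case " $snapshot_allow " in
    *" $1 "*) echo permitted ;;
    *)        echo denied    ;;
  esac
}

may_snapshot joe       # -> permitted
may_snapshot mallory   # -> denied
```

The point of the sketch is only that the decision is attached to the dataset, not to an all-or-nothing RBAC profile.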
Re: [zfs-discuss] Misc questions
Matthew Ahrens wrote:
> On Tue, May 23, 2006 at 02:34:30PM -0700, Jeff Victor wrote:
>> * When you share a ZFS fs via NFS, what happens to files and
>> filesystems that exceed the limits of NFS?
>
> What limits do you have in mind? I'm not an NFS expert, but I think
> that NFSv4 (and probably v3) supports 64-bit file sizes, so there
> would be no limit mismatch there.

Yes, v3 supports largefile(5)-sized files.

>> * Is there a recommendation or some guidelines to help answer the
>> question "how full should a pool be before deciding it's time to add
>> disk space to the pool?"
>
> I'm not sure, but I'd guess around 90%.
>
>> * Migrating pre-ZFS backups to ZFS backups: is there a better method
>> than restoring the old backup into a ZFS fs, then backing it up using
>> "zfs send"?
>
> No.

I'd say "it depends" for the "then back it up" part. It depends on what
backup software you use today. If you use something like Legato
Networker or Veritas NetBackup, or anything else that just walks the
file system using POSIX APIs, then keep using that. Be aware, though,
that they may not pick up on ZFS ACLs yet, and they won't save the ZFS
dataset configuration, i.e. compression, checksum, etc.

--
Darren J Moffat
Re: [zfs-discuss] RFE filesystem ownership
Darren J Moffat wrote:
> [...]
> We already have an RBAC profile, "ZFS File System Management", but
> that allows the user given that profile to manage ALL ZFS file
> systems. What this is really about is having the zfs kernel module
> check an ACL on the dataset to determine whether the user can
> create/snapshot/clone/destroy/etc.; certain properties may also need
> to be locked.

Could it be worthwhile imposing limits, in addition to locking? For
example, if I gave you the right to snapshot ~darrenm, I might want to
allow you only 10 snapshots. Is that a worthwhile restriction, or is it
better to just let quotas take care of that?

At issue here is the potential for (again :) zfs to spam df output
through excessive, possibly accidental, use of snapshots by a user with
a buggy cron job. Or maybe there is potential to be malicious through
this avenue too? The point here is not to deny the action but to give
it bounds.

Darren
Re: [zfs-discuss] RFE filesystem ownership
Mark Shellenbaum [EMAIL PROTECTED] writes:
>> Yes, we do need something like this. This is already covered by CRs
>> 6280676 and 6421209.
>
> These RFEs are currently being investigated. The basic idea is that an
> administrator will be allowed to grant specific users/groups the
> ability to perform various zfs administrative tasks, such as create,
> destroy, clone, changing properties, and so on. After the zfs team is
> in agreement as to what the interfaces should be, I will forward it to
> zfs-discuss for further feedback.

In addition to this, what I think will become necessary is a way to
perform this sort of end-user zfs administration securely over the
network (maybe with an RPC service secured with RPCSEC_GSS?): I don't
want to grant every single student a login on the fileservers just to
admin their zfs filesystems ;-(

Rainer

--
Rainer Orth, Faculty of Technology, Bielefeld University
Re: [zfs-discuss] RFE filesystem ownership
Rainer Orth wrote:
> [...]
> In addition to this, what I think will become necessary is a way to
> perform this sort of end-user zfs administration securely over the
> network (maybe with an RPC service secured with RPCSEC_GSS?): I don't
> want to grant every single student a login on the fileservers just to
> admin their zfs filesystems ;-(

I'm assuming you mean using zfs(1) but with a remote mode where you
indicate the name of the server and pool.

There is, sadly, a problem with mandating RPCSEC_GSS: many people don't
have the Kerberos infrastructure set up to use it. Personally, I'd be
more than happy to say that if you want to use this you must use
RPCSEC_GSS, but that might not go down well with everyone. I do
actually like your suggestion of a zfs command that talks over
RPCSEC_GSS, and it would work great for me, since I do have Kerberos
creds on the clients and servers I use!

However, it would be really nice if we didn't need a special command on
the client side, particularly since the client might not be a Solaris
machine. As it happens, we already have a client interface that doesn't
require you to run Solaris on the client side or to have Kerberos
deployed: the web-based ZFS GUI, which is secured by SSL.

The other option is to allow users to do this by performing operations
in the special .zfs directory. This should even be possible over NFS or
CIFS: for example, creation, rename, and deletion of snapshots using
normal file system tools in .zfs/snapshot.

mv seems to be able to rename a snapshot already. Maybe we could have
cp on a snapshot mean clone, e.g.:

$ cd .zfs/snapshot
$ mv foo bar
$ cp bar baz
$ rm may

This would rename the snapshot called foo to bar, then create a clone
called baz based on the snapshot bar, and finally remove the snapshot
called may.

Given that the .zfs directory is special, we might be able to invent
additional things for the other operations. The harder part is setting
options like share/checksum/compression etc.

--
Darren J Moffat
Re: [zfs-discuss] user undo
On Wed, 2006-05-24 at 12:22, James Dickens wrote:
> How about changing the name of the file to "uid-filename" or
> "username-filename"? This at least gives each user the ability to
> delete their own files; it shouldn't be much work. Another possible
> enhancement would be adding any field from stat(2) to the file's name
> after it is deleted. This would be set per filesystem: mode, uid,
> username (the code should do the conversion), gid, size, mtime, and
> just parse a format string like $mtime-$name.

A number of (generally older) systems have had the concept of numbered
file versions. (I recall seeing this during casual use of ITS, TOPS-20,
and VMS; GNU Emacs and its derivatives emulate this via .~NN~ backup
copies, but this pollutes the directory namespace.) Adding a
version/generation number to the filename in the "deleted" directory
would allow multiple versions to coexist.

It might also make sense to populate the "deleted" directory with an
older version when file contents are destroyed via an
open(..., ...|O_TRUNC), or when a file is replaced via rename.

- Bill
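[Editor's note] James's format-string idea can be sketched in shell. The format string '$mtime-$name' is taken from the mail above; the function name `deleted_name` and the sed-based expansion are illustrative assumptions, not any proposed interface:

```shell
# Sketch of a per-filesystem format string for naming deleted files,
# e.g. '$mtime-$name'. The function name and the use of sed to expand
# the placeholders are illustrative only.
fmt='$mtime-$name'

deleted_name() {
  # $1 = original file name, $2 = mtime (seconds since the epoch)
  printf '%s\n' "$fmt" | sed -e "s/\$mtime/$2/" -e "s/\$name/$1/"
}

deleted_name report.txt 1148500000   # -> 1148500000-report.txt
```

A generation number, as Bill suggests, would just be one more placeholder in the same format string.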
Re: [zfs-discuss] New features in Solaris Express 05/06
Hello Phil,

Wednesday, May 24, 2006, 7:28:51 PM, you wrote:

PC> Will the ability to import a destroyed ZFS pool, and the fsstat
PC> command that's part of the latest Solaris Express release (B38),
PC> make it into Solaris 10 Update 2 when it's released in June/July?
PC> Also, has any decision been made yet on what build Update 2 will be
PC> taken from, to give an idea of what can be expected for ZFS?

AFAIK importing a destroyed pool will be in U2. And U2 is not just any
given snv build; it's a separate branch, so only backported
functionality is included.

--
Best regards,
Robert    mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
[zfs-discuss] ZFS and HSM
I said I had several questions to start threads on. What about ZFS and
the various HSM solutions? Do any of them already work with ZFS? Are
any going to? It seems like HSM solutions that access things at a file
level would have little trouble integrating with ZFS, but ones that
work at a block level would have a harder time.

On that same thread, what about support for DMAPI within ZFS?

--Scott
Re: [zfs-discuss] Snapshot / Rollback at a file level
It doesn't sound like they want to use filesystem-management tools;
they want per-file tools. Wouldn't pax (or cpio) be a better tool for
these tasks?

Scott Dickson wrote:
> A customer asked today about the ability to snapshot and roll back
> individual files. They have an environment where users might generate
> lots of files but want only a portion of them to be included in a
> snapshot. Moreover, typically when they recover files, they only want
> one or two of them. The idea of copying them out of a fs snapshot
> wasn't terribly appealing to them, since they tend to have pretty
> large files. It would be nice to just reattach that file. Is this
> reasonable?

--
Jeff VICTOR          Sun Microsystems          jeff.victor @ sun.com
OS Ambassador        Sr. Technical Specialist
Solaris 10 Zones FAQ: http://www.opensolaris.org/os/community/zones/faq
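[Editor's note] Recovering a single file by copying it out of a snapshot, the approach the customer found unappealing, is at least mechanically simple via the .zfs directory. The sketch below simulates that layout with ordinary directories, since no real pool is involved; 'pool' and 'nightly' are made-up names standing in for a ZFS mountpoint and snapshot:

```shell
# Simulated .zfs/snapshot layout using plain directories; 'pool' and
# 'nightly' stand in for a real ZFS mountpoint and snapshot name.
mkdir -p pool/.zfs/snapshot/nightly
printf 'good data\n' > pool/.zfs/snapshot/nightly/data.db
printf 'bad data\n'  > pool/data.db

# Per-file "rollback": copy just the one file back out of the snapshot
# instead of rolling back the whole filesystem.
cp pool/.zfs/snapshot/nightly/data.db pool/data.db
cat pool/data.db   # -> good data
```

For large files this is a full copy, which is exactly the customer's objection; "reattaching" the snapshot's blocks to the live file would need support from the filesystem itself.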
Re: [zfs-discuss] ZFS and HSM
On Wed, May 24, 2006 at 03:43:54PM -0400, Scott Dickson wrote:
> I said I had several questions to start threads on. What about ZFS
> and the various HSM solutions? Do any of them already work with ZFS?
> Are any going to? It seems like HSM solutions that access things at a
> file level would have little trouble integrating with ZFS, but ones
> that work at a block level would have a harder time.

Sun is working on getting SAM (an HSM which is currently wedded to QFS)
working with ZFS.

--matt
Re: [zfs-discuss] The 12.5% compression rule
On Thu, May 11, 2006 at 12:34:45PM +0100, Darren J Moffat wrote:
> Where does the 12.5% compression rule in zio_compress_data() come
> from? Given that this is in the generic function for all compression
> algorithms, rather than in the implementation of lzjb, I wonder where
> the number comes from.

Where all hard-coded numbers come from: our ass. Seriously, though, the
idea was that you don't want to waste time on decompression if you
didn't get anything out of compression. And if I remember right, the
compression aborts if it finds itself taking up more than 87.5%.

But you're right: this should be, at the very least, a tunable.
Preferably a per-compression-algorithm tunable. Would you mind filing
an RFE/bug as you see fit?

--Bill
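[Editor's note] The rule Bill describes can be mimicked in shell, with gzip standing in for lzjb (an assumption for illustration; the real check lives inside zio_compress_data() in the kernel): keep the compressed copy only if it is no larger than 7/8 (87.5%) of the original, i.e. compression saved at least 12.5%:

```shell
# Emulate the 12.5% rule with gzip standing in for lzjb: compression
# only "wins" if the result is <= 7/8 of the original size.
yes 'abcdefgh' | head -n 1000 > sample.dat   # highly compressible input
gzip -c sample.dat > sample.dat.gz

orig=$(wc -c < sample.dat)
comp=$(wc -c < sample.dat.gz)

# Integer arithmetic: comp <= orig * 7/8  <=>  comp * 8 <= orig * 7
if [ $((comp * 8)) -le $((orig * 7)) ]; then
  echo "keep compressed"
else
  echo "store uncompressed"
  rm sample.dat.gz
fi
```

With the threshold as a variable instead of the hard-coded 7/8, this is essentially the per-algorithm tunable Bill suggests filing an RFE for.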
Re: [zfs-discuss] Sequentiality direct access to a file
Hello Scott,

Wednesday, May 24, 2006, 9:42:06 PM, you wrote:

SD> How does (or does) ZFS maintain sequentiality of the blocks of a
SD> file? If I mkfile on a clean UFS, I will likely get contiguous
SD> blocks for my file, right? A customer I talked to recently has a
SD> desire to access large volumes of sequential data. They were
SD> concerned about maintaining this file as a string of sequential
SD> blocks on disk (for performance reasons) as they update it.

SD> This brought up the second part of the question: how does ZFS deal
SD> with things that look and feel like directio? When processes update
SD> blocks in place, like a database might do, are new blocks
SD> allocated, or are the existing blocks left alone? If they are left
SD> in place, doesn't that screw around with the transactional nature
SD> of ZFS?

SD> Is this question at all clear?

Updated blocks are actually written to a new place, so if you update
random blocks in a file, reading that file sequentially afterwards will
not be all that sequential.

--
Best regards,
Robert    mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
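[Editor's note] Robert's point, that copy-on-write relocates updated blocks, can be illustrated with a toy allocator in shell. This is entirely made up, not ZFS's real allocator: every write, including an overwrite, simply takes the next free disk offset:

```shell
# Toy copy-on-write allocator: every write of a logical block, even an
# overwrite, lands at the next free disk offset. Entirely illustrative;
# real ZFS allocation is far more sophisticated.
next=0
write_block() {
  echo "logical block $1 -> disk offset $next"
  next=$((next + 1))
}

# Initial sequential write of a 4-block file: logical and disk order
# match, so a sequential read is sequential on disk.
for b in 0 1 2 3; do write_block $b; done

# Database-style in-place updates to blocks 2 and 0 relocate them:
write_block 2   # -> logical block 2 -> disk offset 4
write_block 0   # -> logical block 0 -> disk offset 5
```

After the two updates, a sequential read of logical blocks 0..3 touches disk offsets 5, 1, 4, 3: the once-contiguous file has fragmented.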