Re: [zfs-discuss] feature proposal

2009-07-31 Thread dick hoogendijk
On Wed, 29 Jul 2009 17:34:53 -0700
Roman V Shaposhnik r...@sun.com wrote:

> On the read-write front: wouldn't it be cool to be able to snapshot
> things by:
> $ mkdir .zfs/snapshot/snap-name

I've followed this thread but I fail to see the advantages of this. I
guess I miss something here. Can you explain to me why the above would
be better (nice to have) than "zfs create whate...@now"?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carroll)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-31 Thread Andrew Gabriel




dick hoogendijk wrote:
> On Wed, 29 Jul 2009 17:34:53 -0700
> Roman V Shaposhnik r...@sun.com wrote:
>
>> On the read-write front: wouldn't it be cool to be able to snapshot
>> things by:
>> $ mkdir .zfs/snapshot/snap-name
>
> I've followed this thread but I fail to see the advantages of this. I
> guess I miss something here. Can you explain to me why the above would
> be better (nice to have) than "zfs create whate...@now"?

Many more systems have a mkdir (or equivalent) command than have a zfs
command.

-- 
Andrew





Re: [zfs-discuss] feature proposal

2009-07-31 Thread Gaëtan Lehmann


On 31 Jul 2009, at 10:24, dick hoogendijk wrote:

> On Wed, 29 Jul 2009 17:34:53 -0700
> Roman V Shaposhnik r...@sun.com wrote:
>
>> On the read-write front: wouldn't it be cool to be able to snapshot
>> things by:
>> $ mkdir .zfs/snapshot/snap-name
>
> I've followed this thread but I fail to see the advantages of this. I
> guess I miss something here. Can you explain to me why the above would
> be better (nice to have) than "zfs create whate...@now"?

Because it can be done on any host mounting this file system through a
network protocol like NFS or CIFS.

A nice feature for a NAS.

Gaëtan


--
Gaëtan Lehmann
Biologie du Développement et de la Reproduction
INRA de Jouy-en-Josas (France)
tel: +33 1 34 65 29 66    fax: 01 34 65 29 09
http://voxel.jouy.inra.fr  http://www.itk.org
http://www.mandriva.org  http://www.bepo.fr





Re: [zfs-discuss] feature proposal

2009-07-31 Thread Tristan Ball
Because it means you can create ZFS snapshots from a non-Solaris or
non-local client, like a Linux NFS client or a Windows CIFS client.

T

dick hoogendijk wrote:
> On Wed, 29 Jul 2009 17:34:53 -0700
> Roman V Shaposhnik r...@sun.com wrote:
>
>> On the read-write front: wouldn't it be cool to be able to snapshot
>> things by:
>> $ mkdir .zfs/snapshot/snap-name
>
> I've followed this thread but I fail to see the advantages of this. I
> guess I miss something here. Can you explain to me why the above would
> be better (nice to have) than "zfs create whate...@now"?



Re: [zfs-discuss] feature proposal

2009-07-31 Thread dick hoogendijk
On Fri, 31 Jul 2009 18:38:16 +1000
Tristan Ball tristan.b...@leica-microsystems.com wrote:

> Because it means you can create ZFS snapshots from a non-Solaris or
> non-local client, like a Linux NFS client or a Windows CIFS client.

So if I want a snapshot of e.g. rpool/export/home/dick I can do a "zfs
snapshot rpool/export/home/dick", but what is the exact syntax for the
same snapshot using this other method?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carroll)


Re: [zfs-discuss] feature proposal

2009-07-31 Thread Kyle McDonald

dick hoogendijk wrote:
> On Fri, 31 Jul 2009 18:38:16 +1000
> Tristan Ball tristan.b...@leica-microsystems.com wrote:
>
>> Because it means you can create ZFS snapshots from a non-Solaris or
>> non-local client, like a Linux NFS client or a Windows CIFS client.
>
> So if I want a snapshot of e.g. rpool/export/home/dick I can do a "zfs
> snapshot rpool/export/home/dick",

But your command requires that it be run on the NFS/CIFS *server* directly.

The 'mkdir' version can be run on the server or on any NFS or CIFS
client.

It's possible (likely even) that regular users would not be allowed to
log in to server machines, but if given the right access, they can still
use the mkdir version to create their own snapshots from a client.

> but what is the exact syntax for the
> same snapshot using this other method?
As I understand it, if rpool/export/home/dick is mounted on /home/dick, 
then the syntax would be


cd /home/dick/.zfs/snapshot
mkdir mysnapshot
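And, assuming the user also holds the destroy delegation on the dataset
(an assumption, not something stated above), removing that snapshot from
the client should be just as simple:

```shell
# Hedged sketch: needs the 'destroy' delegation on rpool/export/home/dick
rmdir /home/dick/.zfs/snapshot/mysnapshot
```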

 -Kyle






Re: [zfs-discuss] feature proposal

2009-07-30 Thread Darren J Moffat

Roman V Shaposhnik wrote:
> On the read-only front: wouldn't it be cool to *not* run zfs sends
> explicitly but have:
>
>     .zfs/send/snap-name
>     .zfs/sendr/from-snap-name-to-snap-name
>
> give you the same data automagically?
>
> On the read-write front: wouldn't it be cool to be able to snapshot
> things by:
> $ mkdir .zfs/snapshot/snap-name


That already works if you have the snapshot delegation as that user.  It 
even works over NFS and CIFS.


--
Darren J Moffat


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Cyril Plisko
On Thu, Jul 30, 2009 at 11:33 AM, Darren J
Moffatdarr...@opensolaris.org wrote:
 Roman V Shaposhnik wrote:

 On the read-only front: wouldn't it be cool to *not* run zfs sends
 explicitly but have:
    .zfs/send/snap name
    .zfs/sendr/from-snap-name-to-snap-name
 give you the same data automagically?
 On the read-write front: wouldn't it be cool to be able to snapshot
 things by:
    $ mkdir .zfs/snapshot/snap-name

 That already works if you have the snapshot delegation as that user.  It
 even works over NFS and CIFS.

Wow!  That's incredible!  When did that happen?  I was completely
unaware of that feature, and I am sure plenty of people out there have
never heard of it either.


-- 
Regards,
Cyril


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Darren J Moffat

Cyril Plisko wrote:
> On Thu, Jul 30, 2009 at 11:33 AM, Darren J
> Moffat darr...@opensolaris.org wrote:
>> Roman V Shaposhnik wrote:
>>> On the read-only front: wouldn't it be cool to *not* run zfs sends
>>> explicitly but have:
>>>     .zfs/send/snap-name
>>>     .zfs/sendr/from-snap-name-to-snap-name
>>> give you the same data automagically?
>>> On the read-write front: wouldn't it be cool to be able to snapshot
>>> things by:
>>>     $ mkdir .zfs/snapshot/snap-name
>>
>> That already works if you have the snapshot delegation as that user.  It
>> even works over NFS and CIFS.
>
> Wow!  That's incredible!  When did that happen?  I was completely
> unaware of that feature, and I am sure plenty of people out there have
> never heard of it either.


Initially introduced in:

changeset:   4543:12bb2876a62e
user:marks
date:Tue Jun 26 07:44:24 2007 -0700
description:
PSARC/2006/465 ZFS Delegated Administration
PSARC/2006/577 zpool property to disable delegation
PSARC/2006/625 Enhancements to zpool history
PSARC/2007/228 ZFS delegation amendments
PSARC/2007/295 ZFS Delegated Administration Addendum
6280676 restore owner property
6349470 investigate non-root restore/backup
6572465 'zpool set bootfs=...' records history as 'zfs set 
bootfs=...'



Bug fix for CIFS clients in:

changeset:   6803:468e12a53baf
user:marks
date:Fri May 16 08:55:36 2008 -0700
description:
6700649 zfs_ctldir snapshot creation issues with CIFS clients



--
Darren J Moffat


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Ross
Whoah!  Seriously?  When did that get added and how did I miss it?

That is absolutely superb!  And an even stronger case for mkdir creating 
filesystems.  A filesystem per user that they can snapshot at will o_0

Ok, it'll need some automated pruning of old snapshots, but even so, that has 
some serious potential!
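Pruning could be scripted on the server side; one rough sketch (the
dataset name is illustrative, and `head -n -10` / `xargs -r` are
GNU-specific, so this is an assumption about available tooling) that
keeps only the ten newest snapshots of a dataset:

```shell
# Destroy all but the 10 newest snapshots of rpool/export/home/dick.
# Illustrative only: requires a live pool and GNU coreutils/findutils.
zfs list -H -t snapshot -o name -s creation -r rpool/export/home/dick \
  | head -n -10 \
  | xargs -r -n1 zfs destroy
```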
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] feature proposal

2009-07-30 Thread James Lever

Hi Darren,

On 30/07/2009, at 6:33 PM, Darren J Moffat wrote:

> That already works if you have the snapshot delegation as that
> user.  It even works over NFS and CIFS.


Can you give us an example of how to correctly get this working?

I've read through the manpage but have not managed to get the correct  
set of permissions for it to work as a normal user (so far).


I'm sure others here would be keen to see a correct recipe to allow  
user managed snapshots remotely via mkdir/rmdir.


cheers,
James



Re: [zfs-discuss] feature proposal

2009-07-30 Thread Darren J Moffat

James Lever wrote:
> Hi Darren,
>
> On 30/07/2009, at 6:33 PM, Darren J Moffat wrote:
>
>> That already works if you have the snapshot delegation as that user.
>> It even works over NFS and CIFS.
>
> Can you give us an example of how to correctly get this working?


On the host that has the ZFS datasets (ie the NFS/CIFS server) you need 
to give the user the delegation to create snapshots and to mount them:


# zfs allow -u james snapshot,mount,destroy tank/home/james

If you don't give the destroy delegation, users won't be able to remove
the snapshots they create.


Now on the client you should be able to:

cd .zfs/snapshot
mkdir newsnap

> I've read through the manpage but have not managed to get the correct
> set of permissions for it to work as a normal user (so far).


What did you try ?
What release of OpenSolaris are you running ?

--
Darren J Moffat


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Richard Elling

On Jul 30, 2009, at 2:15 AM, Cyril Plisko wrote:

> On Thu, Jul 30, 2009 at 11:33 AM, Darren J
> Moffat darr...@opensolaris.org wrote:
>> Roman V Shaposhnik wrote:
>>> On the read-only front: wouldn't it be cool to *not* run zfs sends
>>> explicitly but have:
>>>     .zfs/send/snap-name
>>>     .zfs/sendr/from-snap-name-to-snap-name
>>> give you the same data automagically?
>>> On the read-write front: wouldn't it be cool to be able to snapshot
>>> things by:
>>>     $ mkdir .zfs/snapshot/snap-name
>>
>> That already works if you have the snapshot delegation as that
>> user.  It even works over NFS and CIFS.
>
> Wow!  That's incredible!  When did that happen?  I was completely
> unaware of that feature, and I am sure plenty of people out there have
> never heard of it either.


Most folks don't RTFM :-)  Cindy does an excellent job of keeping track
of new features and procedures in the ZFS Administration Guide. This
one is Example 9-6 under the "Using ZFS Delegated Administration"
section.
 -- richard



Re: [zfs-discuss] feature proposal

2009-07-30 Thread Darren J Moffat

James Lever wrote:
>
> On 30/07/2009, at 11:32 PM, Darren J Moffat wrote:
>
>> On the host that has the ZFS datasets (ie the NFS/CIFS server) you
>> need to give the user the delegation to create snapshots and to mount
>> them:
>>
>> # zfs allow -u james snapshot,mount,destroy tank/home/james
>
> Ahh, it was the lack of mount that caught me!  Thanks Darren.

It is documented in the zfs(1M) man page:

     Permissions are generally the ability to use a  ZFS  subcom-
     mand or change a ZFS property. The following permissions are
     available:
...
       snapshot subcommand   Must also have the 'mount' ability.

>>> I've read through the manpage but have not managed to get the correct
>>> set of permissions for it to work as a normal user (so far).
>>
>> What did you try?
>> What release of OpenSolaris are you running?
>
> snv 118.  I blame being tired and not trying enough options!  I was
> trying to do it with just snapshot and destroy, expecting that a
> snapshot didn't need to be mounted for some reason.
>
> Thanks for the clarification.  Next time I think I'll also consult the
> administration guide as well as the manpage, though I guess an explicit
> example for the snapshot delegation wouldn't go astray in the manpage.

Like the one that is already there?

     Example 18  Delegating ZFS Administration Permissions on a
                 ZFS Dataset

     The following example shows how to set permissions so that
     user cindys can create, destroy, mount, and take snapshots

       # zfs allow cindys create,destroy,mount,snapshot tank/cindys


--
Darren J Moffat


[zfs-discuss] feature proposal

2009-07-29 Thread Andriy Gapon

What do you think about the following feature?

A "subdirectory is automatically a new filesystem" property - an
administrator turns on this magic property on a filesystem, and after
that every mkdir *in the root* of that filesystem creates a new
filesystem. The new filesystems have default/inherited properties,
except for the magic property, which is off.

Right now I see this as being mostly useful for /home. The main benefit
in this case is that various user administration tools can work
unmodified and do the right thing when an administrator wants a policy
of a separate fs per user. But I am sure that there could be other
interesting uses for this.
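To make the proposal concrete, a hypothetical transcript; the property
name 'magic_mkdir' is invented here for illustration - no such ZFS
property exists:

```shell
# Hypothetical sketch of the proposed behavior (property name invented).
zfs set magic_mkdir=on rpool/export/home   # admin opts in, once
mkdir /export/home/alice                   # would create rpool/export/home/alice
zfs list -r rpool/export/home              # the new child dataset would appear
```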

-- 
Andriy Gapon


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, Andriy Gapon wrote:

> A "subdirectory is automatically a new filesystem" property - an
> administrator turns on this magic property on a filesystem, and after
> that every mkdir *in the root* of that filesystem creates a new
> filesystem. The new filesystems have default/inherited properties,
> except for the magic property, which is off.
>
> Right now I see this as being mostly useful for /home. The main benefit
> in this case is that various user administration tools can work
> unmodified and do the right thing when an administrator wants a policy
> of a separate fs per user. But I am sure that there could be other
> interesting uses for this.


It's a nice idea, but zfs filesystems consume memory and have overhead. 
This would make it trivial for a non-root user (assuming they have 
permissions) to crush the host under the weight of .. mkdir.


$ mkdir -p waste/resources/now/waste/resources/now/waste/resources/now

(now make that much longer and put it in a loop)

Also, will rmdir call zfs destroy? Snapshots interacting with that could 
be somewhat unpredictable. What about rm -rf?


It'd either require major surgery to userland tools, including every 
single program that might want to create a directory, or major surgery to 
the kernel. The former is unworkable, the latter .. scary.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] feature proposal

2009-07-29 Thread Darren J Moffat

Andriy Gapon wrote:
> What do you think about the following feature?
>
> A "subdirectory is automatically a new filesystem" property - an
> administrator turns on this magic property on a filesystem, and after
> that every mkdir *in the root* of that filesystem creates a new
> filesystem. The new filesystems have default/inherited properties,
> except for the magic property, which is off.

This has been brought up before and I thought there was an open CR for
it but I can't find it.

> Right now I see this as being mostly useful for /home. The main benefit
> in this case is that various user administration tools can work
> unmodified and do the right thing when an administrator wants a policy
> of a separate fs per user. But I am sure that there could be other
> interesting uses for this.

A good use case.  Another good one is a shared build machine, which is
similar to the home dir case.


--
Darren J Moffat


Re: [zfs-discuss] feature proposal

2009-07-29 Thread David Magda
On Wed, July 29, 2009 10:24, Andre van Eyssen wrote:

 It'd either require major surgery to userland tools, including every
 single program that might want to create a directory, or major surgery to
 the kernel. The former is unworkable, the latter .. scary.

How about: add a flag (-Z?) to useradd(1M) and usermod(1M) so that if
base_dir is on ZFS, then the user's homedir is created as a new file
system (assuming -m).
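The -Z flag is hypothetical, but the manual steps it would wrap exist
today; a hedged sketch (the dataset, mountpoint, user, and group names
below are invented for illustration):

```shell
# Manual equivalent of a hypothetical 'useradd -Z' on Solaris.
useradd -d /export/home/alice alice      # create the account (no -m)
zfs create rpool/export/home/alice       # home directory as its own dataset
chown alice:staff /export/home/alice     # hand ownership to the new user
```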


Which makes me wonder: is there a programmatic way to determine if a path
is on ZFS?




Re: [zfs-discuss] feature proposal

2009-07-29 Thread Darren J Moffat

David Magda wrote:
> On Wed, July 29, 2009 10:24, Andre van Eyssen wrote:
>
>> It'd either require major surgery to userland tools, including every
>> single program that might want to create a directory, or major surgery to
>> the kernel. The former is unworkable, the latter .. scary.
>
> How about: add a flag (-Z?) to useradd(1M) and usermod(1M) so that if
> base_dir is on ZFS, then the user's homedir is created as a new file
> system (assuming -m).
>
> Which makes me wonder: is there a programmatic way to determine if a path
> is on ZFS?


st_fstype field of struct stat.

--
Darren J Moffat


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andriy Gapon
on 29/07/2009 17:24 Andre van Eyssen said the following:
 On Wed, 29 Jul 2009, Andriy Gapon wrote:
 
 Subdirectory is automatically a new filesystem property - an
 administrator turns
 on this magic property of a filesystem, after that every mkdir *in the
 root* of
 that filesystem creates a new filesystem. The new filesystems have
 default/inherited properties except for the magic property which is off.

 Right now I see this as being mostly useful for /home. Main benefit in
 this case
 is that various user administration tools can work unmodified and do
 the right
 thing when an administrator wants a policy of a separate fs per user
 But I am sure that there could be other interesting uses for this.
 
 It's a nice idea, but zfs filesystems consume memory and have overhead.
 This would make it trivial for a non-root user (assuming they have
 permissions) to crush the host under the weight of .. mkdir.

Well, I specifically stated that this property should not be recursive, i.e. it
should work only in a root of a filesystem.
When setting this property on a filesystem an administrator should carefully set
permissions to make sure that only trusted entities can create directories 
there.

'rmdir' question requires some thinking, my first reaction is it should do zfs
destroy...


-- 
Andriy Gapon


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, David Magda wrote:

> Which makes me wonder: is there a programmatic way to determine if a path
> is on ZFS?

statvfs(2)

--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] feature proposal

2009-07-29 Thread Mark J Musante

On Wed, 29 Jul 2009, David Magda wrote:

Which makes me wonder: is there a programmatic way to determine if a 
path is on ZFS?


Yes, if it's local.  Just use "df -n $path" and it'll print the
filesystem type.  If it's mounted over NFS, it'll just say something
like "nfs" or "autofs", though.



Regards,
markm


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, Andriy Gapon wrote:

> Well, I specifically stated that this property should not be recursive,
> i.e. it should work only in a root of a filesystem.
> When setting this property on a filesystem an administrator should
> carefully set permissions to make sure that only trusted entities can
> create directories there.


Even limited to the root of a filesystem, it still gives a user the 
ability to consume resources rapidly. While I appreciate the fact that it 
would be restricted by permissions, I can think of a number of usage cases 
where it could suddenly tank a host. One use that might pop up, for 
example, would be cache spools - which often contain *many* directories. 
One runaway and kaboom.


We generally use hosts now with plenty of RAM and the per-filesystem 
overhead for ZFS doesn't cause much concern. However, on a scratch box, 
try creating a big stack of filesystems - you can end up with a pool that 
consumes so much memory you can't import it!



> The 'rmdir' question requires some thinking; my first reaction is it
> should do zfs destroy...


.. which will fail if there's a snapshot, for example. The problem seems 
to be reasonably complex - compounded by the fact that many programs that 
create or remove directories do so directly - not by calling externals 
that would be ZFS aware.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, Mark J Musante wrote:

> Yes, if it's local.  Just use "df -n $path" and it'll print the
> filesystem type.  If it's mounted over NFS, it'll just say something
> like "nfs" or "autofs", though.

$ df -n /opt
Filesystem            kbytes    used     avail  capacity  Mounted on
/dev/md/dsk/d24     33563061 11252547  21974884    34%    /opt
$ df -n /sata750
Filesystem            kbytes    used     avail  capacity  Mounted on
sata750           2873622528       77 322671575     1%    /sata750

Not giving the filesystem type. It's easy to spot the zfs with the lack of 
recognisable device path, though.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] feature proposal

2009-07-29 Thread Kyle McDonald

Andriy Gapon wrote:
> What do you think about the following feature?
>
> A "subdirectory is automatically a new filesystem" property - an
> administrator turns on this magic property on a filesystem, and after
> that every mkdir *in the root* of that filesystem creates a new
> filesystem. The new filesystems have default/inherited properties,
> except for the magic property, which is off.
>
> Right now I see this as being mostly useful for /home. The main benefit
> in this case is that various user administration tools can work
> unmodified and do the right thing when an administrator wants a policy
> of a separate fs per user. But I am sure that there could be other
> interesting uses for this.

But now that quotas are working properly, why would you want to continue
the hack of one FS per user?


I'm seriously curious here. In my view it's just more work. A more 
cluttered zfs list, and share output. A lot less straight forward and 
simple too.

Why bother? What's the benefit?

-Kyle




Re: [zfs-discuss] feature proposal

2009-07-29 Thread Darren J Moffat

Andre van Eyssen wrote:
> On Wed, 29 Jul 2009, Andriy Gapon wrote:
>
>> Well, I specifically stated that this property should not be recursive,
>> i.e. it should work only in a root of a filesystem.
>> When setting this property on a filesystem an administrator should
>> carefully set permissions to make sure that only trusted entities can
>> create directories there.
>
> Even limited to the root of a filesystem, it still gives a user the
> ability to consume resources rapidly. While I appreciate the fact that
> it would be restricted by permissions, I can think of a number of usage
> cases where it could suddenly tank a host. One use that might pop up,
> for example, would be cache spools - which often contain *many*
> directories. One runaway and kaboom.

No worse than any other use case; if you can create datasets you can do
that anyway.  If you aren't running with restrictive resource controls
you can tank the host in so many easier ways.  Note that the proposal
is that this be off by default and has to be something you explicitly
enable.

> We generally use hosts now with plenty of RAM and the per-filesystem
> overhead for ZFS doesn't cause much concern. However, on a scratch box,
> try creating a big stack of filesystems - you can end up with a pool
> that consumes so much memory you can't import it!
>
>> The 'rmdir' question requires some thinking; my first reaction is it
>> should do zfs destroy...
>
> .. which will fail if there's a snapshot, for example. The problem seems
> to be reasonably complex - compounded by the fact that many programs
> that create or remove directories do so directly - not by calling
> externals that would be ZFS aware.

I don't understand how you came to that conclusion.  This wouldn't be
implemented in /usr/bin/mkdir but in the ZFS implementation of the
mkdir(2) syscall.



--
Darren J Moffat


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Darren J Moffat

Kyle McDonald wrote:
> Andriy Gapon wrote:
>> What do you think about the following feature?
>>
>> A "subdirectory is automatically a new filesystem" property - an
>> administrator turns on this magic property on a filesystem, and after
>> that every mkdir *in the root* of that filesystem creates a new
>> filesystem. The new filesystems have default/inherited properties,
>> except for the magic property, which is off.
>>
>> Right now I see this as being mostly useful for /home. The main benefit
>> in this case is that various user administration tools can work
>> unmodified and do the right thing when an administrator wants a policy
>> of a separate fs per user. But I am sure that there could be other
>> interesting uses for this.
>
> But now that quotas are working properly, why would you want to continue
> the hack of one FS per user?

Hack?  Different usage cases!

> Why bother? What's the benefit?

The benefit is that users can control their own snapshot policy, they
can create and destroy their own sub-datasets, send and recv them, etc.

We can also delegate specific properties to users if we want as well.

This is exactly how I have the builds area set up on our ONNV build
machines for the Solaris security team.  Sure, the output of zfs list
is long - but I don't care about that.

When encryption comes along, having a separate filesystem per user is a
useful deployment case because it means we can deploy with separate keys
for each user (granted, maybe less interesting if they only access their
home dir over NFS/CIFS, but still useful).  I have a prototype PAM module
that uses the user's login password as the ZFS dataset wrapping key and
keeps that in sync with the user's login password on password change.


--
Darren J Moffat


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Nicolas Williams
On Wed, Jul 29, 2009 at 03:35:06PM +0100, Darren J Moffat wrote:
 Andriy Gapon wrote:
 What do you think about the following feature?
 
 Subdirectory is automatically a new filesystem property - an 
 administrator turns
 on this magic property of a filesystem, after that every mkdir *in the 
 root* of
 that filesystem creates a new filesystem. The new filesystems have
 default/inherited properties except for the magic property which is off.
 
 This has been brought up before and I thought there was an open CR for 
 it but I can't find it.

I'd want this to be something one could set per-directory, and I'd want
it to not be inheritable (or to have control over whether it is
inheritable).

Nico
-- 


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Kyle McDonald

Darren J Moffat wrote:
> Kyle McDonald wrote:
>> Andriy Gapon wrote:
>>> What do you think about the following feature?
>>>
>>> A "subdirectory is automatically a new filesystem" property - an
>>> administrator turns on this magic property on a filesystem, and after
>>> that every mkdir *in the root* of that filesystem creates a new
>>> filesystem. The new filesystems have default/inherited properties,
>>> except for the magic property, which is off.
>>>
>>> Right now I see this as being mostly useful for /home. The main
>>> benefit in this case is that various user administration tools can
>>> work unmodified and do the right thing when an administrator wants a
>>> policy of a separate fs per user. But I am sure that there could be
>>> other interesting uses for this.
>>
>> But now that quotas are working properly, why would you want to
>> continue the hack of one FS per user?
>
> Hack?  Different usage cases!
>
>> Why bother? What's the benefit?
>
> The benefit is that users can control their own snapshot policy, they
> can create and destroy their own sub-datasets, send and recv them, etc.
>
> We can also delegate specific properties to users if we want as well.
>
> This is exactly how I have the builds area set up on our ONNV build
> machines for the Solaris security team.  Sure, the output of zfs list
> is long - but I don't care about that.

I can imagine a use for builds: one FS per build - I don't know. But why
link it to mkdir? Why not make the build scripts do the zfs create
outright?

> When encryption comes along, having a separate filesystem per user is a
> useful deployment case because it means we can deploy with separate keys
> for each user (granted, maybe less interesting if they only access their
> home dir over NFS/CIFS, but still useful).  I have a prototype PAM module
> that uses the user's login password as the ZFS dataset wrapping key and
> keeps that in sync with the user's login password on password change.


Encryption is an interesting case. User Snapshots I'd need to think 
about more.

Couldn't the other properties be delegated on directories?

Maybe I'm just getting old. ;) I still think having the zpool not 
automatically include a filesystem, and having ZFS containers was a 
useful concept. And I still use share (and now sharemgr) to manage my 
shares, and not ZFS share. Oh well. :)


 -Kyle



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Ross
I can think of a different feature where this would be useful - storing virtual 
machines.

With an automatic 1fs per folder, each virtual machine would be stored in its 
own filesystem, allowing for rapid snapshots, and instant restores of any 
machine.

One big limitation for me of zfs is that although I can restore an entire 
filesystem in seconds, restoring any individual folder takes much, much longer 
as it's treated as a standard copy.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Michael Schuster

On 29.07.09 07:56, Andre van Eyssen wrote:

On Wed, 29 Jul 2009, Mark J Musante wrote:


Yes, if it's local. Just use df -n $path and it'll spit out the 
filesystem type.  If it's mounted over NFS, it'll just say something 
like nfs or autofs, though.


$ df -n /opt
Filesystem            kbytes     used     avail  capacity  Mounted on
/dev/md/dsk/d24     33563061 11252547  21974884       34%  /opt
$ df -n /sata750
Filesystem            kbytes     used     avail  capacity  Mounted on
sata750           2873622528       77 322671575        1%  /sata750

Not giving the filesystem type. It's easy to spot the zfs with the lack 
of recognisable device path, though.
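(For context - hedged, from memory of the Solaris df(1M) man page: the -n option is documented to print only the mount point and FSType, which is why the full-column output above is surprising and prompts the question below. Expected shape, with illustrative paths:)

```shell
# Expected Solaris df -n output (from memory; exact spacing may vary
# by release) - mount point and filesystem type only:
df -n /opt
# /opt               : ufs
df -n /tank/home
# /tank/home         : zfs
```

This needs a Solaris host, so it is shown as a transcript rather than something runnable elsewhere.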




which df are you using?

Michael
--
Michael Schuster    http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andriy Gapon
on 29/07/2009 17:52 Andre van Eyssen said the following:
 On Wed, 29 Jul 2009, Andriy Gapon wrote:
 
 Well, I specifically stated that this property should not be
 recursive, i.e. it
 should work only in a root of a filesystem.
 When setting this property on a filesystem an administrator should
 carefully set
 permissions to make sure that only trusted entities can create
 directories there.
 
 Even limited to the root of a filesystem, it still gives a user the
 ability to consume resources rapidly. While I appreciate the fact that
 it would be restricted by permissions, I can think of a number of usage
 cases where it could suddenly tank a host. One use that might pop up,
 for example, would be cache spools - which often contain *many*
 directories. One runaway and kaboom.

Well, the feature would not be on by default.
So careful evaluation and planning should prevent abuses.

 We generally use hosts now with plenty of RAM and the per-filesystem
 overhead for ZFS doesn't cause much concern. However, on a scratch box,
 try creating a big stack of filesystems - you can end up with a pool
 that consumes so much memory you can't import it!
 
 'rmdir' question requires some thinking, my first reaction is it
 should do zfs
 destroy...
 
 .. which will fail if there's a snapshot, for example. The problem seems
 to be reasonably complex - compounded by the fact that many programs
 that create or remove directories do so directly - not by calling
 externals that would be ZFS aware.

Well, snapshots could be destroyed too, nothing stops us from doing that.
BTW, I am not proposing to implement this feature in the mkdir/rmdir userland 
utilities;
I am proposing to implement the feature in the ZFS kernel code responsible for
directory creation/removal.
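To make the proposal concrete, the intended workflow would look something like this. This is a purely hypothetical sketch: neither the property name nor the behaviour exists in ZFS; the property name "autochild" and the dataset names are invented here for illustration only.

```shell
# HYPOTHETICAL - sketch of the proposed feature, not real ZFS behaviour.
# The property name "autochild" is invented for this illustration.
zfs create tank/home
zfs set autochild=on tank/home      # the proposed "magic" property

# With the property on, a plain mkdir in the ROOT of tank/home would
# create a child dataset instead of an ordinary directory:
mkdir /tank/home/alice              # would act like: zfs create tank/home/alice

# The new dataset inherits properties, except the magic property is off,
# so mkdir inside /tank/home/alice creates ordinary directories again.
```

Note this works over NFS/CIFS too, which is the main attraction: user-administration tools that only know how to mkdir would transparently get one filesystem per user.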

-- 
Andriy Gapon
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Roman V Shaposhnik
On Wed, 2009-07-29 at 15:06 +0300, Andriy Gapon wrote:
 What do you think about the following feature?
 
 Subdirectory is automatically a new filesystem property - an administrator 
 turns
 on this magic property of a filesystem, after that every mkdir *in the root* 
 of
 that filesystem creates a new filesystem. The new filesystems have
 default/inherited properties except for the magic property which is off.
 
 Right now I see this as being mostly useful for /home. Main benefit in this 
 case
 is that various user administration tools can work unmodified and do the right
 thing when an administrator wants a policy of a separate fs per user
 But I am sure that there could be other interesting uses for this.

This feature request touches upon a very generic observation that my
group made a long time ago: ZFS is a wonderful filesystem, the only
trouble is that (almost) all the cool features have to be asked for
using non-filesystem (POSIX) APIs. Basically, every time you have
to do anything with ZFS you have to do it on a host where ZFS runs.

The sole exception from this rule is .zfs subdirectory that lets you
have access to snapshots without explicit calls to zfs(1M). 

Basically .zfs subdirectory is your POSIX FS way to request two bits
of ZFS functionality. In general, however, we all want more.

On the read-only front: wouldn't it be cool to *not* run zfs sends 
explicitly but have:
.zfs/send/snap name
.zfs/sendr/from-snap-name-to-snap-name
give you the same data automagically? 

On the read-write front: wouldn't it be cool to be able to snapshot
things by:
$ mkdir .zfs/snapshot/snap-name
?

The list goes on...

Thanks,
Roman.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Pawel Jakub Dawidek
On Wed, Jul 29, 2009 at 05:34:53PM -0700, Roman V Shaposhnik wrote:
 On Wed, 2009-07-29 at 15:06 +0300, Andriy Gapon wrote:
  What do you think about the following feature?
  
  Subdirectory is automatically a new filesystem property - an 
  administrator turns
  on this magic property of a filesystem, after that every mkdir *in the 
  root* of
  that filesystem creates a new filesystem. The new filesystems have
  default/inherited properties except for the magic property which is off.
  
  Right now I see this as being mostly useful for /home. Main benefit in this 
  case
  is that various user administration tools can work unmodified and do the 
  right
  thing when an administrator wants a policy of a separate fs per user
  But I am sure that there could be other interesting uses for this.
 
 This feature request touches upon a very generic observation that my
 group made a long time ago: ZFS is a wonderful filesystem, the only
 trouble is that (almost) all the cool features have to be asked for
 using non-filesystem (POSIX) APIs. Basically everytime you have
 to do anything with ZFS you have to do it on a host where ZFS runs.
 
 The sole exception from this rule is .zfs subdirectory that lets you
 have access to snapshots without explicit calls to zfs(1M). 
 
 Basically .zfs subdirectory is your POSIX FS way to request two bits
 of ZFS functionality. In general, however, we all want more.
 
 On the read-only front: wouldn't it be cool to *not* run zfs sends 
 explicitly but have:
 .zfs/send/snap name
 .zfs/sendr/from-snap-name-to-snap-name
 give you the same data automagically? 
 
 On the read-write front: wouldn't it be cool to be able to snapshot
 things by:
 $ mkdir .zfs/snapshot/snap-name
 ?

Are you sure this doesn't work on Solaris/OpenSolaris? From looking at
the code you should be able to do exactly that as well as destroy
snapshot by rmdir'ing this entry.
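If so, the usage would look like the sketch below (assuming the Solaris/OpenSolaris behaviour described above; the dataset name is illustrative). The appeal is that it works from any client that can create a directory there, including over NFS.

```shell
# Sketch, assuming Solaris/OpenSolaris support the behaviour described
# above; dataset name "tank/home" is illustrative.
mkdir /tank/home/.zfs/snapshot/mysnap   # like: zfs snapshot tank/home@mysnap
ls /tank/home/.zfs/snapshot/            # the new snapshot appears here
rmdir /tank/home/.zfs/snapshot/mysnap   # like: zfs destroy tank/home@mysnap
```

This needs a live ZFS filesystem, so it is shown as a transcript rather than a runnable script.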

-- 
Pawel Jakub Dawidek   http://www.wheel.pl
p...@freebsd.org   http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Feature proposal: differential pools

2006-07-27 Thread Henk Langeveld

Andrew [EMAIL PROTECTED] wrote:

Since ZFS is COW, can I have a read-only pool (on a central file
server, or on a DVD, etc) with a separate block-differential pool on
my local hard disk to store writes?

This way, the pool in use can be read-write, even if the main pool
itself is read-only, without having to make a full local copy of that
read-only pool in order to be able to write to it, and without having
to use messy filesystem-level union filesystem features.



Matthew Ahrens wrote:

These are some interesting use cases.  I'll have to ponder how they
could be best implemented in ZFS.

Some of these cases can be solved simply by having a read-only device in
the pool (eg. live-boot DVDs).  However, cases where you want to
transfer the data between devices are somewhat nontrivial (eg. hard
drive + flash memory), at least until we have 4852783 reduce pool capacity.

I've filed RFE 6453741 "want mostly-read-only devices" to remember this
request.



I can see where this will lead to eventually...

I've seen several scenarios where 4852783 becomes essential.  How to implement
such a thing?  You first mark the vdev (do I say that correctly?) as evicting.
Then you start a full scrub/rewrite of the whole pool, with any evicting
components not available for writing, so the data is forced to be copied 
elsewhere.

Once the scrub finishes, you change the state of the device to evicted and
remove it from the pool.

Odd thing is, with a read-only, or read-mostly device, you cannot put these
marks on the original disk in the first place. This is slightly contrary to
the concept of zfs storing all of its configuration and meta-data on-disk.

Which to me implies that before you can actually start scrubbing the pool, you 
first have to copy ALL metadata from the evicted device to the other devices in
the pool.

A consequence of this whole process is that this results in yet another
method of installing the OS:

- boot from a r/o zfs source image (dvd)
- add sufficient storage to capture changes.
- mark session as one of:
  o transient  - discard change-pool on halt/crash/reboot
  o persistent - retain changes on reboot
  o permanent  - make the pool bootable independently of the original r/o image
- in parallel, evict the source-image

- configure the system, and purge any history.
- if desired, reboot...

A final implication is that on such a system, you cannot ever use the
original source image, unless you tell the system explicitly NOT to import
any zfs devices besides the image.

The architecture of zfs is fascinating.


Cheers,
Henk

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Feature proposal: trashcan via auto-snapshot with every txg commit

2006-07-26 Thread Andrew
Do an automatic pool snapshot upon every txg commit (using the recursive 
atomic snapshot feature that Matt Ahrens implemented recently, which takes 
time proportional to the number of filesystems in the pool).

Management of the trashcan snapshots could be done by some user-configurable 
policy such as preserving only a certain number of trashcan snapshots, or only 
the ones younger than a specified age, or destroying old ones at a sufficient 
rate to maintain the trashcan snapshots' total disk space usage within some 
specified quota (or to maintain pool free space above some specified minimum), 
etc.
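One of the policies above - keep only the newest N snapshots - could be sketched as follows. This assumes Solaris zfs(1M); the dataset name and retention count are illustrative, and a real implementation would want more care (error handling, holds, clones).

```shell
#!/bin/sh
# Sketch: destroy all but the $KEEP newest snapshots of $DATASET.
# Assumes Solaris zfs(1M); names/values are illustrative only.
DATASET=tank/home
KEEP=10

# List snapshots oldest-first (-s creation), then print every name
# except the last $KEEP and destroy each of those.
zfs list -H -t snapshot -o name -s creation -r "$DATASET" |
  awk -v keep="$KEEP" '{ lines[NR] = $0 }
       END { for (i = 1; i <= NR - keep; i++) print lines[i] }' |
  while read -r snap; do
      zfs destroy "$snap"
  done
```

The other policies (age-based, or space-based against a quota) would replace only the selection pipeline; the destroy loop stays the same.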

But this would provide an effective cure for the all-too-common mistakes of 
running rm * in the wrong directory, or overwriting the wrong file and 
realizing the mistake just a moment after you've pressed enter, among other 
examples.

Even if this pool-wide feature would be undesirable on a particular pool due to 
performance concerns, it could still be applied on a filesystem basis. For 
example, /home might be a good candidate.

A desire has been mentioned elsewhere in this forum for a snapshot-on-write 
feature; one response was that auto-snapshotting for every byte written to 
every file would be really slow, and another that auto-snapshotting upon file 
closure might be an adequate substitute. But the latter isn't an adequate 
substitute in some important cases. Providing auto-snapshot on every txg 
commit would be an efficient compromise.

Also, combining the trashcan snapshot feature (with the management policy set 
to never delete old snapshots) with the differential pool feature I mentioned 
today in another message (with the differential pool located on physically 
secure media) would provide an excellent auditing tool.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss