[EMAIL PROTECTED] wrote:

>
> >Mike Dotson <[EMAIL PROTECTED]> wrote:
> >
> >> Create 20k zfs file systems and reboot.  Console login waits for all the
> >> zfs file systems to be mounted (fully loaded 880, you're looking at
> >> about 4 hours so have some coffee ready).
> >
> >Does this mean we will get quotas for ZFS in the future?
> >
> >We need it e.g. for the Berlios fileserver. It will stay on UFS as long
> >as we have not found another quota solution.
>
> Would it help if filesystems were much cheaper so you could use
> per user filesystems?

The problem is that Berlios is organized around projects that
are run by individual people.

Each person owns a home directory and, as a member of a project group, is
able to put data into the virtual web server, the project's ftp directory,
and a web-based download area.

Ideally we need group quotas for this, and I am still thinking about
group quotas after fixing some UFS quota issues (I need to run
quotacheck at least once a month in order to get rid of inflated quota
usage numbers that can result from background deletes).
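For reference, such a monthly reconciliation run could look like the sketch below. The crontab schedule and flags are an assumption about how one might automate this, not Berlios' actual setup:

```shell
# Hypothetical root crontab entry: on the 1st of each month at 04:00,
# recompute UFS quota usage to clear stale, too-high numbers left
# behind by background deletes, then re-enable enforcement.
# 0 4 1 * * /usr/sbin/quotacheck -a && /usr/sbin/quotaon -a

# Run interactively, the same reconciliation is:
quotacheck -v -a    # rescan all quota-enabled UFS filesystems
quotaon -a          # turn quota enforcement back on
```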

If we used many filesystems, we would need to mount them at a neutral
place, let them contain subdirectories for all services, and then
create symlinks from the service root directories into the project
filesystems, so that all files belonging to a project live on a single
filesystem. But then we would currently need 30000 filesystems. I suspect
this would not work with Linux clients, even though not all of them would
need to be mounted all the time.
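The neutral-mount-plus-symlink layout described above could be sketched as follows. The paths under `/tmp/berlios-demo` and the project name `star` are made-up examples standing in for the real server layout:

```shell
#!/bin/sh
# Sketch: one filesystem per project, mounted under a neutral root and
# containing one subdirectory per service, so that all of a project's
# files live on a single filesystem. The service roots reach the data
# through symlinks.
ROOT=${ROOT:-/tmp/berlios-demo}    # stands in for the real server root
proj=star                          # hypothetical project name

# Neutral mount point for the project's filesystem, with one
# subdirectory per service it uses.
mkdir -p "$ROOT/projects/$proj/htdocs" \
         "$ROOT/projects/$proj/ftp" \
         "$ROOT/projects/$proj/download"

# The service root directories link into the project filesystem.
mkdir -p "$ROOT/www" "$ROOT/ftp" "$ROOT/download"
ln -s "$ROOT/projects/$proj/htdocs"   "$ROOT/www/$proj"
ln -s "$ROOT/projects/$proj/ftp"      "$ROOT/ftp/$proj"
ln -s "$ROOT/projects/$proj/download" "$ROOT/download/$proj"
```

With 30000 projects this means 30000 mounts plus three symlinks each, which is where the per-filesystem mount cost discussed above starts to hurt.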


> To me, ZFS has not fulfilled the promise of "cheap filesystems";
> cheap to create, but much more expensive to mount.
>
> Trigger mounts should fix that, and one of the reasons automount
> can cheaply create 1000s of mounts is that it simply sits and
> waits on /home and does not actually perform the mounts or even
> the bookkeeping associated with creating them.
>
> As zfs is by and large hierarchical in the same nature, an
> /export/home zfs filesystem would need only one trigger mount
> point (and a trigger mount could trigger other trigger mounts,
> clearly)

If I could create a ZFS pool for /export/nfs and export only
/export/nfs, while the "subdirectories" inside /export/nfs are those
30000 filesystems, it could work.
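Assuming trigger mounts worked as described, that layout might look like the following. The pool name `tank`, the disk, and the project name are hypothetical; these are illustrative admin commands, not a tested configuration:

```shell
# One pool, one exported top-level filesystem, and each project as a
# child filesystem underneath it. Clients mount only /export/nfs; the
# children would be crossed via trigger mounts on the server side.
zpool create tank c0t1d0                          # hypothetical disk
zfs create -p tank/export/nfs                     # -p creates intermediate parents
zfs set sharenfs=on tank/export/nfs               # only this path is shared
zfs create -p tank/export/nfs/groups/star         # one of the ~30000 project filesystems
zfs set quota=2g tank/export/nfs/groups/star      # per-project space limit
```

Note that the `sharenfs` property is inherited by the child filesystems, so the open question is only whether a client holding the filehandle for /export/nfs can traverse into the children.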


> So when you share /export/home/casper NFS should really only
> know that pathname; when someone requests the mount NFS should
> then trigger the mount simply by doing the appropriate getattr
> calls.

I am not sure whether I understood you correctly.
Are you thinking of a solution similar to the one I mentioned above?
If so, this could work, provided an NFS root/mount filehandle for
/export/nfs allowed traversal into /export/nfs/groups/star even though
the latter is a separate "filesystem" in the pool.

> Some of the issues with ZFS are not that ZFS is a departure
> from "the old ways"; but that the rest of the system needs
> to follow suit; ZFS is not yet integrated enough.

ZFS is only a few years old now ;-) UFS started in 1981.

We need a solution that works not only for Solaris clients but
for NFS clients on other platforms too. Some of our servers that are
NFS clients of the central file server run Linux.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
       [EMAIL PROTECTED]                (uni)  
       [EMAIL PROTECTED]     (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss