Paul B. Henson wrote:
But all quotas were set in a single flat text file. Anytime you added a new
quota, you needed to turn off quotas, then turn them back on, and quota
enforcement was disabled while it recalculated space utilization.
I believe in later versions of the OS 'quota resize' did
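For comparison, the ZFS side of this is a per-filesystem property that can be added or changed on the fly with no re-initialization pass; a minimal sketch, with made-up pool and user names:

  # zfs set quota=2g tank/home/jdoe     # takes effect immediately, no quota off/on cycle
  # zfs get quota tank/home/jdoe        # confirm the new limit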
On 9/24/07, Paul B. Henson [EMAIL PROTECTED] wrote:
On Sat, 22 Sep 2007, Peter Tribble wrote:
filesystem per user on the server, just to see how it would work. While
managing 20,000 filesystems with the automounter was trivial, the attempt
to manage 20,000 zfs filesystems wasn't entirely
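For what it's worth, the automounter half of that experiment typically needs nothing more than a one-line wildcard map (server name and export path here are hypothetical):

  # /etc/auto_master
  /home      auto_home

  # /etc/auto_home
  *          fileserver:/export/home/&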
On Mon, 24 Sep 2007, Dale Ghent wrote:
Not to sway you away from ZFS/NFS considerations, but I'd like to add
that people who in the past used DFS typically went on to replace it with
AFS. Have you considered it?
You're right, AFS is the first choice coming to mind when replacing DFS. We
On Tue, 25 Sep 2007, Peter Tribble wrote:
This was some time ago (a very long time ago, actually). There are two
fundamental problems:
1. Each zfs filesystem consumes kernel memory. Significant amounts, 64K
is what we worked out at the time. For normal numbers of filesystems that's
not a
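At roughly 64K of kernel memory per mounted filesystem, the back-of-the-envelope numbers for the scales discussed in this thread are:

  20,000 filesystems x 64 KB ~= 1.3 GB of kernel memory
  50,000 filesystems x 64 KB ~= 3.2 GB of kernel memory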
Paul B. Henson wrote:
On Thu, 20 Sep 2007, James F. Hranicky wrote:
This can be solved using an automounter as well.
Well, I'd say more kludged around than solved ;), but again unless
you've used DFS it might not seem that way.
It just seems rather involved, and relatively inefficient
Paul B. Henson wrote:
On Fri, 21 Sep 2007, James F. Hranicky wrote:
It just seems rather involved, and relatively inefficient to continuously
be mounting/unmounting stuff all the time. One of the applications to be
deployed against the filesystem will be web service, I can't really
envision
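One common pattern for the web-service piece, assuming Apache with mod_userdir (the directives are standard, the paths illustrative), is to point UserDir at the automounted home directories and let the automounter do the work on first access:

  LoadModule userdir_module modules/mod_userdir.so

  UserDir public_html
  UserDir disabled root

A request for http://server/~jdoe/ then resolves to /home/jdoe/public_html, triggering the automount for that one user rather than requiring everything to be mounted up front.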
Paul B. Henson wrote:
On Thu, 20 Sep 2007, Richard Elling wrote:
50,000 directories aren't a problem, unless you also need 50,000 quotas
and hence 50,000 file systems. Such a large, single storage pool system
will be an outlier... significantly beyond what we have real world
experience
On Fri, 21 Sep 2007, Ed Plese wrote:
ZFS ACL support was going to be merged into 3.0.26 but 3.0.26 ended up
being a security fix release and the merge got pushed back. The next
release will be 3.2.0 and ACL support will be in there.
Arg, you're right, I based that on the mailing list
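As far as I can tell, the ZFS ACL support being discussed eventually shipped as a VFS module that maps the NFSv4-style ACLs; enabling it is roughly a per-share setting (share name and path below are examples):

  [files]
      path = /tank/home
      vfs objects = zfsacl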
On 9/24/07, Paul B. Henson [EMAIL PROTECTED] wrote:
but checking the actual release notes shows no ZFS mention. 3.0.26 to
3.2.0? That seems an odd version bump...
3.0.x and before are GPLv2. 3.2.0 and later are GPLv3.
http://news.samba.org/announcements/samba_gplv3/
--
Mike Gerdts
On Mon, 24 Sep 2007, Richard Elling wrote:
I can't imagine a web server serving tens of thousands of pages. I think
you should put a more scalable architecture in place, if that is your
goal. BTW, there are many companies that do this: google, yahoo, etc.
In no case do they have a single
On Mon, 24 Sep 2007, Richard Elling wrote:
Yes. Sun currently has over 45,000 users with automounted home
directories. I do not know how many servers are involved, though, in part
because home directories are highly available services and thus their
configuration is abstracted away from the
Paul B. Henson wrote:
On Sat, 22 Sep 2007, Jonathan Loran wrote:
My gut tells me that you won't have much trouble mounting 50K file
systems with ZFS. But who knows until you try. My questions for you is
can you lab this out?
Yeah, after this research phase has been completed,
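One cheap way to lab it out without dedicated hardware is a file-backed pool; a rough sketch (file size and paths are arbitrary, and a larger backing file may be needed before all 50K filesystems fit):

  # mkfile 2g /var/tmp/vdev0
  # zpool create labpool /var/tmp/vdev0
  # zfs create labpool/home

...and then script the remaining zfs creates (and the boot/mount-time measurements) from there.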
On Sep 24, 2007, at 6:15 PM, Paul B. Henson wrote:
Well, considering that some days we automatically create accounts for
thousands of students, I wouldn't want to be the one stuck typing 'zfs
create' a thousand times 8-/. And that still wouldn't resolve our
requirement for our help desk staff
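Scripted against a list of new usernames, the creation step collapses to a few lines of shell; the pool name, quota value, and input file are all made up, and any help-desk tooling would sit on top of something like this:

  #!/bin/sh
  # one username per line in newusers.txt
  while read user; do
      zfs create tank/home/$user
      zfs set quota=2g tank/home/$user
      chown $user /tank/home/$user
  done < newusers.txt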
On 9/22/07, Paul B. Henson [EMAIL PROTECTED] wrote:
On Fri, 21 Sep 2007, James F. Hranicky wrote:
It just seems rather involved, and relatively inefficient to continuously
be mounting/unmounting stuff all the time. One of the applications to be
deployed against the filesystem will be
Paul,
My gut tells me that you won't have much trouble mounting 50K file
systems with ZFS. But who knows until you try. My questions for you is
can you lab this out? you could build a commodity server with a ZFS
pool on it. Heck it could be a small pool, one disk, and then put your
50K
On 9/20/07 7:31 PM, Paul B. Henson [EMAIL PROTECTED] wrote:
On Thu, 20 Sep 2007, Tim Spriggs wrote:
It's an IBM re-branded NetApp which we are using for NFS and
iSCSI.
Yeah its fun to see IBM compete with its OEM provider Netapp.
Ah, I see.
Is it comparable storage
On 9/20/07, Paul B. Henson [EMAIL PROTECTED] wrote:
Again though, that would imply two different storage locations visible to
the clients? I'd really rather avoid that. For example, with our current
Samba implementation, a user can just connect to
'\\files.csupomona.edu\username' to access
On Thu, 20 Sep 2007, Tim Spriggs wrote:
The x4500 is very sweet and the only thing stopping us from buying two
instead of another shelf is the fact that we have lost pools on Sol10u3
servers and there is no easy way of making two pools redundant (ie the
complexity of clustering.) Simply
Paul B. Henson wrote:
On Thu, 20 Sep 2007, Tim Spriggs wrote:
The x4500 is very sweet and the only thing stopping us from buying two
instead of another shelf is the fact that we have lost pools on Sol10u3
servers and there is no easy way of making two pools redundant (ie the
complexity
On Sep 21, 2007, at 3:50 PM, Tim Spriggs wrote:
Paul B. Henson wrote:
On Thu, 20 Sep 2007, Tim Spriggs wrote:
The x4500 is very sweet and the only thing stopping us from
buying two
instead of another shelf is the fact that we have lost pools on
Sol10u3
servers and there is no easy
eric kustarz wrote:
On Sep 21, 2007, at 3:50 PM, Tim Spriggs wrote:
m2# zpool create test mirror iscsi_lun1 iscsi_lun2   # mirrored pool across two iSCSI LUNs
m2# zpool export test                                # cleanly release the pool from m2
m1# zpool import -f test                             # force-import it on m1
m1# reboot
m2# reboot
Since I haven't actually looked into what problem caused your pools to
become damaged/lost, I can
On Thu, 20 Sep 2007, eric kustarz wrote:
As far as quotas, I was less than impressed with their implementation.
Would you mind going into more details here?
The feature set was fairly extensive, they supported volume quotas for
users or groups, or qtree quotas, which are similar to the ZFS quota
On Fri, 21 Sep 2007, Andy Lubel wrote:
Yeah its fun to see IBM compete with its OEM provider Netapp.
Yes, we had both IBM and Netapp out as well. I'm not sure what the point
was... We do have some IBM SAN equipment on site, I suppose if we had gone
with the IBM variant we could have
On Fri, 21 Sep 2007, James F. Hranicky wrote:
It just seems rather involved, and relatively inefficient to continuously
be mounting/unmounting stuff all the time. One of the applications to be
deployed against the filesystem will be web service, I can't really
envision a web server with
On Fri, 21 Sep 2007, Mike Gerdts wrote:
MS-DFS could be helpful here. You could have a virtual samba instance
that generates MS-DFS redirects to the appropriate spot. At one point in
That's true, although I rather detest Microsoft DFS (they stole the acronym
from DCE/DFS, even though
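For reference, the Samba side of that is a share flagged as an msdfs root, with the redirects expressed as specially formatted symlinks inside it; the server and share names below are invented:

  [global]
      host msdfs = yes

  [users]
      path = /export/dfsroot
      msdfs root = yes

Inside /export/dfsroot, a link per user points the client at the right server, e.g. ln -s 'msdfs:server1\jdoe' jdoe.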
On Fri, 21 Sep 2007, Tim Spriggs wrote:
Still, we are using ZFS but we are re-thinking on how to deploy/manage
it. Our original model had us exporting/importing pools in order to move
zone data between machines. We had done the same with UFS on iSCSI
[...]
When we don't move pools around, zfs
On Thu, Sep 20, 2007 at 12:49:29PM -0700, Paul B. Henson wrote:
I was planning to provide CIFS services via Samba. I noticed a posting a
while back from a Sun engineer working on integrating NFSv4/ZFS ACL
support
into Samba, but I'm not sure if that was ever completed and shipped
a few comments below...
Paul B. Henson wrote:
We are looking for a replacement enterprise file system to handle storage
needs for our campus. For the past 10 years, we have been happily using DFS
(the distributed file system component of DCE), but unfortunately IBM
killed off that product and
Paul B. Henson wrote:
One issue I have is that our previous filesystem, DFS, completely spoiled
me with its global namespace and location transparency. We had three fairly
large servers, with the content evenly dispersed among them, but from the
perspective of the client any user's files were
On 9/20/07 3:49 PM, Paul B. Henson [EMAIL PROTECTED] wrote:
On Thu, 20 Sep 2007, Richard Elling wrote:
50,000 directories aren't a problem, unless you also need 50,000 quotas
and hence 50,000 file systems. Such a large, single storage pool system
will be an outlier... significantly beyond
Andy Lubel wrote:
On 9/20/07 3:49 PM, Paul B. Henson [EMAIL PROTECTED] wrote:
On Thu, 20 Sep 2007, Richard Elling wrote:
That would also be my preference, but if I were forced to use hardware
RAID, the additional loss of storage for ZFS redundancy would be painful.
Would anyone
On Thu, Sep 20, 2007 at 12:49:29PM -0700, Paul B. Henson wrote:
On Thu, 20 Sep 2007, Richard Elling wrote:
50,000 directories aren't a problem, unless you also need 50,000 quotas
and hence 50,000 file systems. Such a large, single storage pool system
will be an outlier... significantly
On Thu, Sep 20, 2007 at 16:22:45 -0500, Gary Mills wrote:
: You should consider a Netapp filer. It will do both NFS and CIFS,
: supports disk quotas, and is highly reliable. We use one for 30,000
: students and 3000 employees. Ours has never failed us.
And they might only lightly sue you for
On Thu, 20 Sep 2007, James F. Hranicky wrote:
This can be solved using an automounter as well.
Well, I'd say more kludged around than solved ;), but again unless
you've used DFS it might not seem that way.
It just seems rather involved, and relatively inefficient to continuously
be
On Thu, 20 Sep 2007, Andy Lubel wrote:
Looks like its completely scalable but your boot time may suffer the more
you have. Just don't reboot :)
I'm not sure if it's accurate, but the SE we were meeting with claimed that
we could failover all of the filesystems to one half of the cluster,
On Thu, 20 Sep 2007, Tim Spriggs wrote:
We are in a similar situation. It turns out that buying two thumpers is
cheaper per TB than buying more shelves for an IBM N7600. I don't know
about power/cooling considerations yet though.
It's really a completely different class of storage though,
On Thu, 20 Sep 2007, Gary Mills wrote:
You should consider a Netapp filer. It will do both NFS and CIFS,
supports disk quotas, and is highly reliable. We use one for 30,000
students and 3000 employees. Ours has never failed us.
We had actually just finished evaluating Netapp before I
Paul B. Henson wrote:
On Thu, 20 Sep 2007, Tim Spriggs wrote:
We are in a similar situation. It turns out that buying two thumpers is
cheaper per TB than buying more shelves for an IBM N7600. I don't know
about power/cooling considerations yet though.
It's really a completely
Paul B. Henson wrote:
On Thu, 20 Sep 2007, James F. Hranicky wrote:
and due to the fact that snapshots counted toward ZFS quota, I decided
Yes, that does seem to remove a bit of their value for backup purposes. I
think they're planning to rectify that at some point in the future.
We're
On Thu, 20 Sep 2007, Tim Spriggs wrote:
It's an IBM re-branded NetApp which we are using for NFS and
iSCSI.
Ah, I see.
Is it comparable storage though? Does it use SATA drives similar to the
x4500, or more expensive/higher performance FC drives? Is it one of the
models that allows
On Thu, 20 Sep 2007, Chris Kirby wrote:
We're adding a style of quota that only includes the bytes referenced by
the active fs. Also, there will be a matching style for reservations.
some point in the future is very soon (weeks). :-)
I don't think my management will let me run Solaris
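The referenced-only quota being described here is what later shipped, if I have the names right, as refquota and refreservation; unlike plain quota, snapshot space no longer counts against the limit:

  # zfs set refquota=2g tank/home/jdoe         # caps only the data the live filesystem references
  # zfs set refreservation=1g tank/home/jdoe   # guaranteed space, again ignoring snapshots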
Paul B. Henson wrote:
Is it comparable storage though? Does it use SATA drives similar to the
x4500, or more expensive/higher performance FC drives? Is it one of the
models that allows connecting dual clustered heads and failing over the
storage between them?
I agree the x4500 is a sweet
On Sep 20, 2007, at 6:46 PM, Paul B. Henson wrote:
On Thu, 20 Sep 2007, Gary Mills wrote:
You should consider a Netapp filer. It will do both NFS and CIFS,
supports disk quotas, and is highly reliable. We use one for 30,000
students and 3000 employees. Ours has never failed us.
We had