Paul B. Henson wrote:
But all quotas were set in a single flat text file. Any time you added a new
quota, you had to turn quotas off and then back on, and quota
enforcement stayed disabled while the system recalculated space utilization.
I believe in later versions of the OS 'quota resize' did
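(The classic UFS quota workflow on Solaris had much the same shape; a rough
sketch for comparison, assuming a UFS filesystem mounted at /export/home:

    # quotaoff /export/home
    # edquota username
    # quotacheck /export/home
    # quotaon /export/home

Enforcement stays off for the duration of the quotacheck pass.)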
Paul B. Henson wrote:
One issue I have is that our previous filesystem, DFS, completely spoiled
me with its global namespace and location transparency. We had three fairly
large servers, with the content evenly dispersed among them, but from the
perspective of the client any user's files were
Richard Elling wrote:
I think this is a systems engineering problem, not just a ZFS problem.
Few have bothered to look at mount performance in the past because
most systems have only a few mounted file systems[1]. Since ZFS does
file system quotas instead of user quotas, now we have the
Brian H. Nelson wrote:
IMO, the quota-per-file-system approach seems inconvenient when you get
past a handful of file systems. Unless I'm really missing something, it
just seems like a nightmare to have to deal with such a ridiculous
number of file systems.
Seconded -- is there any chance
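For concreteness, a minimal sketch of the filesystem-per-user model under
discussion, with a hypothetical pool 'tank' and user 'jdoe' -- one ZFS
filesystem per account, each carrying its own quota:

    # zfs create tank/home
    # zfs set mountpoint=/home tank/home
    # zfs create tank/home/jdoe
    # zfs set quota=5g tank/home/jdoe

Multiply that by every login on the system and you arrive at the file system
(and mount) counts being complained about.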
Since we're talking about various hardware configs, does anyone know
which controllers with battery backup are supported on Solaris? If
we build a big ZFS box I'd like to be able to turn on write caching
on the drives but have them battery-backed in the event of a power
loss. Are 3ware cards going
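(For what it's worth, the on-disk write cache itself can be toggled from
format(1M) in expert mode on Solaris; a rough sketch from memory, and the
menus vary by disk and driver:

    # format -e
    ... select the disk ...
    format> cache
    cache> write_cache
    write_cache> enable

The battery-backed controller question is separate, of course -- enabling
the cache only makes sense if something preserves it across a power loss.)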
Robert Milkowski wrote:
Hello James,
Wednesday, January 24, 2007, 3:20:14 PM, you wrote:
JFH Since we're talking about various hardware configs, does anyone know
JFH which controllers with battery backup are supported on Solaris? If
JFH we build a big ZFS box I'd like to be able to turn
Eric Schrock wrote:
On Tue, Dec 12, 2006 at 07:53:32AM -0800, Jim Hranicky wrote:
- I know I can attach it via the zpool commands, but is there a way to
kickstart the attachment process if it fails to attach automatically upon
disk failure?
Yep. Just do a 'zpool replace zmir target spare'.
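For anyone finding this in the archives, that expands to something like the
following, with hypothetical device names (c1t2d0 is the failed disk,
c1t3d0 the configured hot spare):

    # zpool status zmir
    # zpool replace zmir c1t2d0 c1t3d0

The spare then resilvers in place of the failed disk.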
Jim Davis wrote:
Have you tried using the automounter as suggested by the linux faq?:
http://nfs.sourceforge.net/#section_b
Yes. On our undergrad timesharing system (~1300 logins) we actually hit
that limit with a standard automounting scheme. So now we make static
mounts of the Netapp
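As a rough illustration of the two schemes on the client side (server name
and paths hypothetical): a wildcard autofs map entry versus a static mount
in /etc/fstab on a Linux client:

    # wildcard autofs map entry (auto_home / auto.home)
    *       netapp1:/vol/home/&

    # static mount in /etc/fstab
    netapp1:/vol/home/jdoe  /home/jdoe  nfs  rw,hard,intr  0 0

The static approach sidesteps the limit mentioned above, at the cost of
maintaining the mount list by hand.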
Eric Schrock wrote:
On Tue, Dec 12, 2006 at 02:08:57PM -0500, James F. Hranicky wrote:
Sure, but that's what I want to avoid. The FMA agent should do this by
itself, but it isn't, so I guess I'm just wondering why, or if there's
a good way to get it to do so. If this happens in the middle
Eric Schrock wrote:
Hmmm, it means that we correctly noticed that the device had failed, but
for whatever reason the ZFS FMA agent didn't correctly replace the
drive. I am cleaning up the hot spare behavior as we speak so I will
try to reproduce this.
Ok, great.
Well, as long as I know
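In case it is useful to anyone else chasing this, one way to check whether
the fault was at least diagnosed before the agent was supposed to act
(a sketch; output varies):

    # fmdump
    # fmadm faulty
    # zpool status -x

If fmdump shows the ZFS fault but the spare never attached, the problem is
on the agent/response side rather than in detection.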
[ Sorry, this bounced the first time so I subscribed to the list ]
Sanjeev Bagewadi wrote:
Jim,
We did hit a similar issue yesterday on build 50 and build 45, although the
node did not hang.
In one of the cases we saw that the hot spare was not of the same
size... can you check if this is true?
It
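One quick way to compare the spare's size against the disk it is covering
for (device names hypothetical):

    # iostat -En c1t3d0 | grep Size
    # iostat -En c1t4d0 | grep Size

A spare smaller than the device it replaces will not attach.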