Hi Jason, this should have helped:
6542676 ARC needs to track meta-data memory overhead
Some of the relevant lines from arc.c:
1551        if (arc_meta_used >= arc_meta_limit) {
1552                /*
1553                 * We are exceeding our meta-data cache
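For reference, the block continues roughly as below. This is a from-memory sketch of the OpenSolaris arc.c of that era; the dnlc_reduce_cache() call in particular is my recollection of the 6542676 fix rather than a verbatim quote, so check the actual source:

        /*
         * Sketch only: the test and the start of the comment are quoted
         * above; the rest is reconstructed from memory and may not match
         * the actual arc.c source line for line.
         */
        if (arc_meta_used >= arc_meta_limit) {
                /*
                 * We are exceeding our meta-data cache limit.
                 * Purge some DNLC entries to release holds on meta-data.
                 */
                dnlc_reduce_cache((void *)(uintptr_t)arc_reduce_dnlc_percent);
        }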
On 9/25/07 3:37 AM, Sergiy Kolodka [EMAIL PROTECTED]
wrote:
Hi Guys,
I'm playing with a Blade 6300 to check the performance of compressed ZFS with an
Oracle database.
After some really simple tests I noticed that the default (well, not really
default, some patches applied, but definitely no one bother
Paul B. Henson wrote:
But all quotas were set in a single flat text file. Anytime you added a new
quota, you needed to turn off quotas, then turn them back on, and quota
enforcement was disabled while it recalculated space utilization.
I believe in later versions of the OS 'quota resize' did
Where is ZFS with regards to the NVRAM cache present on arrays?
I have a pile of 3310 with 512 megs cache, and even some 3510FC with 1-gig
cache. It seems silly that it's going to waste. These are dual-controller
units so I have no worry about loss of cache information.
It looks like
On Tue, Sep 25, 2007 at 10:14:57AM -0700, Vincent Fox wrote:
Where is ZFS with regards to the NVRAM cache present on arrays?
I have a pile of 3310 with 512 megs cache, and even some 3510FC with 1-gig
cache. It seems silly that it's going to waste. These are dual-controller
units so I
On Tue, 2007-09-25 at 10:14 -0700, Vincent Fox wrote:
Where is ZFS with regards to the NVRAM cache present on arrays?
I have a pile of 3310 with 512 megs cache, and even some 3510FC with
1-gig cache. It seems silly that it's going to waste. These are
dual-controller units so I have no
On 9/24/07, Paul B. Henson [EMAIL PROTECTED] wrote:
On Sat, 22 Sep 2007, Peter Tribble wrote:
filesystem per user on the server, just to see how it would work. While
managing 20,000 filesystems with the automounter was trivial, the attempt
to manage 20,000 zfs filesystems wasn't entirely
Hi. I'd like to request a feature be added to zfs. Currently, on
SAN attached disk, zpool shows up with a big WWN for the disk. If
ZFS (or the zpool command, in particular) had a text field for
arbitrary information, it would be possible to add something that
would indicate what LUN on
Gregory Shaw wrote:
Hi. I'd like to request a feature be added to zfs. Currently, on
SAN attached disk, zpool shows up with a big WWN for the disk. If
ZFS (or the zpool command, in particular) had a text field for
arbitrary information, it would be possible to add something that
James C. McPherson wrote:
Gregory Shaw wrote:
Hi. I'd like to request a feature be added to zfs. Currently, on
SAN attached disk, zpool shows up with a big WWN for the disk. If
ZFS (or the zpool command, in particular) had a text field for
arbitrary information, it would be
Tim Spriggs wrote:
James C. McPherson wrote:
Gregory Shaw wrote:
...
The above would be very useful, should a disk fail, for identifying
which device is which.
How would you gather that information?
How would you ensure that it stayed accurate in
a hotplug world?
If it is stored on the device
On Mon, 24 Sep 2007, Dale Ghent wrote:
Not to sway you away from ZFS/NFS considerations, but I'd like to add
that people who in the past used DFS typically went on to replace it with
AFS. Have you considered it?
You're right, AFS is the first choice that comes to mind when replacing DFS. We
On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
How would you gather that information?
the tools to use would be dependent on the actual storage device in use.
luxadm for A5x00 and V8x0 internal storage, sccli for 3xxx, etc., etc.,
How would you ensure that it stayed accurate in
On Tue, 25 Sep 2007, Peter Tribble wrote:
This was some time ago (a very long time ago, actually). There are two
fundamental problems:
1. Each zfs filesystem consumes kernel memory. Significant amounts; 64K
is what we worked out at the time. For normal numbers of filesystems that's
not a
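To put a number on that (my own arithmetic, not from the thread): at the 64K-per-filesystem figure above, the 20,000-filesystem scenario mentioned earlier in the thread works out to roughly 1.2 GB of kernel memory. A minimal C sketch of the calculation, assuming those two figures:

        #include <stdio.h>

        /*
         * Rough estimate only. Assumes ~64 KB of kernel memory per mounted
         * ZFS filesystem (the figure worked out above) and the 20,000
         * filesystems from the one-filesystem-per-user scenario earlier
         * in the thread.
         */
        int
        main(void)
        {
                long n_filesystems = 20000;
                long bytes_per_fs = 64 * 1024;
                double total_gb = (double)n_filesystems * bytes_per_fs /
                    (1024.0 * 1024.0 * 1024.0);

                printf("%ld filesystems * 64 KB each = ~%.2f GB of kernel memory\n",
                    n_filesystems, total_gb);
                return (0);
        }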
The io:::start probe does not seem to get ZFS filenames in
args[2]->fi_pathname. Any ideas how to get this info?
-neel
Bill Sommerfeld wrote:
On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
How would you gather that information?
the tools to use would be dependent on the actual storage device in use.
luxadm for A5x00 and V8x0 internal storage, sccli for 3xxx, etc., etc.,
No consistent
It would be a manual process. As with any arbitrary name, it's a useful
tag, not much more.
James C. McPherson wrote:
Gregory Shaw wrote:
Hi. I'd like to request a feature be added to zfs. Currently, on
SAN attached disk, zpool shows up with a big WWN for the disk. If
ZFS (or
James C. McPherson wrote:
Bill Sommerfeld wrote:
On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
How would you gather that information?
the tools to use would be dependent on the actual storage device in use.
luxadm for A5x00 and V8x0 internal storage, sccli
Greg Shaw wrote:
James C. McPherson wrote:
Bill Sommerfeld wrote:
On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
How would you gather that information?
the tools to use would be dependent on the actual storage device in use.
luxadm for A5x00 and V8x0 internal
Dale Ghent wrote:
On Sep 25, 2007, at 7:48 PM, Richard Elling wrote:
The problem with this is that wrong information is much worse than no
information: there is no way to automatically validate the information,
and therefore people are involved. If people were reliable, then even
a text
On Sep 25, 2007, at 7:09 PM, Richard Elling wrote:
Dale Ghent wrote:
On Sep 25, 2007, at 7:48 PM, Richard Elling wrote:
The problem with this is that wrong information is much worse than no
information: there is no way to automatically validate the information,
and therefore people are
On 9/25/07, Gregory Shaw [EMAIL PROTECTED] wrote:
On Sep 25, 2007, at 7:09 PM, Richard Elling wrote:
Dale Ghent wrote:
On Sep 25, 2007, at 7:48 PM, Richard Elling wrote:
The problem with this is that wrong information is much worse than no
information: there is no way to automatically
On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
I don't understand. How do you
set up one LUN that has all of the NVRAM on the array dedicated to it?
I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
thick here, but can you be more specific for the n00b?
If
Please don't do this as a rule; it makes for horrendous support issues
and breaks a lot of health check tools.
Actually, you can use the existing name space for this. By default,
ZFS uses /dev/dsk. But everything in /dev is a symlink. So you could
set up your own space, say
Can anybody tell me about the RAID-Z architecture? I tried to understand it by
searching on Google, but it isn't clear to me. I don't know why it can beat RAID-5. I
know it solves the RAID-5 write hole because it has a copy-on-write feature for
data integrity. But I don't understand about write full