From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
So now I'll change meta_max and
see if it helps...
Oh, you know what? Never mind.
I just looked at the source, and it seems arc_meta_max is just a gauge for
you to use, so you
On 09-05-11 14:36, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
So now I'll change meta_max and
see if it helps...
Oh, you know what? Never mind.
I just looked at the source, and it seems
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Van Damme
in my previous
post my arc_meta_used was bigger than my arc_meta_limit (by about 50%)
I have the same thing. But as I sit here and run more and more extensive
tests on it
On Mon, May 9, 2011 at 2:11 AM, Evaldas Auryla <evaldas.aur...@edqm.eu> wrote:
On 05/ 6/11 07:21 PM, Brandon High wrote:
On Fri, May 6, 2011 at 9:15 AM, Ray Van Dolson <rvandol...@esri.com>
wrote:
We use dedupe on our VMware datastores and typically see 50% savings,
often times more. We do of
On 09-05-11 15:42, Edward Ned Harvey wrote:
in my previous
post my arc_meta_used was bigger than my arc_meta_limit (by about 50%)
I have the same thing. But as I sit here and run more and more extensive
tests on it ... it seems like arc_meta_limit is sort of a soft limit. Or it
only
Greetings,
I'm about to deploy an HP DL380 running x86 Solaris with a pair of P212/256 SAS
cards and an HP MDS 600 with 70 x 1TB drives - each card will be connected to
one half of the MDS 600.
I'm just mulling over the best configuration for this system - our workload is
mostly writing
I've got a system with 24 Gig of RAM, and I'm running into some interesting
issues playing with the ARC, L2ARC, and the DDT. I'll post a separate thread
here shortly. I think even if you add more RAM, you'll run into what I'm
noticing (and posting about).
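For a rough sense of why 24 GB of RAM can still be tight with dedup, here is a back-of-the-envelope sketch of DDT memory needs. The ~320 bytes per in-core DDT entry is a commonly quoted rule of thumb (zdb -DD reports the actual figure for a given pool), and the helper name and example numbers are mine, not from the thread:

```python
# Rough DDT sizing sketch (rule-of-thumb numbers, not exact).
def ddt_ram_bytes(unique_blocks, bytes_per_entry=320):
    """Estimate RAM needed to keep the whole dedup table in ARC."""
    return unique_blocks * bytes_per_entry

# Hypothetical example: 1 TiB of data in 128 KiB records, all unique.
pool_bytes = 1 << 40
recordsize = 128 * 1024
entries = pool_bytes // recordsize            # 8,388,608 blocks
print(ddt_ram_bytes(entries) / (1 << 30))     # 2.5 (GiB, just for the DDT)
```

And that estimate competes with everything else the ARC caches, so the DDT spilling to L2ARC (or to the pool itself) is easy to hit well before RAM looks "full".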
-----Original Message-----
From:
Hello,
I'm on FreeBSD 9 with ZFS v28, and it's possible this combination is causing
my issue, but I thought I'd start here first and will cross-post to the FreeBSD
ZFS threads if the Solaris crowd thinks this is a FreeBSD problem.
The issue: From carefully watching my ARC/L2ARC size and
On May 9, 2011, at 12:29 PM, Chris Forgeron wrote:
Hello,
I'm on FreeBSD 9 with ZFS v28, and it's possible this combination is causing
my issue, but I thought I'd start here first and will cross-post to the
FreeBSD ZFS threads if the Solaris crowd thinks this is a FreeBSD problem.
The
On 09 May, 2011 - Richard Elling sent me these 5,0K bytes:
of the pool -- not likely to be a winning combination. This isn't a problem
for the ARC because
it has memory bandwidth, which is, of course, always greater than I/O
bandwidth.
Slightly off topic, but we had an IBM RS/6000 43P with
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
BTW, here's how to tune it:
echo arc_meta_limit/Z 0x30000000 | sudo mdb -kw
echo ::arc | sudo mdb -k | grep meta_limit
arc_meta_limit = 768 MB
Well
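The value written with mdb's /Z format is a byte count in hex, so it helps to compute it rather than type it by hand. A minimal sketch (the helper name is mine; the arithmetic just converts MiB to the hex string mdb expects):

```python
# Convert a target arc_meta_limit in MiB to the hex byte count
# that "echo arc_meta_limit/Z <value> | mdb -kw" takes.
def mdb_hex(mebibytes):
    """Return the hex byte count for a limit of <mebibytes> MiB."""
    return hex(mebibytes * 1024 * 1024)

print(mdb_hex(768))  # 0x30000000, i.e. the 768 MB shown by ::arc
```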