Reducing the record size would negatively impact performance. For the rationale why, see the section titled "Match Average I/O Block Sizes" in my blog post on filesystem caching:
http://www.thezonemanager.com/2009/03/filesystem-cache-optimization.html

Brad
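The "match average I/O block sizes" idea can be sketched with a little shell arithmetic: round the observed average I/O size down to the nearest power of two (ZFS recordsize must be a power of two, at most 128K). The helper name and the `tank/db` dataset below are my placeholders, not from the post:

```shell
# Hypothetical helper: round an observed average I/O size (bytes) down to
# the nearest power-of-two recordsize, capped at ZFS's 128K maximum.
nearest_recordsize() {
  avg=$1
  rs=512
  while [ $((rs * 2)) -le "$avg" ] && [ $((rs * 2)) -le 131072 ]; do
    rs=$((rs * 2))
  done
  echo "$rs"
}

# Example: a workload averaging ~8500-byte I/Os maps to an 8K recordsize.
nearest_recordsize 8500
# The matching (hypothetical) tuning command would then be:
#   zfs set recordsize=8k tank/db
```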
Brad Diggs | Principal Sales Consultant
effectively leverage this caching potential, that won't happen. OUD far outperforms ODSEE. That said, OUD may get some focus in this area. However, time will tell on that one.

For now, I hope everyone benefits from the little that I did validate.

Have a great day!

Brad
S11 FCS.

Brad
Brad Diggs | Principal Sales Consultant | 972.814.3698
eMail: brad.di...@oracle.com
Tech Blog: http://TheZoneManager.com
LinkedIn: http://www.linkedin.com/in/braddiggs
On Dec 29, 2011, at 8:11 AM, Robert Milkowski wrote:

And these results are from S11 FCS I assume. On older builds or Illumos
/02/directory-data-priming-strategies.html

Thanks again!

Brad
On Dec 8, 2011, at 4:22 PM, Mark Musante wrote:

You can see the original ARC case here:
http://arc.opensolaris.org
that the L1ARC will also only require 1TB of RAM for the data.

Note that I know the deduplication table will use the L1ARC as well. However, the focus of my question is on how the L1ARC would benefit from a data caching standpoint.

Thanks in advance!

Brad
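The deduplication-table overhead mentioned above can be sanity-checked with quick shell arithmetic. The ~320 bytes per in-core DDT entry is an assumed ballpark figure, and 128K is the ZFS default recordsize; both numbers are mine, not from the thread:

```shell
# Back-of-envelope: RAM the dedup table itself might add on top of
# the 1TB of cached unique data.
unique_bytes=$((1024 * 1024 * 1024 * 1024))   # 1TB of unique data
recordsize=131072                              # 128K default recordsize
ddt_entry=320                                  # assumed bytes per in-core DDT entry
blocks=$((unique_bytes / recordsize))
ddt_bytes=$((blocks * ddt_entry))
echo "blocks=$blocks ddt_mb=$((ddt_bytes / 1024 / 1024))"
```

So at these assumed numbers, the DDT adds on the order of a few GB, small relative to the 1TB of data itself.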
Has anyone done much testing of just using the solid state devices (F20 or F5100) as devices for ZFS pools? Are there any concerns with running in this mode versus using solid state devices for L2ARC cache?

Second, has anyone done this sort of testing with MLC based solid state drives? What has your
to have someone do some benchmarking of MySQL in a cache optimized server with F20 PCIe flash cards, but never got around to it.

So, if you want to get all of the caching benefits of DmCache, just run your app on Solaris 10 today. ;-)

Have a great day!

Brad

Brad Diggs | Principal Security Sales Consultant
Have you considered running your script with ZFS pre-fetching disabled
altogether to see if
the results are consistent between runs?
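For reference, the usual way to disable ZFS file-level prefetch on Solaris of that era was a kernel tunable; this is a sketch of the two common approaches, and the specific values should be verified against the release in use:

```shell
# /etc/system fragment (takes effect at next reboot):
#   set zfs:zfs_prefetch_disable = 1
#
# Or flip the same variable on a live system with mdb (as root):
#   echo "zfs_prefetch_disable/W 1" | mdb -kw
```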
Brad
Brad Diggs
Senior Directory Architect
Virtualization Architect
xVM Technology Lead
Sun Microsystems, Inc.
Phone x52957/+1 972-992-0002
Mail
You might want to have a look at my blog on filesystem cache
tuning... It will probably help
you to avoid memory contention between the ARC and your apps.
http://www.thezonemanager.com/2009/03/filesystem-cache-optimization.html
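One common mitigation for ARC/application memory contention discussed in that post's era was capping the ARC via /etc/system; the 4GB value below is an arbitrary example, not a recommendation:

```shell
# /etc/system fragment: cap the ARC so it cannot squeeze application memory.
#   set zfs:zfs_arc_max = 4294967296
# Verify the effective ARC size afterwards with:
#   kstat -n arcstats
```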
Brad
PM 6/5/, Brad Diggs wrote:
Hi Keith,
Sure you can truncate some files but that effectively corrupts
the files in our case and would cause more harm than good. The
only files in our volume are data files.
So an rm is ok, but a truncation is not?
Seems odd to me
Is there an existing bug on this that is going to address
enabling the removal of a file without the pre-requisite
removal of a snapshot?
Thanks in advance,
Brad
--
Brad Diggs
How do you ascertain the current zfs vdev cache size (e.g.
zfs_vdev_cache_size) via mdb or kstat or any other cmd?
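One way to do this on Solaris of that era (sketch; run as root, and note that mdb inspects the tunable while kstat reports cache activity rather than the size itself):

```shell
# Print the current value of the tunable with mdb:
#   echo "zfs_vdev_cache_size/D" | mdb -k
# Observe vdev cache effectiveness (hits, misses, delegations) with kstat:
#   kstat -n vdev_cache_stats
```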
Thanks in advance,
Brad
--
The Zone Manager
http://TheZoneManager.COM
http://opensolaris.org/os/project/zonemgr
Hello,
Is the gzip compression algorithm planned to be in Solaris 10 Update 5?
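For context, on releases where the gzip algorithm is available, it is enabled per dataset (the `tank/fs` name is a placeholder):

```shell
#   zfs set compression=gzip tank/fs
# A specific level (1-9) can also be requested:
#   zfs set compression=gzip-9 tank/fs
# And the achieved ratio checked with:
#   zfs get compressratio tank/fs
```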
Thanks in advance,
Brad
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Hello Darren,
Please find responses in line below...
On Fri, 2008-02-08 at 10:52 +, Darren J Moffat wrote:
Brad Diggs wrote:
I would like to use ZFS but with ZFS I cannot prime the cache
and I don't have the ability to control what is in the cache
(e.g. like with the directio UFS
Hello,
I have a unique deployment scenario where the marriage
of ZFS zvol and UFS seem like a perfect match. Here are
the list of feature requirements for my use case:
* snapshots
* rollback
* copy-on-write
* ZFS level redundancy (mirroring, raidz, ...)
* compression
* filesystem cache control
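The zvol-plus-UFS combination above can be sketched as follows (pool name, volume size, and mount point are placeholders): ZFS supplies the snapshot, rollback, copy-on-write, redundancy, and compression features at the zvol layer, while UFS mounted with forcedirectio supplies the filesystem cache control:

```shell
# Placeholders: "tank" pool, 10G volume, /data mount point.
#   zfs create -V 10g tank/ufsvol
#   newfs /dev/zvol/rdsk/tank/ufsvol
#   mount -o forcedirectio /dev/zvol/dsk/tank/ufsvol /data
# Snapshots and rollback then happen at the zvol layer:
#   zfs snapshot tank/ufsvol@before-load
```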
What would you want to observe if your system hit the upper
limit in zfs_max_phys_mem?
I would want zfs to behave well and safely like every other app on which you
apply boundary conditions. It is the responsibility of zfs to know its
boundaries and stay within them. Otherwise, your system