More than that :)

It's a very short duration, but we have the potential for tens of thousands of clients doing writes all at the same time. I have the farm spread out over 16 servers, each with 2x 4Gb fiber cards into big disk arrays, but my reads do get slow (degrading the end-user experience) when these write bursts come in, and if I could buffer the writes for even 60 seconds, it would make everything much smoother.


Is there a way to optimize the ARC for more write buffering, and push more read caching off into the L2ARC?
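For concreteness, these are the sorts of knobs that exist on OpenSolaris-era ZFS; the pool name (tank), dataset name (tank/webdav), device name (c1t2d0), and the 24 GB figure are all placeholders, not recommendations:

```shell
# /etc/system entry: cap the ARC so it can't grow into all of RAM
# (value is in bytes; 24 GB here is illustrative)
set zfs:zfs_arc_max=25769803776

# Add an SSD as an L2ARC cache device (device name is a placeholder):
zpool add tank cache c1t2d0

# Per-dataset: keep only metadata in the ARC and let data blocks
# fall through to the L2ARC
zfs set primarycache=metadata tank/webdav
zfs set secondarycache=all tank/webdav
```

One caveat: as I understand it, the ARC proper is a read cache, and async writes are staged as dirty data and flushed on the transaction-group schedule, so capping zfs_arc_max mostly protects read caching rather than creating a bigger write buffer. Whether these knobs actually help the write path here is exactly the open question.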

Again, I'm only worried about short bursts that happen once or twice a day. The rest of the time everything runs very smoothly.


Thanks.



Eric D. Mudama wrote:
On Fri, Apr 10 at 8:07, Patrick Skerrett wrote:
Thanks for the explanation folks.

So if I cannot get Apache/WebDAV to write synchronously (and it does not look like I can), then is it possible to tune the ARC to be weighted more heavily toward write buffering?

My biggest problem is with very quick spikes in writes periodically throughout the day. If I were able to buffer these better, I would be in pretty good shape. The machines are already (economically) maxed out on RAM at 32 GB.

If I were to add SSD L2ARC devices for read caching, can I configure the ARC to give up some of its read caching in favor of more write buffering?

I think in most cases, the raw spindle throughput should be enough to
handle your load, or else you haven't sized your arrays properly.
Bursts of async writes of relatively large size should be headed to
the media at somewhere around 50-100 MB/s per vdev, I would think. How much
burst IO do you have?
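To put that 50-100 MB/s/vdev figure against the 60-second window mentioned upthread, a quick back-of-envelope sketch; the vdev count per server is an assumption, since the thread doesn't state the pool layout:

```python
# Back-of-envelope burst absorption, using the 50-100 MB/s/vdev
# figure above. All inputs are illustrative assumptions.

vdevs_per_server = 4    # assumed layout; not stated in the thread
write_rate_mb_s = 50    # conservative end of the 50-100 MB/s/vdev range
burst_seconds = 60      # the buffering window mentioned upthread

# Sustained async-write drain rate for one server:
drain_mb_s = vdevs_per_server * write_rate_mb_s

# Data one server could absorb over the burst window:
absorbed_gb = drain_mb_s * burst_seconds / 1024

print(f"drain: {drain_mb_s} MB/s, absorbed in {burst_seconds}s: {absorbed_gb:.1f} GB")
```

Even at the conservative end, each server drains 200 MB/s and soaks up roughly 12 GB over a minute, which is why the vdev count and actual burst size matter more than any ARC tuning.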


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
