Well, it just so happens that we have a (wicked) fast NFS server in-house, which has the added benefit of being fault-tolerant, reliable, and redundant (and clearly fully buzzword-compliant). The way I look at it, I'd *love* to use it for this purpose, and if flush() is pretty much the only constraint, I doubt that I have an issue. Clearly, however, the 'update=false' (or stale='ok', or whatever) behavior will be a much bigger problem.

That said, any particular reason we don't want to have a read-only view? I know that in our particular case (processing vast quantities of CDRs), I really don't care about the data/indexes being up to date - heck, accuracy to the nearest hour will do :-)
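To make that concrete, here is roughly what I have in mind - purely a sketch, assuming a query parameter along the lines of stale=ok / update=false actually exists, and with made-up database and view names ('cdrs', 'stats', 'by_hour'):

  import json
  import urllib.request

  # Hypothetical names throughout; the only point is that the query
  # never asks CouchDB to rebuild the view index, it just reads
  # whatever index is already on disk.
  url = ("http://localhost:5984/cdrs/_design/stats/_view/by_hour"
         "?stale=ok&limit=10")

  with urllib.request.urlopen(url) as resp:
      rows = json.load(resp)["rows"]

  for row in rows:
      print(row["key"], row["value"])

If the index only ever gets rebuilt by one designated job (a cron entry once an hour, say) and every other reader queries it this way, the multiple-readers-over-NFS picture looks a lot less scary to me.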

cheers
---
Mahesh Paolini-Subramanya
CTO, Aptela Inc.
(703.386.1500 x9100)
http://www.aptela.com

On Nov 11, 2008, at 12:26 PM, ara.t.howard wrote:


On Nov 11, 2008, at 10:03 AM, Mahesh Paolini-Subramanya wrote:

Assuming just one CouchDB server, would having the data store be
NFS-based even work?
What about multiple servers?  I can definitely see chaos emerging
from multiple servers building View indexes simultaneously, but what
if read access was done with 'update=false'?
What about writes with multiple servers?

Just wondering if there are any *immediate* gotchas that I should be
aware of (e.g., NFS - Just Say No :-)  )

it depends *greatly* on what your NFS server is.  i've run linux-ha
machines with stonith serving postgresql on top of NFS and it was more
robust and faster than local disk.  how is that, you ask?  huge ram
cache and battery-backed ram - thus any call to 'flush()' is
effectively a noop and therefore blindingly fast *and* robust.  we
failed over manually every monday and did so for over 3 years in a
24x7 situation with zero hitches.  this was a 250k netapp however.
if your NFS box is just a linux box w/o special hardware i'd be nervous.
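
if you want to sanity-check whether your particular box behaves that
way, a crude probe like the one below tells you a lot - just a sketch,
/mnt/nfs stands in for whatever your real mount point is, and the
numbers only matter relative to each other:

  import os
  import time

  # time a burst of small write+fsync cycles; on a box with
  # battery-backed write cache each fsync should come back in well
  # under a millisecond, on bare spinning disk it will not.
  def fsync_latency(path, n=100):
      fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
      try:
          start = time.time()
          for _ in range(n):
              os.write(fd, b"x" * 4096)
              os.fsync(fd)
          return (time.time() - start) / n
      finally:
          os.close(fd)
          os.unlink(path)

  print("nfs   :", fsync_latency("/mnt/nfs/fsync_probe"))  # hypothetical mount
  print("local :", fsync_latency("/tmp/fsync_probe"))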

cheers.

a @ http://codeforpeople.com/
--
we can deny everything, except that we have the possibility of being
better. simply reflect on that.
h.h. the 14th dalai lama
