On 2012-04-26 2:20, Ian Collins wrote:
On 04/26/12 09:54 AM, Bob Friesenhahn wrote:
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would
stop one from having, on a single file server, /exports/nodes/node[0-15],
and having each node NFS-mount /exports/nodes from the server? Much simpler
than your example, and all data is available on all machines/nodes.
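For reference, the simple single-server approach described above would look roughly like this on each client node (the server name "fileserver" and the paths are illustrative, assuming a Solaris-style client):

```shell
# One-off NFS mount of the shared export (hypothetical hostname):
mount -F nfs fileserver:/exports/nodes /exports/nodes

# Or make it persistent via an /etc/vfstab entry:
# device to mount            device to fsck  mount point      FS type  fsck pass  mount at boot  options
# fileserver:/exports/nodes  -               /exports/nodes   nfs      -          yes            rw
```

On Linux the equivalent would be `mount -t nfs` and an /etc/fstab entry.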
This solution would limit bandwidth to that available from that single
server. With the cluster approach, the objective is for each machine
in the cluster to primarily access files which are stored locally.
Whole files could be moved as necessary.
Distributed software building faces similar issues, but I've found that once
the common files have been read (and cached) by each node, network
traffic becomes one-way (to the file server). I guess that topology
works well when most access to shared data is read.
Which reminds me: older Solaris releases used to have a nifty-looking
(going by its descriptions) CacheFS, apparently intended to speed up NFS
clients and reduce network traffic, which we never got to use in real
life. AFAIK Oracle EOLed it for Solaris 11, and I am not sure
it is in illumos either.
Does caching in the current Solaris/illumos NFS client replace those
benefits, or did the project have some merits of its own (such as
caching to the client's local storage, so that the cache was not
empty after a reboot)?
zfs-discuss mailing list