On 04/26/12 04:17 PM, Ian Collins wrote:
On 04/26/12 10:12 PM, Jim Klimov wrote:
On 2012-04-26 2:20, Ian Collins wrote:
On 04/26/12 09:54 AM, Bob Friesenhahn wrote:
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would
stop one from having, on a single file server, /exports/nodes/node[0-15],
and having each node NFS-mount /exports/nodes from the server? Much
like your example, and all data is available on all machines/nodes.
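For concreteness, that client-side mount would look roughly like the
following /etc/vfstab entry (a sketch only; "fileserver" is a placeholder
hostname, and the mount options are just common defaults):

```shell
# /etc/vfstab entry on each node: mount the shared tree from the file server.
# Fields: device-to-mount  device-to-fsck  mount-point  fstype  fsck-pass  mount-at-boot  options
fileserver:/exports/nodes  -  /exports/nodes  nfs  -  yes  rw,bg,hard
```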
This solution would limit bandwidth to that available from that single
server. With the cluster approach, the objective is for each machine
in the cluster to primarily access files which are stored locally.
Whole files could be moved as necessary.
Distributed software building faces similar issues, but I've found once
the common files have been read (and cached) by each node, network
traffic becomes one way (to the file server). I guess that topology
works well when most access to shared data is read.
Which reminds me: older Solaris releases used to have a nifty-looking
(going by its descriptions) cachefs, apparently meant to speed up NFS
clients and reduce traffic, which we never got to really use in real
life. AFAIK Oracle EOLed it for Solaris 11, and I am not sure
it is in illumos either.
I don't think it even made it into Solaris 10. I used to use it with
Solaris 8 back in the days when 100Mb switches were exotic!
cachefs is present in Solaris 10. It is EOL'd in S11.
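For anyone who never ran into it, cachefs setup on those releases looked
roughly like this (a sketch only; the cache directory and server name are
placeholders):

```shell
# Create a local on-disk backing store for the cache (path is a placeholder).
cfsadmin -c /local/cachefs

# Mount the NFS share through cachefs so repeated reads are served
# from the local cache instead of going over the wire.
mount -F cachefs -o backfstype=nfs,cachedir=/local/cachefs \
    fileserver:/exports/nodes /exports/nodes
```

Because the cache lived on local disk rather than in RAM, it survived a
reboot, which is the benefit Jim asks about below.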
It did have local backing store, but my current desktop has more RAM
than that Solaris 8 box had disk and my network is 100 times faster,
so it doesn't really matter any more.
Does caching in the current Solaris/illumos NFS client replace those
benefits, or did the project have some merits of its own (like
caching into local storage on the client, so that the cache was not
empty after a reboot)?
zfs-discuss mailing list