On Apr 25, 2012, at 2:26 PM, Paul Archer wrote:

> 2:20pm, Richard Elling wrote:
> 
>> On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
>> 
>>>> Interesting, something more complex than NFS to avoid the
>>>> complexities of NFS? ;-)
>>> 
>>> We have data coming in on multiple nodes (with local storage) that is
>>> needed on other multiple nodes. The only way to do that with NFS would
>>> be with a matrix of cross mounts that would be truly scary.
>> Ignoring lame NFS clients, how is that architecture different than what
>> you would have with any other distributed file system? If all nodes
>> share data to all other nodes, then...?
>>  -- richard
>> 
> 
> Simple. With a distributed FS, all nodes mount from a single DFS. With NFS, 
> each node would have to mount from each other node. With 16 nodes, that's 
> what, 240 mounts? Not to mention your data is in 16 different 
> mounts/directory structures, instead of being in a unified filespace.

Unified namespace doesn't relieve you of the 240 cross-mounts or their
equivalents; with 16 nodes each mounting the other 15, that's 16 x 15 = 240.
FWIW, automounters were invented 20+ years ago to handle this in a nearly
seamless manner. Today, we have DFS from Microsoft and NFS referrals that
almost eliminate the need for automounter-like solutions.
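
A rough sketch of the automounter version (map and path names here are
invented for illustration): an autofs indirect map gives every node's
export a slot under one directory, mounted on first access and unmounted
when idle:

    # /etc/auto_master entry
    /data    auto_data

    # auto_data map: one line per data node
    node01    node01:/export/data
    node02    node02:/export/data
    ...
    node16    node16:/export/data

Clients then see /data/node01 through /data/node16 with no static
cross-mount matrix. (The stock -hosts map under /net does much the same
with zero per-node configuration.)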

Also, it is not unusual for an NFS environment to have 10,000+ mounts,
with thousands of mounts on each server. No big deal; it happens every day.

On Apr 25, 2012, at 2:53 PM, Nico Williams wrote:
> To be fair, NFSv4 now has a distributed namespace scheme, so you could
> still have a single mount on the client.  That said, some DFSes have
> better properties, such as striping of data across sets of servers,
> aggressive caching, and various choices of semantics (e.g., Lustre
> tries hard to give you POSIX cache coherency semantics).
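
For concreteness, a sketch of the referral approach Nico describes, using
the refer= export option from Linux nfs-utils (hostnames and paths here
are invented): one namespace server exports stub directories that refer
clients to the node actually holding the data:

    # /etc/exports on the namespace server
    /data          *(ro,fsid=0)
    /data/node01   *(ro,refer=/export/data@node01)
    /data/node02   *(ro,refer=/export/data@node02)

    # on a client: a single NFSv4 mount; referrals redirect transparently
    mount -t nfs4 nsserver:/ /data

Solaris 11 exposes the same idea through nfsref(1M).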


I think this is where the real value is. NFS & CIFS are intentionally
generic, and their caching policies are favorably described as generic.
For special-purpose workloads there can be advantages to having policies
more explicitly tailored to the workload.
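
To illustrate how generic those policies are, the main client-side cache
controls NFS gives you are per-mount timeouts (the option values below
are arbitrary examples):

    mount -o actimeo=0  server:/export /mnt   # near-coherent, but chatty
    mount -o actimeo=60 server:/export /mnt   # fast, up to 60s of staleness
    mount -o noac       server:/export /mnt   # no attribute caching at all

A DFS like Lustre instead keeps client caches coherent with distributed
locking, so the tradeoff isn't left to a per-mount timer.
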
 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
