The delay of writing 100+MB files out across the T-1 wouldn't be acceptable. Writes occur roughly equally at both sites, with no clear winner there.

That's the reason I'm looking at distributed filesystems: this truly is a case that needs one, since both sites need high-speed RW access to the files.

So am I right in understanding that AFS doesn't allow multiple RW replicas? You have one 'master' RW volume and other RO replicas, and when a write occurs, does the whole file go upstream, or only the changes from the client's cache? I understand the difficulties of having multiple RW replicas (the locking issues, etc.), but the concern is to keep from transmitting entire 100MB files all the time and to allow both teams to work efficiently.



Replicating the hardware at both sites isn't a problem; I'm just trying to avoid spending $30K+ on something from (insert large storage vendor here).

--On Saturday, December 07, 2002 10:02 PM -0600 Nathan Neulinger <[EMAIL PROTECTED]> wrote:

You don't really provide enough detail of the nature of the file access
to give a reasonable answer.

Are the files being accessed equally from both sites, or predominantly
from one? Are they mostly being read, and only rarely written to?

Is one site subordinate to the other - i.e. a primary facility and a
small adjunct group off site?

Depending on the usage patterns, AFS may or may not be a good solution
for you.

If primary access is read-only, and you don't mind some delays when
updates take place, you could easily put a file server at both sites,
and replicate the volume, with the RW copy of the volume being at the
site that does most of the RW accesses.

For read-only accesses, the local replica can be used; for RW, the
clients would talk to the remote file server if necessary.
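For reference, a minimal sketch of the volume-replication setup described above, using the standard OpenAFS `vos` commands. The server names (`fs-site1`, `fs-site2`), partition (`/vicepa`), and volume name (`design`) are hypothetical placeholders:

```shell
# Create the RW volume at the site that does most of the writes (site 1 here).
vos create fs-site1 /vicepa design

# Define read-only replica sites: one alongside the RW copy, one at site 2.
vos addsite fs-site1 /vicepa design
vos addsite fs-site2 /vicepa design

# Push the current RW contents out to the RO replicas.
# Re-run after each batch of updates; clients at site 2 then read
# from their local replica, while writes still go to the RW volume.
vos release design
```

Only the single RW volume accepts writes; the RO replicas change only when `vos release` is run, which is where the "some delays when updates take place" come from.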

-- Nathan


On Sat, 2002-12-07 at 19:33, Michael Loftis wrote:
Here's the situation I've got: two sites with 100+GB of files that need to be
shared between them by a design group.  The sites are/will be
connected via a point-to-point T-1 forming a private network.

My question is which would be better at handling this sort of scenario,
and how best to handle it.  All of the Coda systems I've examined so
far are either a client or a server, so setting up a server on either
side as a replicated (READ+WRITE) volume is a non-starter.

OpenAFS refused to run on any of the Linux machines I have had time to
try it on, though (usually non-fatally oopsing the kernel and/or
segfaulting, but always during the mount phase).  I don't really have
time right now to gather up all the details of the crashes; suffice to
say that's not what I'm here for.

What I wanted to know is, in the opinion of this group, for the
situation I have, which would be better?  Note that these files are
100MB or so on average and the T-1 is the only data link to the outside
world for the office.

TIA
_______________________________________________
OpenAFS-info mailing list
[EMAIL PROTECTED]
https://lists.openafs.org/mailman/listinfo/openafs-info
--

------------------------------------------------------------
Nathan Neulinger                       EMail:  [EMAIL PROTECTED]
University of Missouri - Rolla         Phone: (573) 341-4841
Computing Services                       Fax: (573) 341-4216


