> I have a blade processor that has no local file system.  The file system
> for the blade is provided by NFS on my local ubuntu workstation.  This is
> the general program flow:
> 
> 1. My python program on the workstation creates a datafile on the
> workstation for the remote blade to process.
> 2. The blade processes the input file and creates an output file.  I just
> managed to get this to work using paramiko.  So the blade has to transfer
> the data from NFS into local memory, process it, and send it from its
> local memory back to NFS, which is on the ubuntu box.
> 3. My python program then reads the file and displays the surface generated
> by the blade.
> 
> Since the files are all on my ubuntu box, I'd like to minimize the gbit
> network traffic to and from the blade.  I would think that the python step
> 3 file read would not have to be done through paramiko to the blade, but
> rather directly from the /srv/nfsroot/stuff directory on my ubuntu box.

Well, you could unplug the blade server and do all three steps on your
workstation.  I'll assume there's a good reason for doing step 2 on the
blade server.  There should be no problem executing step 3 on your
workstation.  NFS V3 does have a few "features" that can cause some
grief with local file system caches; however, in your case "close-to-open"
consistency should handle that.  Basically, when the blade closes the file,
its kernel is forced to flush all the data back to the NFS server on your
workstation, and then when you open the file on the workstation you should
be all set.  (On a different client, the open would force a check with the
server to see whether the file has changed, and even that should work.)

NFS locking is only important if two clients both have the file open and
are modifying it.
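
If you ever do end up with two writers, fcntl-style locking does work over
NFS (it goes through lockd).  A bare-bones sketch, with a made-up path:

import fcntl

with open("/srv/nfsroot/stuff/output.dat", "r+b") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)   # advisory lock, honored across NFS clients
    f.write(b"update")
    fcntl.lockf(f, fcntl.LOCK_UN)

With only one writer at a time, though, you can skip it.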

   -Ric Werme
-- 
  Coming soon - which way is the climate changing?
http://WermeNH.com/climate/science.html
