It sounds as if your pipeline is configured to use minidisk semantics
against SFS.  Use >sfs to be sure of SFS semantics, and add SAFE to
ensure that the file is not trashed on errors.   j.
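
For illustration, the output end of such a pipeline issued from a REXX
exec might look roughly like this.  It is only a sketch: the file name,
filter string and SFS directory are invented, and the exact operand
order for >sfs and SAFE should be checked against the CMS Pipelines
reference for your level.

   /* Hypothetical example: clean the transferred log, then replace   */
   /* the copy in the SFS repository with SFS semantics plus SAFE.    */
   'PIPE (name CLEANLOG)',
      '< RAWLOG DATA A',                      /* log FTPed to holding area */
      '| nlocate /junk marker/',              /* drop the extraneous lines */
      '| >sfs SYSLOG DATA VMSYSU:LOGS. SAFE'  /* rewrite the SFS copy      */
   if rc <> 0 then say 'Pipeline ended with rc' rc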

On 12 August 2010 10:08, Colin Allinson <[email protected]> wrote:
> I had an interesting funny, (which I have now resolved), that I thought I
> would mention just in case anyone else meets it.
>
> I have an SFS repository for logs from non-VM systems. For a number of
> reasons the external system sends a new cumulative daily log at intervals
> (via FTP to a holding area) rather than just the updates to be appended.
> The server that updates the repository processes the log through a
> pipeline to remove extraneous garbage, then writes the result to
> replace the existing file (with a '>' stage).
>
> So far, so good, but we have more logs being received than one server
> can cope with, so we have 5 different servers all updating the SFS
> repository. No problem - they are each dealing with separate logs with
> distinct filenames.
>
> However, occasionally (1-2 times a day) the pipeline writing the file
> would report an error (usually RC=16 or RC=118). I could see that it
> was apparently having difficulty renaming a CMSUT2 file whose filename
> was some hex value. My supposition is that:-
>
> a)      PIPE internally renames the existing file to be replaced in case
> of failure and rollback
> b)      The filename is some function of time.
> c)      Very occasionally 2 servers would be doing exactly the same thing
> at the same time so they would clash.
>
> I resolved this by erasing the output file before running the pipe to
> replace it. If I had been more concerned I could have renamed the output
> file myself and then erased it after the pipe completed successfully.
>
> This will only be an issue where you have multiple servers or users
> updating the same SFS repository, but in that situation it can still
> happen on rare occasions.
>
>
> Colin Allinson
> VM Systems Support
> Amadeus Data Processing GmbH
>
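
For anyone wanting to script the erase-first workaround described above,
a rough REXX sketch follows.  The file name, filemode and filter string
are invented, and the error handling is only illustrative:

   /* Hypothetical: erase the old copy so the pipeline's replace does  */
   /* not collide with another server's CMSUT2 work file, then rewrite. */
   address command
   'ERASE SYSLOG DATA Z'          /* Z = filemode the SFS dir is accessed as */
   if rc <> 0 & rc <> 28 then     /* 28 = file not found, which is fine      */
      say 'ERASE failed with rc' rc
   'PIPE < HOLDING LOG A | nlocate /junk marker/ | > SYSLOG DATA Z'
   if rc <> 0 then say 'Rewrite failed with rc' rc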
