On Thu, Apr 04, 2002 at 07:49:15AM +0200, Rob van der Heij wrote:
> I'm about to program changes to critical files. How does
> one do that in a reliable way?
>
> On CMS one would do it like this:
> - create a new file out of the current one
> - rename current to backup with NOUPDIR option
> - rename new to current
> The NOUPDIR prevents update on disk, so the next rename
> effectively does both in one go. When writing the directory
> to disk CMS itself creates the new one, and does a final
> write to swap between the old and new one. If anytime during
> this process the light would go out, I would still have a
> consistent disk (either with or without the change).
>
> How do you do this with Linux? I suppose I should minimize
> the window by creating a new file and then do the renames.
> But the way dirty pages are written to disk I could end up
> with a disk that has the new directory but not the new file?
> I don't think I can tell Linux to commit the change to disk,
> so should I do a sync before the renames and assume that the
> two renames short after each other will be written out in a
> single I/O operation?
Write into a new temporary file in the same directory, then rename it
onto the final filename. Catch important signals like HUP and TERM and
add cleanup routines to your program. Use distinctive temporary
filenames, so the leftovers can be recognized in case no cleanup was
possible and they stay on the filesystem. With some more effort, create
a "temporary dir" on each possible filesystem and create your new files
in it, then rename the files out of this dir into the final destination.
That way the final rename never crosses a filesystem boundary.

cu,

Florian La Roche
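The write-temp-then-rename recipe above can be sketched as follows. This is a minimal sketch, not from the original mail: the `fsync` calls on the file and on its directory are my addition (they address the "commit the change to disk" worry from the question), and the function name and `.tmp-` prefix are made up for illustration.

```python
import os
import signal
import sys
import tempfile

def atomic_write(path, data):
    """Replace the contents of `path` so that readers (and the on-disk
    state after a crash) see either the old or the new contents, never
    a partial file.  Error handling kept minimal for the sketch."""
    dirname = os.path.dirname(path) or "."
    # Create the temp file in the SAME directory, so the final rename
    # stays within one filesystem and is atomic per POSIX.
    fd, tmp = tempfile.mkstemp(prefix=".tmp-", dir=dirname)

    def cleanup(signum=None, frame=None):
        # Remove the half-written temp file; ignore it if the rename
        # already moved it into place.
        try:
            os.unlink(tmp)
        except OSError:
            pass
        if signum is not None:
            sys.exit(1)

    # Catch HUP/TERM so an interrupted run does not leave debris.
    old_hup = signal.signal(signal.SIGHUP, cleanup) if hasattr(signal, "SIGHUP") else None
    old_term = signal.signal(signal.SIGTERM, cleanup)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # force the new file's data to disk first
        os.rename(tmp, path)       # atomic swap: old name now points at new data
        # fsync the directory so the rename itself reaches the disk, too.
        dfd = os.open(dirname, os.O_RDONLY)
        try:
            os.fsync(dfd)
        finally:
            os.close(dfd)
    except BaseException:
        cleanup()
        raise
    finally:
        if old_hup is not None:
            signal.signal(signal.SIGHUP, old_hup)
        signal.signal(signal.SIGTERM, old_term)
```

The ordering matters: fsync the file before the rename, so the disk never holds a directory entry pointing at unwritten data, then fsync the directory so the rename is durable. On Windows-era Python one would use `os.replace` for the same atomic-overwrite semantics.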
