Matt W. Benjamin wrote:
> Providing a separate space for snapshot identifiers is a good idea. I don't
> intuitively see the motivation for making it a timestamp, however. Every
> snapshot surely will have a creation time, but it might have other attributes
> a client would want to use to identify it. More importantly, even assuming
> continuous, point-in-time snapshots (every time something changes), the
> mapping of timestamps to snapshots at the suggested resolution is very
> sparse. My suggestion for snapshot identifiers would simply be a generation
> number, and I'd hope to see follow-on discussion about how database servers
> (or external databases) should best be used to identify snapshots, using what
> criteria (most recent, earliest before a given time, the one associated with
> a specific tag, or whatever).

The reason for using a timestamp as the snapshot identifier is quite straightforward. Client systems such as Mac OS X Time Machine and Microsoft Windows permit snapshots to be accessed by timestamp. Microsoft, for example, uses "<filename>[:<stream>][:<timestamp>]" as the input to CreateFile(). If a <timestamp> is provided, it indicates that the version of the file known to the file server prior to the specified timestamp is the one to be used. If the timestamp is the snapshot identifier, then lookups become much easier.
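To make the lookup argument concrete, here is a minimal sketch in C of how a server might resolve such a request, assuming snapshot identifiers are 64-bit counts of 100-nanosecond units (FILETIME-style) kept in a sorted array. The type snapid_t and the function snap_lookup_prior are illustrative names for this sketch only, not part of any existing AFS interface.

#include <stddef.h>
#include <stdint.h>

typedef uint64_t snapid_t;   /* 100-ns units since an agreed epoch (assumed) */

/*
 * Return the index of the newest snapshot strictly prior to 'when',
 * or -1 if no snapshot predates it.  'snaps' is sorted ascending.
 */
static ptrdiff_t
snap_lookup_prior(const snapid_t *snaps, size_t nsnaps, snapid_t when)
{
    ptrdiff_t lo = 0, hi = (ptrdiff_t)nsnaps - 1, best = -1;

    while (lo <= hi) {
        ptrdiff_t mid = lo + (hi - lo) / 2;
        if (snaps[mid] < when) {
            best = mid;       /* newest-so-far snapshot prior to 'when' */
            lo = mid + 1;
        } else {
            hi = mid - 1;
        }
    }
    return best;
}

Because the client-supplied <timestamp> and the snapshot identifier are the same quantity, the comparison is direct; with opaque generation numbers the server would first have to consult a separate timestamp-to-snapshot mapping.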
As for the resolution, Windows systems use a FILETIME with 100-nanosecond resolution. File creation on current Windows platforms has a 1ms resolution, and it is getting finer as hardware gets faster. The worst case scenario for an automated snapshot system is that every change produces a new snapshot. As a result, timestamps, and therefore snapshot identifiers (if you adopt this model), require a very fine-grained resolution.

Jason asked:
> Would the proposal allow for clones of the same snapshot to reside on
> multiple servers?

In this context a .readonly volume is a single instance of a snapshot. I would like to see "snapshots" provide for an arbitrary number of readonly versions of a volume that can of course be replicated. In fact, this is how I would implement read/write replication. Each change to the master copy of the volume produces a new snapshot which is then lazily replicated to the other file servers (a rough sketch follows below).

Jeffrey Altman
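A rough sketch of the lazy replication model described above, under the same assumption that snapshot identifiers are 100-ns timestamps. struct replica, master_take_snapshot() and replica_ship_snapshot() are hypothetical names for this sketch, not an existing AFS3 interface.

#include <stdint.h>

typedef uint64_t snapid_t;              /* 100-ns units, as above (assumed) */

struct replica {
    snapid_t current;                   /* newest snapshot this server holds */
    /* ... transport handle, volume id, etc. ... */
};

/* Assumed helpers, provided elsewhere in this sketch. */
extern snapid_t master_take_snapshot(void);
extern int      replica_ship_snapshot(struct replica *r, snapid_t id);

/* Bring each lagging replica forward to the master's newest snapshot. */
static void
replicate_lazily(struct replica *replicas, int n, snapid_t newest)
{
    int i;
    for (i = 0; i < n; i++) {
        if (replicas[i].current < newest &&
            replica_ship_snapshot(&replicas[i], newest) == 0)
            replicas[i].current = newest;   /* replica now matches the master */
    }
}

/* Each change to the read/write master produces a new snapshot ... */
static void
master_changed(struct replica *replicas, int n)
{
    snapid_t newest = master_take_snapshot();

    /* ... which is then propagated; in practice this step would be
     * deferred or queued rather than run inline, hence "lazily". */
    replicate_lazily(replicas, n, newest);
}

A .readonly volume then falls out naturally as a replica pinned at one particular snapshot identifier.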
