On Tue, 15 Jan 2013 08:17:50 -0800 (PST)
Russell <im.russell.smi...@gmail.com> wrote:
> > That way your repository always keeps "normalized" blobs.
> I agree that this is exactly what I would do to mimic RCS behaviour.
> But I am deliberately trying not to for the reason described below:
> Unfortunately using this method seems to create a certain ambiguity
> when creating a formal release (perhaps my understanding is faulty,
> so I would appreciate clarification if so). Suppose that I do all my
> final commits and am now ready for release. For release suppose that
> I do a fresh git clone. Using the RCS method every file will now be
> smudged to have the same "last modified" date equal to the date I
> cloned. This is technically not correct, although perhaps it is close.
> Alternatively I could not do a fresh clone for release and use my
> working directory so that files not changed for some time might not
> be freshened by a pull and thus will have an older "last modified" date.
> The problem now is that if I do the release this time, and the
> next time my colleague does a release from her working directory, the
> files will likely have completely different "last modified" dates
> depending on what sets of files have been changed between us and when
> the original clones of the working directories were done. In my
> opinion this is not viable since there is now no consistency at all
> in the "last modified" dates between releases.
> Perhaps this RCS method is "close enough" for most purposes, but I am
> trying to get a last modified date that is somewhat more realistic
> and probably more defensible in the eyes of the law if it were ever
> challenged.
Okay, so it seems what you really want is to have that "last modified"
date to be actually tied to the creation time of the commit object
which is checked out, not the "current" time at which the
smudge filter is run. If so, then update your smudge filter to call
`git rev-parse HEAD` and then parse the output of
`git cat-file commit <that_rev>` to extract the commit timestamp from
it; use this timestamp to replace the placeholder values. This would
guarantee that anyone who checks out revision XYZ would get the same
actual "last modified" date in the checked out files. One caveat of
this approach: if someone checks that revision XYZ out, then modifies
one of the files containing the placeholder and then records a new
commit, then they may also need to `git reset --hard` before
cutting a release out of their checkout to "refresh" the actual
timestamps in the files according to the updated HEAD.
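Something along these lines, untested; the `$LastModified$` placeholder
name and the GNU `date` invocation are just examples:

```shell
#!/bin/sh
# Sketch of a smudge filter that stamps files with the commit time of
# HEAD rather than the checkout time.  Reads file content on stdin and
# writes the filtered content to stdout.
smudge_lastmodified() {
    rev=$(git rev-parse HEAD)
    # Extract the committer epoch timestamp from the raw commit object;
    # quit at the first blank line so the commit message is never scanned.
    ts=$(git cat-file commit "$rev" |
        sed -n -e '/^$/q' -e 's/^committer .* \([0-9][0-9]*\) [-+][0-9][0-9]*$/\1/p')
    # Format as UTC; "date -d @..." is GNU date (BSD date wants -r instead).
    stamp=$(date -u -d "@$ts" '+%Y-%m-%d %H:%M:%S')
    # Replace every $LastModified$ placeholder with the commit timestamp.
    sed "s/\\\$LastModified\\\$/$stamp/g"
}
```

You would then wire it up the usual way: an entry in `.gitattributes`
(say, `*.c filter=lastmodified`) plus
`git config filter.lastmodified.smudge` pointing at the script.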
In any case, I'm starting to think your approach is flawed. What if you
quit updating those placeholders using filters and instead try to
somehow formally automate the releasing process itself? I mean, when a
new release is due, someone runs a script which does `git archive` on a
blessed "release" tag and then runs through the exported tree replacing
all those placeholders in the files with the timestamp obtained from
that tag's underlying commit object?
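Roughly like this, untested; again, the `$LastModified$` placeholder and
the tag/destination names are just examples:

```shell
#!/bin/sh
# Sketch of a release script: export a blessed release tag with
# `git archive`, then stamp every placeholder with the commit date of
# the commit that tag points at -- the same date for whoever runs it.
stamp_release() {
    tag=$1
    dest=$2
    # Commit date (YYYY-MM-DD) of the tagged commit.
    stamp=$(git log -1 --format=%cd --date=short "$tag")
    mkdir -p "$dest"
    # Export the tagged tree (no .git directory, no working-tree noise).
    git archive "$tag" | tar -x -f - -C "$dest"
    # Rewrite the placeholders in the exported copies only.
    grep -rlF '$LastModified$' "$dest" | while read -r f; do
        sed "s/\\\$LastModified\\\$/$stamp/g" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    done
}
```

That way the working directories never carry expanded timestamps at
all, and two people releasing from the same tag get identical files.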