On Fri, 12 Feb 2016 10:32:59 -0800 (PST)
Sarvi Shanmugham <sarvil...@gmail.com> wrote:
> I am looking to implement a workflow that involves
> 1. Developers committing a sequence of change sets, into a staging
> 2. Where these change sets go through some sanity testing at
> periodic points, say every 10 commits
> 3. If there is a failure, we would like to implement automatic
> bisection, building and testing to find the bad patch that is causing
> the tests to fail.
> 4. We would like to now be able to completely eject/remove the
> commit/patch from the staging git repository, as if it never went in,
> as well as any other commits that might be related to it that came in
> after that, and NOT just attempt to reverse it by committing a
> reversal patch at the end.
Well, while I was writing my original reply I sensed a whiff of
over-engineering ;-)
After sending the reply, it dawned on me why I felt that way.
It looks like you're trying to wedge "integration" (staging, checking,
weeding out bad commits) into "sharing" (the repo everyone pushes to
and fetches from). This might be a bad idea--one usually instilled by
a centralized VCS, where you have only a single repo. There is a
well-known message in which Linus Torvalds explains to the KDE folks
how they could go about converting their centralized workflow into a
semi-hierarchical *set* of distinct Git repos. I feel it might strike
a chord with you: you might have better luck with a dedicated
staging/integration repo which occasionally pushes "known-good"
history to some other--"sharing"--repo, which the devs use to fetch
work done by others.
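A runnable toy sketch of that split, using local paths in place of real servers (all names are made up, and the `test -f` check stands in for your sanity tests):

```shell
#!/bin/sh
set -e
rm -rf /tmp/two-repo-demo
mkdir /tmp/two-repo-demo
cd /tmp/two-repo-demo

# The two roles, as two separate repos:
git init -q --bare -b master staging.git   # everyone pushes here
git init -q --bare -b master sharing.git   # only vetted history lands here

# A developer commits locally and pushes to staging only.
git init -q -b master dev
cd dev
git config user.email dev@example.com
git config user.name Dev
echo "feature" > feature.txt
git add feature.txt
git commit -qm "add feature"
git push -q ../staging.git master
cd ..

# The integration job (cron, CI, whatever) tests staging's tip and
# promotes the history to sharing only when the tests pass.
git clone -q staging.git integration
cd integration
if test -f feature.txt; then               # stand-in for the real sanity tests
    git push -q ../sharing.git master:master
fi
```

Developers then base new work on the vetted history in sharing.git, while staging.git is free to be rewritten when a bad commit has to be ejected.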
You received this message because you are subscribed to the Google Groups "Git
for human beings" group.