From: "Dale R. Worley" <wor...@alum.mit.edu>
Sent: Tuesday, January 29, 2013 7:40 PM
From: ryez <rye...@gmail.com>

1. we use the master branch to reflect what we have in production
2. for every patch, a separate patch branch is created from master
3. when a patch is ready to release/deploy, we first check if the patch branch can merge to master without conflict; if yes, the patch is then:

You need something [anything?] here that allows even the simplest of
pseudo-'production'-level testing, maybe on a mirror system or any other
sandbox you can cobble together that a lead clique [half a dozen like-minded
colleagues may be enough] can use to try out the 'improved' method.

Inevitably the first attempt will in some sense 'fail', but as long as
you have a few points you can claim as successes [plan these so they are
guaranteed!] you will be able to tweak and improve the setup so you can
drag more developers into the great new method [again, make sure they
think it makes life easier/better for them], and so it begins...

MERGED to master, and DEPLOYED to production at the same time
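
In git commands, that workflow is roughly the following (the branch name and deploy step are invented for illustration):

    # 2. one branch per patch, created from master
    git checkout -b patch-foo master
    # ... commit the fix on patch-foo ...

    # 3. when ready to release: check that it merges into master cleanly
    git checkout master
    git merge --no-commit --no-ff patch-foo
    git merge --abort        # undo the trial merge; it was only a conflict check

    # if the trial merge was clean: merge for real and deploy
    git merge patch-foo
    # ... deploy master to production ...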

The problems we have are:
1. integration happens only upon releasing/deployment
2. developers tend to directly deploy patches to production with very little testing
3. there's no clear overview picture of what's going on, since everyone works on his/her own, and versioning doesn't apply to our product.

My questions are:
What's your view/opinion of this?

It has the advantages and disadvantages that you describe.

How to improve the manageability and testability,

You can't improve testability unless you intend to *test*.  And your
organization is unwilling to require testing.  You could add a testing
step to your process, but it would take more effort to produce each
patch.

In one sense it is no more effort to produce the patch. If it is
successful, life is good. If it is a failure it will need a fresh patch
anyway, and there will be fewer production failures.

However, to satisfy 'management' you probably will need a good historical record of the previous patch failure rate [start monitoring now], and what the failures cost in production, or the cost of fixing them.

Then perhaps claim a 'fail/fix' for any patch that didn't pass the new quick-test step, just so you can show the improvement and savings; if you don't, it will just look like it took a long time to create the patch, with the downsides Dale indicated.

Until management can see the benefit of testing you will need to keep the process improvements 'organic'. Once some senior manager can see a way to glory, then you'll get an inappropriate test platform anyway...

Ideally, you would have an automated test system with a test suite
that you have verified has a high level of test coverage.  For any
single patch, that provides a high ratio of testing for the effort
expended on testing.  But it takes quite a bit of work to get the test
system and suite working well.
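
As a rough sketch only (the script name, branch name, and test command are all invented), the testing step could start as a small wrapper that refuses to merge a patch whose suite fails:

    #!/bin/sh
    # merge-patch.sh <patch-branch> -- hypothetical wrapper:
    # run the test suite on the patch branch and only merge it
    # into master if the suite passes.
    set -e
    branch="$1"
    git checkout "$branch"
    ./run-tests.sh             # stand-in for your real test suite
    git checkout master
    git merge "$branch"

Because of "set -e", a failing test suite aborts the script before the merge is reached.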

without compromising too much of the ability to quickly fix production
issues?

One thing that would speed things up would be, before a worker
attempts to merge the patch to the master, to rebase the patch branch
to the head of the master.  Assuming that nobody merges another patch
first, the worker can verify that the merge causes no conflict, and
then merge the patch branch to the master with a "fast-forward" merge
(which is guaranteed to have no conflict).
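
In commands, that is roughly (branch name invented):

    git checkout patch-foo
    git rebase master                # replay the patch on top of the current master
    # run whatever testing you have here, on the rebased result
    git checkout master
    git merge --ff-only patch-foo    # fast-forward only, so no conflict is possible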

A good policy, which implicitly gives sha1 versioning and the option of tagging for human-readable naming. And easy rewinding or reverting.
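
Concretely, that might look like the following (tag names are invented; angle brackets are placeholders):

    # human-readable name for what was just deployed;
    # the sha1 itself remains the precise version identifier
    git tag -a deploy-2013-01-29 -m "patch-foo released"

    # backing a bad patch out later, without rewriting history
    git revert <sha1-of-the-bad-commit>

    # or rewinding master to the previous deploy (only if nothing
    # else has been merged on top of it)
    git reset --hard <previous-deploy-tag>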


Dale

--
Philip