On 10/09/2012 03:02 PM, Hugh Brock wrote:
> On Tue, Oct 09, 2012 at 09:53:17AM -0400, Mike Orazi wrote:
>> On 10/05/2012 10:15 AM, Hugh Brock wrote:
>>> On Fri, Oct 05, 2012 at 09:04:13AM -0500, Steve Linabery wrote:
>>>> Hi aeolus community,
>>>>
>>>> The current upstream development sprint officially ends on Thursday,
>>>> 11-Oct-2012.
>>>>
>>>> In an ideal world all patches would be on list by COB on Monday. All patches
>>>> should be reviewed and pushed by Tuesday. This gives us a little breathing room
>>>> for testing before the demo.
>>>>
>>>> Thank you in advance for your cooperation!
>>>>
>>>> Steve|eggs
>>> Hmmm... Have we forgotten about the week we said we were going to give
>>> QE to test the upstream code before the demos? I thought that was a
>>> permanent part of the process now.
>>>
>>> --Hugh
>> I have to apologize here. I asked Steve to coordinate and send this
>> notification out so we could close the door on the present sprint
>> and start again.
>>
>> While we have been striving to have QE be more involved in each
>> upstream sprint, priorities dictated that most of the usual
>> suspects spend their time in other areas this sprint. As those
>> commitments appear to be wrapping up, I think we'll see a lot more
>> involvement in the next several sprints.
>>
>> Overall, though, I think this really does speak to the idea that the
>> quality of the project is a shared responsibility, and it is really
>> important that everyone takes a little of the burden on themselves
>> to make sure we maintain working, testable, tested code.
> OK. No problem this time, but: I think it is really, really important
> that in future sprints we maintain the idea of having a freeze well
> before the end of the sprint (5-7 days) to allow for testing by whoever
> is charged with testing. I think this is a key part of having working
> software at the end of the sprint... and if we don't have working
> software at the end of the sprint, then we are doing something wrong.
Having a whole week dedicated to testing at the end of every sprint
doesn't sound very Agile. Can we work with QA throughout the sprint and
continuously test?
How about automating as much of the QA process as possible? That way we
could use something like Travis CI and catch any issues early in the
sprint.
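
To make that concrete, here is a minimal sketch of what a Travis config
for one of our repos might look like, assuming a Ruby project whose tests
run through rake (the interpreter version and rake task below are
placeholders, not our actual setup):

  # .travis.yml -- hypothetical starting point, not committed anywhere yet
  language: ruby
  rvm:
    - 1.9.3                       # assumed interpreter; match what we target
  script: bundle exec rake spec   # assumed task; swap in the real test entry point

Travis would run that on every push (it hooks into GitHub, so this assumes
we keep the GitHub mirrors current), surfacing breakage as patches land
rather than at an end-of-sprint freeze.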
> --Hugh