Other things to add:

1. Managing our website is a big pain in the butt.  Why do we need to publish 
PDFs again?  We really need to get on the new CMS.
2. Copying/moving the artifacts to the release area could be automated, too.

At the end of the day, #1 below is what strikes me as the biggest impediment to 
releases.
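As a rough illustration of item 2 above, the copy step could be a small script like the following. This is only a sketch: the source and destination directories are parameters, and the real release area (and however we reach it) is left as an assumption rather than spelled out.

```shell
#!/bin/sh
# Hypothetical sketch of automating the "copy artifacts to the release area"
# step.  In real use dest would point at the actual release directory
# (possibly reached over scp/rsync) -- that location is an assumption and is
# left as a parameter here.
stage_release() {
  src="$1"; dest="$2"
  mkdir -p "$dest"
  for f in "$src"/*; do
    [ -f "$f" ] || continue
    cp "$f" "$dest/" || return 1
    # Verify each copy byte-for-byte before declaring success.
    cmp -s "$f" "$dest/$(basename "$f")" || { echo "copy mismatch: $f"; return 1; }
  done
  echo "staged artifacts to $dest"
}
```

The verification loop is cheap insurance: a truncated copy is exactly the kind of mistake a manual process lets slip through.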

> 
> 
> -----Original Message-----
> From: ext Grant Ingersoll [mailto:[email protected]] 
> Sent: Wednesday, March 30, 2011 8:22 AM
> To: [email protected]
> Subject: Brainstorming on Improving the Release Process
> 
> (Long post, please bear with me and please read!)
> 
> Now that we have the release done (I'm working through the publication 
> process now), I want to start the process of thinking about how we can 
> improve the release process.  As I see it, building the artifacts and 
> checking the legal items are now almost completely automated and testable at 
> earlier stages in the game. 
> 
> We have kept saying we want to release more often, but we have never defined 
> actionable steps with which we can get there.  Goals without actionable steps 
> are useless.
> 
> So, with that in mind, I'd like to brainstorm on how we can improve things a 
> bit more.  Several of us acted as RM this time around, so I think we have some 
> common, shared knowledge to take advantage of this time, unlike in the past, 
> when one person mostly just did the release in the background and then we all 
> voted.
> 
> So, let's start with what we have right:
> 
> 1. The Ant process for building a release candidate for both Lucene and Solr 
> is almost identical now and fairly straightforward.
> 2. I think the feature freeze is a good thing, although it is perhaps a bit 
> too long.
> 3. Pretty good documentation on the steps involved to branch, etc.
> 4. The new license validation stuff is a start for enforcing licensing up 
> front more effectively.  What else can we validate up front in terms of 
> packaging?
> 5. We have an awesome test infrastructure now.  I think it is safe to say 
> that this version of Lucene is easily the most tested version we have ever 
> shipped.
> 
> Things I see that can be improved, and these are only suggestions:
> 
> 1.  We need to define the Minimum Effective Dose (MED - 
> http://gizmodo.com/#!5709902/4+hour-body-the-principle-of-the-minimum-effective-dose)
>  for producing a quality release.  Nothing more, nothing less.  I think one 
> of our biggest problems is we don't know when we are done.  It's this 
> loosey-goosey "we all agree" notion, but that's silly.  It's software, we 
> should be able to test almost all of the artifacts for certain attributes and 
> then release when they pass.  If we get something wrong, put in a test for it 
> in the next release.  The old saying about perfect being the enemy of good 
> applies here.
> 
> In other words, we don't have well-defined things that we are all looking for 
> when vetting a release candidate, other than what the ASF requires.  Look at 
> the last few vote threads, or any of the previous ones.  It's obvious 
> that we have a large variety of people doing a large variety of things when 
> it comes to testing the candidates.  For instance, I do the following:
>  a. check sigs, MD5 hashes, etc.
>  b. run the demos
>  c. run the Solr example and index some content
>  d. check over the LICENSE, NOTICE, and CHANGES files
>  e. check that the overall packaging is reasonable
>  f. run them through my training code
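Steps like (a) are mechanical enough to script. Below is a minimal sketch of the checksum half of that check, assuming the convention of a sidecar .md5 file next to each artifact whose first field is the hex digest; the gpg signature check would sit alongside it and is left out here.

```shell
#!/bin/sh
# Hypothetical helper for step (a): verify the .md5 sidecar files next to the
# release artifacts.  Assumes each foo.tar.gz ships with a foo.tar.gz.md5
# whose first field is the hex digest (an assumption about our layout).
check_md5() {
  dir="$1"; status=0
  for sidecar in "$dir"/*.md5; do
    [ -f "$sidecar" ] || continue
    artifact="${sidecar%.md5}"
    expected=$(awk '{print $1; exit}' "$sidecar")
    actual=$(md5sum "$artifact" | awk '{print $1}')
    if [ "$expected" = "$actual" ]; then
      echo "OK  $(basename "$artifact")"
    else
      echo "BAD $(basename "$artifact")"
      status=1
    fi
  done
  return $status
}
```

A script like this is exactly the kind of thing that could be donated and run by every voter instead of each of us reinventing it.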
> 
> Others clearly do many other things.  Many of you have your own benchmark 
> tests you run; others read over every last bit of documentation; others still 
> put the RC into their own application and test it.  All of this is good, but 
> the problem is that it is not _shared_ until the actual RC is up, and it is not 
> repeatable (not that all of it can be).  If you have benchmark code/tests 
> that you run on an RC that doesn't involve proprietary code, why isn't it 
> donated to the project so that we can all use it?  That way we don't have to 
> wait until your -1 at the 11th hour to realize the RC is not good.  I 
> personally don't care whether it's Python or Perl or whatever.  Something 
> that works is better than nothing.  For instance, right now some of the 
> committers have an Apache Extras project going for benchmarking.  Can we get 
> this running on ASF resources on a regular basis?  If it's a computing 
> resource issue, let's go to Infrastructure and ask for resources.  
> Infrastructure has repeatedly said that if a project needs resources, it 
> should put together a proposal for what it wants.  I bet we could get the 
> budget to spin up 
> an EC2 instance once a week, run those long running tests (Test2B and other 
> benchmarks) and then report back.  All of that can be automated.
> 
> Also, please think hard about whether the things you test can be automated 
> and built into our test suite, or at least run nightly on Jenkins, and then 
> donate them.  I know reading documentation can't be, but what else?  
> For instance, could we auto-generate the file formats documentation?
> 
> 2. We should be running and testing the release packaging process more 
> regularly.
> 
> 3.  I had an epiphany this release, and it came via Hoss on a non-release-related 
> issue where, likely unbeknownst to him, he called me out for not being focused 
> on the release 
> (https://issues.apache.org/jira/browse/SOLR-2366?focusedCommentId=12996154&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996154). 
> I have often been a passive observer when it comes to releasing, other than 
> testing the RC (either that, or I am the RM, it seems).  It's easy to sit and 
> complain about there not being more frequent releases, but what are you doing 
> about it?  Why are we working on trunk when we are trying to get a release 
> out?  Obviously you are all free to do as you wish, but then don't complain 
> when there aren't more frequent releases.  Yes, I am calling you out just 
> like Hoss called me out.  If you aren't sure how you can help, ask.  Assuming 
> we can better document #1 above, helping should be a lot easier in the future.
> 
> Now, obviously, not everyone can be involved in the actual building of the 
> RC, but that doesn't mean you can't go build it yourself and start testing on 
> it before the Vote comes out.  And it also doesn't mean you can't take a hard 
> look at the process and see if there aren't better ways to do it.  I also 
> think if we all focus more and coordinate our testing by defining test plans 
> up front, we could get releases out in far less time.
> 
> 4. Are those last minute changes really blockers?  We are all perfectionists, 
> I know.  Our guilt gets to us if we happen to know there is something 
> "broken" and we don't fix it.  I think we should add a "Known Issues" section 
> to CHANGES.txt and we should be a bit tougher about keeping out last minute 
> changes.   I was also Guilty as Charged in this case on the highlighter issue 
> that Mark and I were working on.  Thankfully, Robert set me right.  I think 
> that if it is only likely to affect a very small percentage of users AND 
> there is a workaround, then we should simply document it and fix it on trunk 
> and in the branch but not in the release.   If there are too many of these, 
> then we are not ready to release.
> 
> 5.  Most of the steps in http://wiki.apache.org/lucene-java/ReleaseTodo under 
> the Branching & Feature Freeze section can be automated.  Same with the 
> Publication section.
> 
> So, what are some possible actionable steps we can take?
> 
> 1. To address #1, we can publish a test plan just like you would do in your 
> day job.  See what OpenNLP does: 
> https://cwiki.apache.org/confluence/display/OPENNLP/TestPlan1.5.1.  It starts 
> with everyone addressing #1 above on our Wiki.  See 
> http://wiki.apache.org/lucene-java/TestPlans and 
> http://wiki.apache.org/solr/TestPlans for a starting spot.
> 
> 2. To address #2, we can set up a Jenkins job that runs the packaging process 
> regularly and somehow figure out how to reliably test it.
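Much of that Jenkins job could be plumbing: run the Ant packaging targets, then a smoke check that the expected artifacts actually came out. Here is a sketch of the smoke-check step; the Ant invocation that would precede it (something like "ant clean dist") is an assumption, not our actual target name.

```shell
#!/bin/sh
# Hypothetical smoke check for a nightly packaging job.  A prior build step
# (the exact Ant target is an assumption) is expected to have left the
# release tarballs in the given directory.
smoke_check() {
  dir="$1"
  set -- "$dir"/*.tar.gz
  [ -e "$1" ] || { echo "FAIL: no tarballs in $dir"; return 1; }
  for t in "$@"; do
    # An unreadable or truncated archive should fail the build loudly.
    tar tzf "$t" >/dev/null 2>&1 || { echo "FAIL: unreadable $t"; return 1; }
  done
  echo "packaging smoke check passed ($# tarball(s))"
}
```

Even a check this shallow would catch "the packaging target silently broke three weeks ago" long before an RC vote.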
> 
> 3. Shrugs on #3.  No one can make you participate, but we can make you feel 
> guilty just like Hoss made me feel guilty.  For PMC members specifically, 
> however, _perhaps_ if you don't vote for two or three releases in a row, then 
> you shouldn't be on the PMC.  Your job on the PMC primarily consists of two 
> things: vote on committers and vote on releases.  I can see giving a pass once, 
> given that life happens.  Two or three in a row means you likely consider yourself 
> emeritus or you just like having the title but not having any of the 
> responsibility.  At a minimum, send in a +0 and let us know you have a pulse.
> 
> 4. For #4, this is a collective commitment to make the release happen as 
> quickly as possible while still being correct for most people in most 
> situations.  Focus on the goal.
> 
> 5. For #5, we just need some people with some Ant skills to make it happen.
> 
> 6. If we want to release every 6 months, then let's set up some sort of 
> reminder system that nags us as we get closer or something.  Let's also be 
> honest with ourselves about what features are in the release and what are 
> not.  We don't have to jam everything in all the time.
> 
> OK, enough of my ideas.  And I do stress that they are only 
> ideas/observations.  What do others think?  Several of you have lived the RM 
> pain now, let's take a good hard look at what can be done better so that we 
> can release more often.
> 
> -Grant
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
> 

--------------------------
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem docs using Solr/Lucene:
http://www.lucidimagination.com/search

