Let me clarify point 5 of my comments.
> 5: We need a stable objective quality assessment mechanism from which
> we can observe trends.

In MobileAware we have an ongoing platform evaluation cycle. We record
the hardware upon which the tests are performed and then we run a
series of stress tests to measure efficiency. The latter is done with
debugging off to measure typical deployed behaviour. We then compare
the results over time (weeks, months) against similar deployment
platforms and scenarios. Then come the spreadsheets and charts.
Sometimes you discover that an added feature/bugfix causes a change in
performance (good or bad) beyond what you expected, and this
information is very useful to help you track down errors (or to
discover a better way of doing things). If Beehive could establish a
reference deployment platform (or a set of them) and measure
performance over time, we might start to learn something.
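A trend check of this sort can be sketched in a few lines. This is purely illustrative (the threshold, data, and function names are invented for the example, not MobileAware's actual tooling):

```python
# Illustrative sketch: compare the newest benchmark run against the
# historical baseline for the same platform/scenario, and flag runs
# whose mean deviates by more than a chosen threshold.
from statistics import mean

def flag_regressions(history, latest, threshold=0.10):
    """history: mean timings (seconds) from earlier runs;
    latest: mean timing of the newest run.
    Returns (changed, ratio): ratio > 0 means slower, < 0 means faster."""
    baseline = mean(history)
    ratio = (latest - baseline) / baseline
    return abs(ratio) > threshold, ratio

# Example: weekly stress-test means on one reference platform.
weekly_means = [2.04, 1.98, 2.01, 2.05]
changed, ratio = flag_regressions(weekly_means, 2.45)
```

A hit here would point you at the check-ins made between the two runs, which is exactly the "track down errors (or discover a better way)" step described above.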

I've also considered doing a simple code analysis. In fact, I started
doing this as part of the recent BEA TechTalk roadshow in which I was
guest presenter. Just some simple stuff, like "how many source files",
"how many loops/conditions in the code" etc. Right now I'm not sure how
much data I can gather or how. For now, I'm just putting together a few
Perl scripts to analyse the code. Later, however, I hope to go back
over the data and see if I can discover some correlations. For example,
I suspect there may be a quality-related correlation between lines per
method, comments per method, and some other factors. There might also be
a relationship between recorded bugs and factors such as average
nesting depth, number of variables, number of polymorphic methods per
class, etc.
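As a rough illustration of the kind of counting involved (sketched in Python rather than Perl; the regex counting is deliberately naive and will miscount keywords inside strings, so a real analysis would use a parser):

```python
# Naive source-metric sketch: count Java files, loop/condition
# keywords, and comment lines under a source tree. Approximate by
# design; intended only to show the shape of the data gathering.
import re
from pathlib import Path

LOOP_COND = re.compile(r'\b(for|while|if|switch|case)\b')
COMMENT = re.compile(r'^\s*(//|\*|/\*)')

def measure(root):
    files = list(Path(root).rglob('*.java'))
    stats = {'files': len(files), 'loops_conditions': 0, 'comment_lines': 0}
    for f in files:
        for line in f.read_text(errors='ignore').splitlines():
            stats['loops_conditions'] += len(LOOP_COND.findall(line))
            if COMMENT.match(line):
                stats['comment_lines'] += 1
    return stats
```

Run over snapshots of the tree at different dates, counts like these could then be lined up against the bug records to look for the correlations mentioned above.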

I have two weeks of W3C-related work to resolve first, but after that I
hope to wiki the code analysis. If anyone in the Community has some
thoughts on this, I'd welcome some debate.


---Rotan


-----Original Message-----
From: Heather Stephens [mailto:[EMAIL PROTECTED]
Sent: 06 October 2004 00:23
To: Beehive Developers
Subject: RE: [proposal] Beehive release strategy


Thanks for the input Rotan.

My thoughts on some of your items below.

Are Rotan and I really the only ones with opinions?  It would be great
to hear others' input too.  *hint, hint*

>> 3:  If I have version X.y.z, will there be an easy way for
>> me to determine the feature set?

Yes.  Good point.  I think we have to have that.  I'd like to write up a
document for our feature planning process that would cover how somebody
could put a feature on the docket, how we target features to specific
releases, and how we indicate to each other that a given feature is
complete and ready to be used.  Obviously using Jira for much of this
would really help.  :)

>> 4:  'When appropriate, cut a "fix pack"...' needs clarification.
>> Will there be a set of unambiguous criteria against which one can
>> ascertain whether or not the time is 'appropriate' to cut?
>> 5: We need a stable objective quality assessment mechanism from which
>> we can observe trends.

Agreed.  I was struggling with how to put some language around that.
Essentially IMO we don't want to cut a fix pack for every patch, i.e.,
people who want more immediate service can take a daily build from that
branch after the fix is checked in.  But I assume we also wouldn't want
to wait too long because we could then end up in a situation where we're
supporting people on many different builds rather than on *real*
releases.

I also felt like each release would take on a life of its own and have
different needs with regard to support.  For example our V1 may have more
fix packs than a V3.  Perhaps we could put in a trigger to assess
whether or not we need a "fix pack" every month?  Or perhaps when a
given branch has 10 P1 fixes queued up?  Your thoughts?

For the short term I'm assuming tests must pass in order to check in and
thus the quality would remain at essentially the same level as the
release.  As we add more tests to the suites, this obviously won't
scale.  I'll add a note for the near-term situation and am interested in
how others see our test processes working as we grow.

-----Original Message-----
From: Rotan Hanrahan [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 21, 2004 5:21 PM
To: [email protected]
Subject: FW: [proposal] Beehive release strategy

I'm sure the Dev team would be happy to contribute additional comments
on the proposed Beehive release strategy, so as requested I'm sending
"this" to "there".
---Rotan

        -----Original Message----- 
        From: Heather Stephens [mailto:[EMAIL PROTECTED] 
        Sent: Tue 21/09/2004 22:16 
        To: Rotan Hanrahan; [EMAIL PROTECTED] 
        Cc: 
        Subject: RE: [proposal] Beehive release strategy
        
        

        Hey Rotan-
        
        This is all great feedback and seems like good discussion items
        and questions for the team.  It doesn't seem like there is
        anything particularly private or sensitive in this email, so I
        think it would be great if we could have this discussion in a
        more public forum on beehive-dev.  Would you mind sending this
        there to open it up to the larger community?
        
        H.
        
        -----Original Message-----
        From: Rotan Hanrahan [mailto:[EMAIL PROTECTED]
        Sent: Friday, September 17, 2004 9:52 AM
        To: [EMAIL PROTECTED]
        Subject: RE: [proposal] Beehive release strategy
        
        Quick feedback:
        
        0: Looks good.
        
        1: Is there any way to validate that a successful unit test
        sequence was executed?
        
        In effect, I'm wondering if there's a way to prevent check-in
        *unless* the tests have passed.
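[For illustration, such a gate might look like the sketch below. The suite command and the SCM wiring are placeholders; the actual hook mechanism would depend on the SCM in use (e.g. a CVS commitinfo script or an Ant target invoked before commit).]

```shell
#!/bin/sh
# Hypothetical check-in gate (illustrative only): run the test suite
# and block the check-in when it fails. TEST_CMD is a placeholder;
# substitute the real suite command, e.g. "ant test".
suite="${TEST_CMD:-true}"
if $suite; then
  echo "tests passed: check-in allowed"
  gate=0
else
  echo "tests failed: check-in blocked" >&2
  gate=1
fi
# A real hook would now 'exit $gate' so the SCM rejects the
# check-in whenever gate is nonzero.
```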
        
        2: Is there a code tidy process?
        
        This is a sweeper task that one or more people do. Look at code
        and tidy the comments or layout according to style rules we
        agree in advance. Ambiguous comments get referred to the author
        for clarification. This might sound like a minor task, but if we
        have a large community and not all are native speakers of the
        comment language (i.e. English), then someone has to make sure
        it is clear and makes sense. Preferably good coders with good
        communication skills. It also provides an avenue for
        contributions that may not be code mods, but would still be very
        useful to those who do the actual coding.
        
        3: If I have version X.y.z, will there be an easy way for me to
        determine the feature set?
        
        4: 'When appropriate, cut a "fix pack"...' needs clarification.
        
        Will there be a set of unambiguous criteria against which one
        can ascertain whether or not the time is 'appropriate' to cut?
        
        5: We need a stable objective quality assessment mechanism from
        which we can observe trends.
        
        For example, we could agree on a hardware and OS reference
        environment, and then run an agreed set of tests on this
        platform, measuring key statistics as we go. Over time we will
        obtain some objective performance quality trends. We might then
        be able to tie a feature/bug introduction to a change in
        performance (+/-), which in turn would suggest an inspection of
        that code (to fix or to learn).
        
        
        Regards,
        ---Rotan.
        
        
        -----Original Message-----
        From: Heather Stephens [mailto:[EMAIL PROTECTED]
        Sent: 14 September 2004 00:22
        To: [EMAIL PROTECTED]
        Subject: FW: [proposal] Beehive release strategy
        
        
        FYI.  Feedback appreciated.
        
        -----Original Message-----
        From: Heather Stephens
        Sent: Monday, September 13, 2004 4:20 PM
        To: Beehive Developers
        Subject: [proposal] Beehive release strategy
        
        Hi all-
        
        I've been putting some thought into a release strategy we might
        use for Beehive:  http://wiki.apache.org/beehive/Release_20Process
        
        Please take some time to review and assess it as the Beehive
        general release model.  If you would raise any concerns or
        suggest revisions/refinements on this alias for further
        discussion, that would be fabulous.
        
        Timeline goal:
        9/19/04:  Close on discussion and resolve any issues
        9/20/04:  Finalize proposal and send to a vote at the PPMC
        
        Cheers.
        Heather Stephens
        
        
        
        