Philippe, thanks for the comments. A few further clarifications...
+ So I am a bit confused about whether we are tracking just the
desktop project milestones or the overall product suite we are
releasing (Desktop, Server, Hosted Service). The milestones and
definitions are just for the desktop, right? Maybe we feel the
desktop milestones will be the long pole and that's OK. I was trying
to find a way to bring in the other pieces because ultimately the
product isn't done until all of that is complete - that is our launch
date. We could choose not to track (at the same level) feature freeze
and feature complete milestones for Cosmo, docs and the hosted
service if people don't think that's an issue. I certainly don't want
to bombard people with more info and metrics than are necessary :-).
I was only trying to be helpful in addressing the concerns.
+ The development milestones and exit criteria are important for
evaluating where we are in the release, but I would argue that
tracking workflows is equally important and relevant to everyone. In
the end the finish line for Preview will be defined by these, and the
workflows inform what ends up in our bug list in the first place
(performance being a bit more complicated). This is the way QA tests
(and the way we use the application in dogfooding scenarios), and I
think it's particularly helpful to see things like "we have 15 bugs
to fix for this workflow vs. 2 for that one" (I've sketched below how
we might tally this). We can then address questions like: should we
bother supporting a certain workflow at all, which implies punting or
not punting a series of bugs? It gives us some perspective on how
broken something is and the effort to fix it. Workflows also span
projects (Desktop, Server, Hosted Service), which gives us the added
value of tracking the dependencies and making sure we aren't
forgetting anything.
+ Ultimately I feel bug prioritization should be based on the user
experience and come from a product perspective - understanding that
development cost and risk are also significant factors, as are the
development swags (engineering reality). I see the priorities as
follows:
P1 Playing around with the app, trying out the features
P2 Stuff that blocks day to day usage
P3 Stuff that confuses people
P4/5 Stuff that makes it more likely for people to stick with
Chandler (conveniences)
Tagging these as crash, dataloss, interop, workaround etc. is
certainly useful, but I don't think those tags alone accurately
reflect our priorities. Not all crash bugs are of equal importance,
e.g. a crash every time I create an event vs. a crash in some obscure
workflow that can be handled by a relnote. Interop, too, has more
meaning when it's about whether someone can use the app at all
(import their data) rather than simply share a calendar on Cosmo.
Performance is harder, but we have already been prioritizing
performance this way, with a set of problematic workflows based on
what people find the most annoying. I think defining a consistent
prioritization scheme is really important for getting us to the
finish line and helps us focus on what's absolutely necessary.
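As a footnote on the per-workflow bug counts: here is a minimal
sketch (Python, since that's what we live in) of how we could tally
open bugs per workflow from a Bugzilla CSV export. The file name, the
column names and the idea of tagging the status whiteboard with
"workflow=..." are just assumptions for illustration, not conventions
we've agreed on.

import collections
import csv
import re

# Count open bugs per workflow from a Bugzilla CSV export
# (buglist.cgi with ctype=csv). Assumes the export includes the
# bug_status and status_whiteboard columns, and that bugs carry a
# hypothetical "workflow=share-calendar" style tag in the whiteboard.
OPEN_STATES = set(["NEW", "ASSIGNED", "REOPENED"])
counts = collections.defaultdict(int)

reader = csv.DictReader(open("preview-bugs.csv"))  # hypothetical export
for row in reader:
    if row.get("bug_status") not in OPEN_STATES:
        continue
    match = re.search(r"workflow=([\w-]+)", row.get("status_whiteboard", ""))
    counts[match.group(1) if match else "untagged"] += 1

# Prints e.g. "share-calendar: 15" then "print-calendar: 2"
for workflow, count in sorted(counts.items(), key=lambda kv: -kv[1]):
    print("%s: %d" % (workflow, count))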
Cheers,
Sheila
On Feb 28, 2007, at 11:09 PM, Philippe Bossut wrote:
Sheila Mooney wrote:
-> We could also track some sub-milestone criteria such as the
following....
+ What features have we done?
+ What features are left to do?
+ What workflows are complete?
+ What workflows are incomplete?
+ What's been tested?
+ What hasn't been tested?
+ What non-code deliverables are complete?
+ What non-code deliverables are incomplete?
I think it's useful for individual contributors to identify such
sub-milestones for themselves, but for the overall project this is
not very useful, as only a couple of people at a time would be on the
hook for any such micro-granular milestone. I think at the project
level we should pace ourselves with criteria that are applicable and
relevant to all.
-> Tracking workflows helps me as well since this makes it easier to
punt stuff. We could decide to defer a complete workflow which
entails punting a group of bugs across multiple features.
I think workflows are a way of looking at the product that is
orthogonal to the way the product is architected, so they are not
very well suited to identifying milestones.
-> The other thing I heard is that we could also agree on our
ultimate end-user goals and set those expectations properly. Do we
want real users or just people trying out the app to give us
feedback? A proposal...
*Dogfood Goals:*
Get people outside of OSAF to dogfood Chandler the way Mitch,
Sheila, Priss and Mimi have been dogfooding:
+ Using Sharing
+ Using the Dashboard
+ Using the Calendar
+ Using Edit/Update
+1, we need real users, even for a limited set of features. Until we
get there, we're nowhere.
-> Also, giving people some visibility into how we are going to
prioritize/punt bugs is worthwhile so that they know what categories
the P1s and P2s fit into and they understand we aren't gold-plating
but fixing very core problems. Then we can make decisions like "punt
all bugs in the convenience category" or say that anything which
doesn't make it into those buckets gets deferred automatically. (We
may need an additional bucket for developer/build/QA related stuff -
this is just an example.)
*Buckets for prioritizing features and bugs:*
+ Playing around with the app, trying out the features
+ Stuff that blocks day to day usage
+ Stuff that confuses people
+ Stuff that makes it more likely for people to stick with
Chandler (conveniences)
+1, though I would use more stringent categories, like:
+ crash
+ dataloss
+ broken functionality, no workaround
+ broken functionality, workaround available
+ interop
+ cosmetic
We can use keywords in Bugzilla to mark bugs (some of those
categories already have keywords) and make clear which categories of
bugs are being considered for each debug period.
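For instance, a query along these lines could pull the open bugs in
one category (the host name here is a placeholder and the exact
keyword names are an assumption; the buglist.cgi parameters are
standard Bugzilla):

https://bugzilla.example.org/buglist.cgi?keywords=dataloss&keywords_type=anywords&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED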
Cheers,
- Philippe
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Open Source Applications Foundation "Design" mailing list
http://lists.osafoundation.org/mailman/listinfo/design