In my world:

- no one looks over a programmer's shoulder to figure out exactly what bug she is working on today. If she wants a break by fixing easier bugs (whatever her definition of easier is), then she does it, and if she is paranoid, she doesn't check them in until their turn comes. It makes her look slow on the early ones and REALLY fast on the later ones (not that this is tracked particularly well).

- bugs are ordered by priority, which is somewhat independent of how hard they are to fix (a small sketch below illustrates the difference). Bugs get fixed ASAP when they either cause serious problems or have the potential to mask other bugs (and so interfere with further testing). I've seen plenty of P1 bugs fixed in 15 minutes.
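(To make that last point concrete: priority order and difficulty order are just two different sorts of the same bug list. The ids, priorities, and effort estimates below are invented purely for illustration.)

    # hypothetical bug list -- numbers made up to show that the two
    # orderings need not agree
    bugs = [
        {"id": 101, "priority": 1, "est_hours": 0.25},  # P1, but a 15-minute fix
        {"id": 102, "priority": 3, "est_hours": 16.0},  # low priority, hard to fix
        {"id": 103, "priority": 1, "est_hours": 40.0},  # P1 and genuinely hard
        {"id": 104, "priority": 2, "est_hours": 1.0},
    ]

    by_priority = sorted(bugs, key=lambda b: b["priority"])
    by_effort   = sorted(bugs, key=lambda b: b["est_hours"])

    print([b["id"] for b in by_priority])  # [101, 103, 104, 102]
    print([b["id"] for b in by_effort])    # [101, 104, 102, 103]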
Realistically:

- some bugs have to be fixed now because they keep someone else from getting work done;

- there are bugs that are likely to be fixed before the product ships and those which aren't.

Of the former set, other than those mentioned in my first category, developers ought to fix bugs in the order that works for them (if, while fixing bug 1, you noticed how a simple change would also fix bug 999, would you wait until it got to the top of the queue?). In general they should work in priority order, but you get more productivity when people have control of their own work (someone with a cold may only want to fix non-critical bugs today, for fear that a congested head will lead to mistakes). I can't imagine, though, scheduling a mix of easy and hard bugs. If you did enough work to discover that a bug is easy to fix, why not fix it during the discovery process?

If you wanted to test this:

- you'd need a pretty large sample, working with a team over time to allow for enough variation in "what to fix today";

- you'd really have to take the same code and have different people fix bugs under different schemes. I don't see a good measure that would allow teams using different methods on different code bases to be compared;

- the dependent measure would have to be something like the overall quality of the code. Maybe, as a really controlled experiment, you could measure the (weighted) total number of bugs fixed (a rough sketch of what I mean follows the quoted message below), as well as some measure of programmer satisfaction (though it's clear how that would come out, so I'm not sure what you would learn by measuring it).

Robin Arman Anwar wrote:
>
> Greetings,
>
> I have the following observations:
>
> 1. when we have a set of bugs of varying criticality and complexity
> 2. fixing easy bugs does add to the team morale and thus productivity
> 3. unfortunately it becomes a difficult business case because it is
>    difficult to justify prioritising simple fixes when there are more
>    critical and difficult fixes waiting to be addressed.
>
> The bottom line, I think, is that a software build should have a healthy
> mix of hard and soft problems. Any existing body of work that addresses
> this?
>
> If not, how would one measure this?
>
> Thanks,
>
> Arman.
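P.S. Here is the rough sketch of the "weighted total number of bugs fixed" measure mentioned above. The weighting scheme and the bug data are entirely made up; the point is only that each fixed bug counts according to its priority, so one P1 fix outweighs several P4 fixes.

    # assumed weights per priority level -- hypothetical, not from any tracker
    PRIORITY_WEIGHT = {1: 8, 2: 4, 3: 2, 4: 1}

    # (bug id, priority) pairs for bugs fixed during the study period
    fixed_bugs = [(101, 1), (104, 2), (207, 4), (208, 4)]

    weighted_total = sum(PRIORITY_WEIGHT[priority] for _, priority in fixed_bugs)
    print(weighted_total)  # 8 + 4 + 1 + 1 = 14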
