I’ll throw out the idea that another factor is whether the developer can find 
an OOB solution rather than code it (in the case of ITSM at least – custom apps 
are different), or can be ‘minimally invasive’ with the OOB code and leverage what 
is already there.

 

From: Action Request System discussion list(ARSList) 
[mailto:[email protected]] On Behalf Of Charlie Lotridge
Sent: Wednesday, June 4, 2014 10:42 AM
To: [email protected]
Subject: Re: Remedy Developer Performance Metrics

 


Hi all,

 

Thanks for all your responses.  And while I didn't get quite what I was 
looking for, it's certainly my own fault for not starting with the narrower 
question I eventually posed.  Even that I should have qualified by stating 
"assuming perfectly efficient workflow".

 

I fully agree with all of the positions that the quantity of workflow varies 
significantly with the quality of that workflow, the complexity of the 
requirements, and many other factors.  I also agree that in isolation, 
"workflow object count" is a useless number.  I *do* think that as part of a 
broader set of measurable characteristics it can be used to say something 
useful about the developer, hopefully to be used constructively.  But this is a 
conversation that is diverging significantly from what I was looking for.

 

LJ, it's unfortunate that the poker point data was so misunderstood and 
misused, but I can only imagine that it must have been quite satisfying to the 
team that drove that point home with the 1000x formula.

 

I'll take you up on your offer to take this offline.  It might take me a while 
to put something together that makes sense, but please expect something within 
a day or so.

 

Thanks,
Charlie

 

On Wed, Jun 4, 2014 at 7:05 AM, LJ LongWing <[email protected] 
<mailto:[email protected]> > wrote:


Charlie,

I have a long-standing hatred of performance metrics, whose background I won't 
go into here, but I'll attempt to answer the basis of your question.

 

Where I work currently, we went through an 'Agile transformation' a few years 
back.  We all went through training on how to develop in an agile methodology; 
we discovered scrum masters, sprints, and all of the 'wonderfulness' of the 
agile methodology.  During our grooming sessions we played Agile Poker 
(http://en.wikipedia.org/wiki/Planning_poker) to estimate the level of effort 
of a given change.  The 'points' assigned to the modification gave an 
indication of how hard the change would be, and a 'velocity' was set that 
said...ok, during this sprint we can handle '50' points of effort.  With a 
sprint typically lasting 2 weeks, it would be agreed by all parties involved 
that the team could develop and test those 50 points in that 2-week period...it 
is typically assumed that a general scrum team's velocity can 
increase x% each sprint as the team gets into the groove.
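The sprint arithmetic above can be sketched in a few lines; the starting velocity, growth rate, and sprint count below are hypothetical numbers, not anything from the actual team:

```python
# Hypothetical sketch of sprint velocity compounding by x% per sprint,
# as described above.  All numbers are invented for illustration.

def projected_velocity(initial_points: float, growth_pct: float, sprints: int) -> list:
    """Return the expected points-per-sprint figures over several sprints."""
    velocities = []
    v = initial_points
    for _ in range(sprints):
        velocities.append(round(v, 1))
        v *= 1 + growth_pct / 100  # compound the assumed improvement
    return velocities

# 50 points to start, assumed 5% improvement per sprint, 4 sprints:
print(projected_velocity(50, 5, 4))  # [50, 52.5, 55.1, 57.9]
```

Of course, as the rest of the story shows, the neatness of this arithmetic is exactly what made the numbers so tempting to misuse.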

 

This process worked well for a while until the 'metric' folks got hold of 
these numbers.  The metric folks said ok...well, we will start measuring teams' 
performance based on these 'points'.  They started saying that this team was 
doing more work than that team because it was handling more points during a 
sprint...so one team started tacking three 0's onto the end of all of their points. 
They were then doing 1000 times more than any other team, and it became 
abundantly clear to the metrics folks that a 'point' system didn't determine 
how efficient a team was.
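The 1000x trick works because multiplying every estimate by a constant changes the raw "points completed" number without changing the work done at all. A toy sketch, with invented team names and estimates:

```python
# Hypothetical illustration of the 1000x trick: scaling every estimate
# by a constant inflates the raw metric but carries no information
# about actual output.  Estimates below are made up.

team_a = [3, 5, 8, 5]                 # honest estimates for one sprint
team_b = [p * 1000 for p in team_a]   # identical work, inflated estimates

print(sum(team_a))  # 21
print(sum(team_b))  # 21000 -- "1000x more productive" by the raw metric
```

Since points are a unit each team defines for itself, comparing raw totals across teams compares the units, not the teams.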

 

Even within my scrum team our point values varied....if I was doing the work, I 
would assign the effort a 2 or 3...but if I knew that I wasn't going to be 
the one doing the work, but instead a junior member of the team, I would 
assign it a 5 or an 8, because they would need to do more research into the 
system to figure out how to get it done than I would, given my time on the 
team and knowledge of the inner workings of the app.

 

The fact that the junior member of the team and I might generate the same 
code, and that I would do it faster, doesn't indicate that I'm better than they 
are, nor necessarily more productive...I just have more background.

 

So...this long story is to say that every time I have encountered a 
performance metric that someone is trying to use to evaluate 'who is 
better', I find that any metric like 'lines of code per hour' or 'objects 
per day' doesn't show enough of the picture to properly evaluate someone.

 

I instead prefer metrics that look at the whole environment/person: how does 
the developer interpret requirements, does the developer ask questions for 
clarification, how efficient is the workflow that is developed, how many 
defects come back on the code that is developed, etc.

 

As others have pointed out....400 objects that don't work well are worse than 
20 objects that work well.

 

Other factors that make a good developer are the ability to communicate with 
teammates, with management, and with the customer.  Some people are so 'heads 
down' that they might be able to program anything you want, but if you can't 
articulate your 'needs' to them in a way that they understand, and get what 
you are looking for back out of them...then they aren't a good developer in 
certain situations.

 

I would be happy to take this offline with you if you would like...maybe get a 
bit more into your reasons for looking for this metric.

 

On Tue, Jun 3, 2014 at 5:03 PM, Charlie Lotridge <[email protected] 
<mailto:[email protected]> > wrote:


LJ says 'performance metrics suck and don't work the way they are intended'.  
So, do you feel strongly about this?  Yikes! ;)

 

Really, though, while I didn't participate or even see any of those prior 
conversations about this subject, a couple points occur to me...

 

First, while you're of course entitled to your opinion, I hope your blanket 
dismissal of the subject doesn't discourage others from voicing theirs.  If the 
topic annoys you - and it seems to - my apologies.  Not my intention.

 

Second, I'd agree that "no one metric can accurately" say anything about 
anyone. My "one metric" examples were just given to spur the conversation. And 
perhaps others have more nuanced answers that involve more than one metric and 
include qualifications.  I'd be interested in hearing about those.  As a 
software engineer (my background), one of the metrics that has been used to 
judge my work has been "lines of code".  In and of itself it's not a useful 
metric, but combined with other factors it can help provide a broad picture of 
the performance of different developers.

 

Third, having such data doesn't make it bad or "wrong" data; it depends on how 
the data is used, just like any other data.  If used constructively, such 
metrics could, for example, help assess a developer's strengths and 
weaknesses, perhaps with the goal of coaching the developer to shore up 
those weaknesses.  And while it's certainly true that information like this can 
be misused, that doesn't mean we shouldn't have the conversation.

 

Fourth, there ARE clear differences in the performance of different developers. 
 Sometimes there are very valid reasons to judge the relative performance of 
developers.  Sometimes it's because hard choices have to be made like 
downsizing.  Is it better in these situations for the manager to just pick the 
individual(s) they like the least?  Or who they *think* are the least 
productive?  I smell a lawsuit.  Wouldn't hard metrics be useful in these cases?

 

Finally, a disclaimer: I don't now, and have no near-future plans to, use such 
metrics to evaluate anyone...I don't have anyone to evaluate.  And while my 
interest in the topic is more than idle curiosity, I won't be using it to 
fire anyone soon.  For me, this information is more for research purposes.

 

Thanks,

Charlie

 

On Tue, Jun 3, 2014 at 3:03 PM, LJ LongWing <[email protected] 
<mailto:[email protected]> > wrote:


My opinion is that 'performance metrics suck and don't work the way they are 
intended'.  There has been healthy debate over the years regarding exactly that 
subject, and every time it's happened, either on the list or otherwise, it ends 
up being that no one 'metric' can accurately say that this developer is doing 
'better' than another developer.

 

On Tue, Jun 3, 2014 at 3:46 PM, Charlie Lotridge <[email protected] 
<mailto:[email protected]> > wrote:


Hi all,

 

I'm curious...what are your opinions about what might be useful metrics for 
judging the performance of Remedy developers?  To narrow the conversation a 
bit, let's just talk about the creation of a new custom application, or a 
custom module for an existing application.  In other words, code generation.

 

So for example, you might tell me that a good developer can create at least 50 
logic objects (active links/filters/escalations) in a day.  Or create & format 
one form/day.

 

What are your opinions?

 

Thanks,
Charlie

_ARSlist: "Where the Answers Are" and have been for 20 years_ 

 


 


 


 



_______________________________________________________________________________
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
"Where the Answers Are, and have been for 20 years"
