This discussion / argument is no different than what we dealt with in the
1980s: how do you measure quality?  During that time I headed up a software
testing group that was working on automated testing (we actually built our
own tool back in the days of, dare I mention it, DOS).  We tried to measure
things like LOC, etc., but the more highly skilled developers did things in
fewer LOC than the lower-level folks.  Then there's the conundrum of simple
vs. complex changes.  The best we could come up with was the number of
failures (as opposed to faults - refer to Boris Beizer on the difference
between the two) and the cost incurred in correcting a failure that was
released.  The system was an extensively used PC-based financial application
used by a large number of clients.  Then we developed a method (since the
system communicated with our mainframe on a regular basis) to download
patches rather than have our regional force visit the client and provide
the updates.  The cost to correct went way down, which then skewed that
metric.  Basically, this is a metric that has been struggled with since the
inception of computers, and it seems to be a moving target based on the
technology, the complexity of the system, and the cost to correct issues.
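
Just to make that metric concrete, here is a minimal sketch (mine, in
Python, with hypothetical names - nothing like this existed in our DOS
tool) of counting released failures and the average cost to correct them,
and of why cheaper patch delivery skews the cost term:

# Hypothetical sketch of a released-failure metric. A "failure" is a
# misbehavior observed in the field, as opposed to a latent "fault" in
# the code (Beizer's distinction).
from dataclasses import dataclass

@dataclass
class Failure:
    description: str
    correction_cost: float  # labor plus delivery cost to fix and ship

def failure_metrics(failures):
    """Return (count of released failures, average cost to correct one)."""
    count = len(failures)
    avg_cost = sum(f.correction_cost for f in failures) / count if count else 0.0
    return count, avg_cost

# Once patches were downloaded over the mainframe link instead of being
# delivered on-site, correction_cost dropped sharply - the metric moved
# without any change in actual code quality.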

 

It kind of goes back to that Fast, Accurate, Cheap concept - is it in-house
software or widely distributed (OK, so with the advent of 'cloud' this has
also changed)?  Is the developer a salaried employee or a consultant (i.e.,
fixed cost or not)?  How critical are the code changes from a data POV
(i.e., the risk if something goes wrong)?  Kind of reminds me of my days of
Cost Accounting in college - hurts to think about that stuff.

 

When I was working on project plans for major releases, I always had folks
tell me that a developer can't estimate projects based on what they think
it would take THEM to do the task (as most developers who build project
plans are usually the more skilled folks) but on what it would take a
'normal' person to do it.

 

I imagine the struggle will continue long past Remedy and technology as we
know it.

 

From: Action Request System discussion list (ARSList)
[mailto:[email protected]] On Behalf Of Danny Kellett
Sent: Thursday, June 5, 2014 4:55 AM
To: [email protected]
Subject: Re: Remedy Developer Performance Metrics

 


Quantity != Quality

 

--

Danny Kellett

[email protected]

 

 

 

On Thu, Jun 5, 2014, at 09:31 AM, Theo Fondse wrote:


Hi Charlie!

Although I have grown over the past few years to fully agree with LJ on
this subject, I can also understand the need for metrics: something to go
by to know whether our performance is on par or not.

Sadly, LOC-style measurements no longer give a true picture of actual
performance, especially in terms of Remedy development.

The company I am currently working for has a 100% custom Remedy solution
and measures performance based on the number of requests closed per day,
irrespective of workflow object count (but all types of incidents,
problems, and change requests are included in this figure).

In my opinion, this is a better performance metric than a pure LOC count,
but it is also flawed, because some types of requests are quicker and
easier to close than others.

Shawn Pierson nailed it very well in his mail: if the purpose of the
exercise is to determine the "quality" of a Remedy developer, your metrics
should answer the questions that guide decisions about which Remedy
developer will truly be most beneficial to keep when the time comes to
make those "hard decisions".

Dave Shellman also pointed out the efficiency of code argument. 

I would like to add to what he said by pointing out that the better Remedy
Developer will: 

 1) Add config data to the system to configure it to do something rather
than write superfluous/duplicate code. 

 2) Pre-allocate field IDs and share Filters/Active Links between multiple
forms.

These practices effectively lower your LOC count, so a LOC count does not
paint a true picture of the quality or quantity of performance in such
cases.
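
As a generic illustration of point 1 (a Python analogy of my own - Remedy
workflow objects aren't code, so treat the names as hypothetical), here is
the difference between duplicated per-case workflow and one data-driven
rule:

# Duplicated style: one near-identical routine per category - in Remedy
# terms, N near-identical Filters.
def route_hardware(ticket): ticket["queue"] = "HW Support"
def route_software(ticket): ticket["queue"] = "SW Support"
def route_network(ticket):  ticket["queue"] = "Net Ops"

# Data-driven style: one generic routine plus config data - adding a
# category is a data change, not new workflow.
ROUTING_CONFIG = {
    "hardware": "HW Support",
    "software": "SW Support",
    "network":  "Net Ops",
}

def route(ticket):
    ticket["queue"] = ROUTING_CONFIG[ticket["category"]]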

Server and network infrastructure performance also plays a role in developer
performance. 

If you are working on a server that takes 38 seconds just to open an
active link, you cannot be expected to churn out hundreds of active links
a day.

Anyone will be able to (intentionally or unintentionally) exploit LOC-based
metrics to their advantage by bloating their code, simply by: 

 1) Adding a number of extra Filters or Active Links instead of making
something data-configurable or generic.

 2) Copy-pasting Filters or Active Links rather than pre-allocating field
IDs and sharing workflow between forms.

 3) Writing bloated Filters/Active Links that only seem to be doing
something relevant, but have no effect on the actual functionality.

But, to answer your original question:

If the company insists on measuring performance on an LOC basis, and if

 1) You are guaranteed to always be presented with complete, clear,
signed-off, and sufficient requirement specification documentation,

 2) You are guaranteed to have access to properly performing infrastructure
(active link opens up in <3s),

 3) You are focused on doing development and are not required to stop what
you are doing and attend to support issues,

 4) You do not attend more than 2 hours' worth of meetings or workshops a
week,

 5) Complexity of the solution is low to medium,

 6) Good-quality code is required (as opposed to only evaluating whether
it seems to be doing what was required),

 7) There are no external integrations to other systems where you do not
have full admin access and responsibility,

then my opinion is that, on an 8-hour working day, a good Remedy Developer
should be able to produce anywhere between 15 and 100 objects a day counting
a total combination of Forms, Active Links, Filters, Escalations, Guides,
Applications, Web Services, Menus, and Flashboards.

This is based on spending roughly 5 to 30 minutes per object, on average.
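
The arithmetic behind that range, assuming a 480-minute working day (a
quick check of my own, in Python):

# 8-hour day = 480 minutes; ~5 to ~30 minutes per object.
MINUTES_PER_DAY = 8 * 60
slowest = MINUTES_PER_DAY // 30   # 16 objects/day, close to the 15 quoted
fastest = MINUTES_PER_DAY // 5    # 96 objects/day, close to the 100 quoted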

If a Remedy developer is creating more than an average of 100 objects a
day, then that developer is probably running the risk of not ensuring
good-quality code, because he/she is:

 1) Copy-pasting code and not testing it the way it should be tested. 

 2) Unwittingly creating a bloated monster of a system that is going to be a
costly nightmare to maintain. 

In such cases, one could start looking at: 

 1) Synchronising field IDs across all forms.

 2) Writing more generic code that can be shared and/or made data-driven.

HTH.

Best Regards,

Theo

 

 

On Wed, Jun 4, 2014 at 4:41 PM, Charlie Lotridge <[email protected]>
wrote:


Hi all,

 

Thanks for all your responses.  And, while I didn't get quite what I was
looking for, it's certainly my own fault for not starting with the
narrower question I eventually posed.  And even that I should have
qualified by stating "assuming perfectly efficient workflow".

 

I fully agree with all of the positions that the quantity of workflow varies
significantly with the quality of that workflow, the complexity of the
requirements, and many other factors.  I also agree that in isolation,
"workflow object count" is a useless number.  I *do* think that as part of a
broader set of measurable characteristics it can be used to say something
useful about the developer, hopefully to be used constructively.  But this
is a conversation that is diverging significantly from what I was looking
for.

 

LJ, it's unfortunate that the poker point data was so misunderstood and
misused, but I can only imagine that it must have been quite satisfying to
the team that drove that point home with the 1000x formula.

 

I'll take you up on your offer to take this offline.  It might take me a
while to put something together that makes sense, but please expect
something within a day or so.

 

Thanks,

Charlie

 

 

On Wed, Jun 4, 2014 at 7:05 AM, LJ LongWing <[email protected]>
wrote:


Charlie,

I have a long-standing hatred of performance metrics, whose background I
won't go into here, but I'll attempt to answer the basis of your question.

 

Where I work currently, we went through an 'Agile transformation' a few
years back.  We all went through training on how to develop in an agile
methodology; we discovered scrum masters, sprints, and all of the
'wonderfulness' of the agile methodology.  During our grooming sessions we
played Agile Poker (http://en.wikipedia.org/wiki/Planning_poker) to
estimate the level of effort of a given change.  The 'points' assigned to
a modification gave an indication of how hard the change would be, and a
'velocity' was set that said...OK, during this sprint we can handle '50'
points of effort.  With a sprint typically lasting 2 weeks, it would be
agreed by all parties involved that the team could develop and test those
50 points in that 2-week period.  It is typically assumed that, for a
general scrum team, the velocity can increase x% each sprint as the team
gets into the groove.
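
A rough sketch (mine, in Python) of that compounding-velocity assumption,
with the x% growth figure left as a parameter:

# Hypothetical velocity projection: a team commits to some points per
# 2-week sprint, assumed to grow x% per sprint as the team settles in.
def projected_velocity(initial_points, growth_pct, sprint):
    return initial_points * (1 + growth_pct / 100.0) ** sprint

for s in range(4):
    print(f"sprint {s}: ~{projected_velocity(50, 5, s):.1f} points")
# sprint 0: ~50.0, sprint 1: ~52.5, sprint 2: ~55.1, sprint 3: ~57.9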

 

This process worked well for a while, until the 'metric' folks got hold of
these numbers.  The metric folks said OK...well, we will start measuring
teams on performance based on these 'points'.  They started saying that
this team was doing more work than that team because it was handling more
points during a sprint...so one team started tacking three 0's onto the
end of all of their points.  They were then doing 1000 times more than any
other team, and it became abundantly clear to the metrics folks that a
'point' system didn't determine how efficient a team was.

 

Even within my scrum team our point values varied...if I was doing the
work, I would assign the effort a 2 or 3...but if I knew that I wasn't
going to be the one doing the work, and that a junior member of the team
would be instead, I would assign it a 5 or an 8, because they would need
to do more research into the system to figure out how to get it done than
I would, given my time on the team and knowledge of the inner workings of
the app.

 

The fact that the junior member of the team and I might generate the same
code, and that I would do it faster, doesn't indicate that I'm better than
them, nor necessarily more productive...I just have more background than
they do.

 

So...this long story is to say that every time I have encountered a
performance metric that someone is trying to use to evaluate 'who is
better', I find that any metric like 'lines of code per hour' or 'objects
per day', etc., doesn't show enough of the picture to properly evaluate
someone.

 

I instead prefer a metric that looks at the whole environment/person: how
does the developer interpret requirements, does the developer ask
questions for clarification, how efficient is the workflow that is
developed, how many defects come back on the code that is developed, etc.

 

As others have pointed out...400 objects that don't work well are worse
than 20 objects that work well.

 

Other factors that determine a good developer are the ability to
communicate with teammates, with management, and with the customer.  Some
people are so 'heads down' that they might be able to program anything you
want, but if you can't articulate your 'needs' to them in a way that they
understand, and get back from them what you are looking for...then they
aren't a good developer in certain situations.

 

I would be happy to take this offline with you if you would like...maybe get
a bit more into your reasons for looking for this metric.

 

 

On Tue, Jun 3, 2014 at 5:03 PM, Charlie Lotridge <[email protected]>
wrote:


LJ says 'performance metrics suck and don't work the way they are intended'.
So, do you feel strongly about this?  Yikes! ;)

 

Really, though, while I didn't participate in or even see any of those
prior conversations about this subject, a couple of points occur to me...
 

First, while you're of course entitled to your opinion, I hope your blanket
dismissal of the subject doesn't discourage others from voicing theirs.  If
the topic annoys you - and it seems to - my apologies.  Not my intention.

 

Second, I'd agree that "no one metric can accurately" say anything about
anyone.  My "one metric" examples were just given to spur the
conversation, and perhaps others have more nuanced answers that involve
more than one metric and include qualifications.  I'd be interested in
hearing about those.  As a software engineer (my background), one of the
metrics that has been used to judge my work is "lines of code".  In and of
itself it's not a useful metric, but combined with other factors it can
help provide a broad picture of the performance of different developers.

 

Third, having such data doesn't make it bad or "wrong" data; it depends on
how the data is used, just like any other data.  If used constructively,
such metrics could, for example, help assess a developer's strengths and
weaknesses, perhaps with the goal of working with/educating the developer
to shore up those weaknesses.  And while it's certainly true that
information like this can be misused, that doesn't mean we shouldn't have
the conversation.

 

Fourth, there ARE clear differences in the performance of different
developers.  Sometimes there are very valid reasons to judge the relative
performance of developers; sometimes hard choices have to be made, like
downsizing.  Is it better in these situations for the manager to just pick
the individual(s) they like the least?  Or the ones they *think* are the
least productive?  I smell a lawsuit.  Wouldn't hard metrics be useful in
these cases?

 

Finally, a disclaimer: I have no current or near-future plans to use such
metrics to evaluate anyone...I don't have anyone to evaluate.  And while
my interest in the topic is more than just idle curiosity, I won't be
using it to fire anyone soon.  For me, this information is more for
research purposes.

 

Thanks,

Charlie

 

 

On Tue, Jun 3, 2014 at 3:03 PM, LJ LongWing <[email protected]>
wrote:


My opinion is that 'performance metrics suck and don't work the way they
are intended'.  There has been healthy debate over the years regarding
exactly that subject, and every time it has come up, either on the list or
otherwise, it has ended with the conclusion that no one 'metric' can
accurately say that this developer is doing 'better' than another
developer.

 

 

On Tue, Jun 3, 2014 at 3:46 PM, Charlie Lotridge <[email protected]>
wrote:


Hi all,

 

I'm curious...what are your opinions about what might be useful metrics
for judging the performance of Remedy developers?  To narrow the
conversation a bit, let's just talk about the creation of a new custom
application, or a custom module for an existing application.  In other
words, code generation.

 

So for example, you might tell me that a good developer can create at least
50 logic objects (active links/filters/escalations) in a day.  Or create &
format one form/day.

 

What are your opinions?

 

Thanks,

Charlie
