A good starting point for metrics could be the number of bugs/issues raised 
against a development effort and the number of recurrences of the same 
behavior over a period of time. 
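
For illustration only, a minimal sketch of how those two counts might be tallied (module names and defect signatures are invented):

```python
from collections import Counter

# Hypothetical issue log: (development_effort, defect_signature) pairs.
issues = [
    ("incident-module", "null value on submit"),
    ("incident-module", "null value on submit"),
    ("change-module", "slow table refresh"),
]

# Metric 1: bugs/issues raised per development effort.
bugs_per_effort = Counter(effort for effort, _ in issues)

# Metric 2: recurrences of the same behavior over the period.
recurrences = Counter(sig for _, sig in issues)

print(dict(bugs_per_effort))  # {'incident-module': 2, 'change-module': 1}
print(dict(recurrences))
```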

Regards,
Roney Samuel Varghese. 
Sent from my iPhone

> On Jun 5, 2014, at 8:50 AM, JD Hood <hood...@gmail.com> wrote:
> 
> The thought of trying to measure something as variable as development -- 
> given that there are usually more than "nine ways to skin a cat" in Remedy -- 
> tends to make me believe the idea stems from a manager with far too much time 
> on their hands. 
> 
> And as far as measuring development, I would think you would have to give 
> excruciatingly exact development tasks to each of your developers (and why 
> would you do that?) in order to collect meaningful metrics. Off the top of my 
> head, presuming you could come up with requirements so finely detailed that 
> there was only a single way to develop them, you could perhaps use this as 
> part of a job-applicant screening process -or- perhaps as part of the 
> employee's annual review. But if the devs have to take a test as part of 
> their annual review, I would think it only fair that the managers have to 
> take a test as well, one designed by the developers.
> 
> But as a production development measurement process, how would even a fairly 
> "general" measurement be meaningful if during the measurement period, 
> developer "A" mostly just added/repositioned/edited text fields with minimal 
> workflow and developer "B" worked on a robust, non-ITSM, bespoke application? 
> 
> I would think you would also need a measurement system for (including, but 
> not limited to):
> - The individual staff members contributing/defining requirements
> - The quality/completeness/ambiguity of the requirements gathered/documented
> - Any changes along the way
> - The over/under/confusion-injecting involvement of non-technical 
> stakeholders and managers.
> 
> Given those measurements, along with the developers' metrics, I would think 
> the outcome would quite often recommend reassigning the metrics-seeking 
> micro-manager(s) to somewhere they can't do as much damage... 
> 
> Now, I'm not an anti-management rebel as the above may suggest - I rely on 
> and have worked with darned good managers and continue to do so today. 
> However, over the years I've worked with lots of managers, good and bad, both 
> with my employers and customers. So when I hear someone say that they are 
> looking for a way to measure development or capture development metrics, I 
> don't see how you could get **meaningful** metrics simply because development 
> efforts are rarely identical enough to compare one to the next fairly 
> (individual scale or project scale). It strikes me as an exercise in futility 
> -- or better yet, gathering metrics for the sake of gathering metrics. Again, 
> as if the idea stems from a manager with far too much time on their hands who 
> sees a new excel spreadsheet full of raw numbers like a Rubik's Cube: 
> "something fun to fiddle with". 
> 
> My point of view on this stems from my personal experience where I've worked 
> with folks who use the ITIL "measurement" philosophy as something to hide 
> behind in order to measure waaaaaay too much just because they can and not 
> necessarily because there is a clear *business-need*.
> 
> I would wager that you would realize far better productivity, along with a 
> substantial boost in morale, if you were to just get rid of the manager who 
> suggested the idea in the first place. For if that manager seriously 
> suggested this idea, who knows what other "great ideas" he has inflicted on 
> the organization?
> 
> Note: This is merely my stinky personal opinion -and- opinions will likely 
> vary...
> -JDHood
> 
> 
> 
> 
>> On Thu, Jun 5, 2014 at 4:31 AM, Theo Fondse <theo.fon...@gmail.com> wrote:
>> Hi Charlie!
>> 
>> Although I have grown over the past few years to fully agree with LJ on this 
>> subject, I can also understand the need for metrics as something to go 
>> by to know whether our performance is on par or not.
>> Sadly, measurements in the LOC style no longer give a true picture of actual 
>> performance especially in terms of Remedy development.
>> The company I am currently working for has a 100% custom Remedy solution, 
>> and is measuring performance based on the number of requests closed per day 
>> irrespective of workflow object count (but they include all types of 
>> incidents, problems and change requests in this figure). 
>> In my opinion, this is a better performance metric than pure LOC count, but 
>> is also flawed because some types of requests are quicker and easier to 
>> close than others.
>> 
>> Shawn Pierson nailed it very well in his mail: if the purpose of the 
>> exercise is to determine the "quality" of a Remedy developer, then the real 
>> question is which questions your metrics should answer if you want your 
>> company to keep the Remedy developer who will truly be most beneficial 
>> when the time comes to make those "hard decisions".
>> 
>> Dave Shellman also pointed out the efficiency of code argument. 
>> I would like to add to what he said by pointing out that the better Remedy 
>> Developer will: 
>>  1) Add config data to the system to configure it to do something rather 
>> than write superfluous/duplicate code. 
>>  2) Pre-allocate field IDs and share Filters/Active Links between multiple 
>> forms. 
>> These effectively lower your LOC count, so LOC count does not 
>> paint a true picture of quality or quantity of performance in such cases.
>> 
>> Server and network infrastructure performance also plays a role in developer 
>> performance. 
>> If you are working on a server that takes 38 seconds just to open an active 
>> link, you cannot be expected to churn out hundreds of active links a day.
>> 
>> Anyone will be able to (intentionally or unintentionally) exploit LOC-based 
>> metrics to their advantage by bloating their code, simply by: 
>>  1) Adding a number of extra Filters or Active Links instead of making 
>> something data-configurable or generic.
>>  2) Copy-pasting Filters or Active Links rather than pre-allocating 
>> field IDs and sharing workflow between forms.
>>  3) Writing bloated Filters/Active Links that only seem to be doing 
>> something relevant, but have no consequence to the actual functionality.
>> 
>> But, to answer your original question:
>> If the company insistence remains on measuring performance on an LOC basis, 
>> and if 
>>  1) You are guaranteed to always be presented with complete, clear, 
>> signed-off and sufficient requirement specification documentation,
>>  2) You are guaranteed to have access to properly performing infrastructure 
>> (an active link opens in <3s),
>>  3) You are focused on doing development and are not required to stop what 
>> you are doing and attend to support issues,
>>  4) You do not attend more than 2 hours' worth of meetings or workshops a 
>> week,
>>  5) Complexity of the solution is low to medium,
>>  6) Good-quality code is required (as opposed to only evaluating whether it 
>> seems to be doing what was required),
>>  7) There are no external integrations to other systems where you do not 
>> have full admin access and responsibility,
>> then my opinion is that, on an 8-hour working day, a good Remedy Developer 
>> should be able to produce anywhere between 15 and 100 objects a day, counting 
>> a total combination of Forms, Active Links, Filters, Escalations, Guides, 
>> Applications, Web Services, Menus, and Flashboards.
>> This is based on spending roughly 5 to 30 minutes per object.
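
A quick arithmetic check of that range (the 8-hour day and per-object times come from the figures above; the function name is just for illustration):

```python
# Quick sanity check of the objects-per-day range quoted above.
WORK_DAY_MIN = 8 * 60  # 8-hour working day, in minutes

def objects_per_day(minutes_per_object: int) -> int:
    """How many workflow objects fit in one working day at a given pace."""
    return WORK_DAY_MIN // minutes_per_object

slow = objects_per_day(30)  # careful pace: ~30 min per object
fast = objects_per_day(5)   # quick pace: ~5 min per object
print(slow, fast)  # 16 96 -- roughly the 15-100 range above
```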
>> 
>> If a Remedy developer is creating more than an average of 100 objects a day, 
>> then that developer runs the risk of not producing good-quality code, 
>> because he/she is: 
>>  1) Copy-pasting code and not testing it the way it should be tested. 
>>  2) Unwittingly creating a bloated monster of a system that is going to be a 
>> costly nightmare to maintain. 
>> In such cases, one could start looking at: 
>>  1) Synchronising field IDs across all forms.
>>  2) Writing more generic code that can be shared and/or made data-driven.
>> 
>> HTH.
>> 
>> Best Regards,
>> Theo
>> 
>> 
>> 
>>> On Wed, Jun 4, 2014 at 4:41 PM, Charlie Lotridge <lotri...@mcs-sf.com> 
>>> wrote:
>>> Hi all,
>>> 
>>> Thanks for all your responses.  And, while I didn't get quite what I was 
>>> looking for, it's certainly my own fault for not starting with the 
>>> narrower question I eventually posed.  And even that I should have qualified 
>>> by stating "assuming perfectly efficient workflow".
>>> 
>>> I fully agree with all of the positions that the quantity of workflow 
>>> varies significantly with the quality of that workflow, the complexity of 
>>> the requirements, and many other factors.  I also agree that in isolation, 
>>> "workflow object count" is a useless number.  I *do* think that as part of 
>>> a broader set of measurable characteristics it can be used to say something 
>>> useful about the developer, hopefully to be used constructively.  But this 
>>> is a conversation that is diverging significantly from what I was looking 
>>> for.
>>> 
>>> LJ, it's unfortunate that the poker point data was so misunderstood and 
>>> misused, but I can only imagine that it must have been quite satisfying to 
>>> the team that drove that point home with the 1000x formula.
>>> 
>>> I'll take you up on your offer to take this offline.  It might take me a 
>>> while to put something together that makes sense, but please expect 
>>> something within a day or so.
>>> 
>>> Thanks,
>>> Charlie
>>> 
>>> 
>>>> On Wed, Jun 4, 2014 at 7:05 AM, LJ LongWing <lj.longw...@gmail.com> wrote:
>>>> Charlie,
>>>> I have a long standing hatred of performance metrics, that I won't go into 
>>>> the background for here, but I'll attempt to answer the basis of your 
>>>> question.
>>>> 
>>>> Where I work currently, we went through an 'Agile transformation' a few 
>>>> years back.  We all went through training on how to develop in an agile 
>>>> methodology, we discovered scrum masters, sprints, and all of the 
>>>> 'wonderfulness' of the agile methodology.  During our grooming sessions we 
>>>> played Agile Poker (http://en.wikipedia.org/wiki/Planning_poker) to 
>>>> estimate the level of effort of a given change.  The 'points' assigned to 
>>>> the modification gave an indication of how hard the change would be, and a 
>>>> 'velocity' was set that said...ok, during this sprint we can handle '50' 
>>>> points of effort.  With a sprint typically lasting 2 weeks, it would be 
>>>> agreed by all parties involved that the team could develop and test those 
>>>> 50 points in that 2-week period...it is typically assumed that, given a 
>>>> general scrum team, the velocity can increase x% each sprint as the 
>>>> team gets into the groove.
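
The velocity mechanics described above can be sketched numerically (the 50-point baseline comes from the example; the 5% growth rate is an assumed stand-in for the unspecified 'x%'):

```python
def projected_velocity(base_points: float, growth_pct: float, sprint: int) -> float:
    """Velocity after `sprint` sprints, assuming compound growth per sprint."""
    return base_points * (1 + growth_pct / 100) ** sprint

# Team agrees it can handle 50 points in a 2-week sprint, and assumes
# a 5% improvement each sprint as it 'gets into the groove'.
print(round(projected_velocity(50, 5, 0)))  # 50
print(round(projected_velocity(50, 5, 4)))  # 61 after four sprints
```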
>>>> 
>>>> This process worked well for a while until the 'metric' folks got hold of 
>>>> these numbers.  The metric folks said ok...well, we will start measuring 
>>>> teams' performance based on these 'points'.  They started saying that 
>>>> this team was doing more work than that team because they were handling 
>>>> more points during a sprint...so one team started tacking three 0's onto 
>>>> the end of all of their points.  They were then doing 1000 times more than 
>>>> any other team, and it became abundantly clear to the metrics folks that a 
>>>> 'point' system didn't determine how efficient a team was.
>>>> 
>>>> Even within my scrum team our point values varied....if I was doing the 
>>>> work, I would assign the effort a 2 or 3...but if I knew that I wasn't 
>>>> going to be the one doing the work, but instead a junior member of 
>>>> the team, I would assign it a 5 or an 8, because they would need to do more 
>>>> research into the system to figure out how to get it done than I would, 
>>>> because of my time on the team and knowledge of the inner workings of the 
>>>> app.
>>>> 
>>>> The fact that the junior member of the team and I might generate the 
>>>> same code, and I would do it faster, doesn't indicate that I'm better than 
>>>> them, nor necessarily more productive...I just have more background than 
>>>> they do.
>>>> 
>>>> So...this long story is to say that every time I have ever encountered a 
>>>> performance metric that someone is trying to use to evaluate 'who is 
>>>> better'...I find that any metric like 'lines of code per hour' or 
>>>> 'objects per day', etc., doesn't show enough of the picture to properly 
>>>> evaluate someone.
>>>> 
>>>> I instead prefer a metric that works on the whole environment/person. 
>>>> I prefer to look at 'how does the developer interpret 
>>>> requirements, does the developer ask any questions for clarification, how 
>>>> efficient is the workflow that is developed, how many defects come back on 
>>>> the code that is developed, etc.
>>>> 
>>>> As others have pointed out....400 objects that don't work well are worse 
>>>> than 20 objects that work well.
>>>> 
>>>> Other factors that determine a good developer are the ability to 
>>>> communicate with teammates, with management, and with the customer.  
>>>> Some people are so 'heads down' that they 
>>>> might be able to program anything you want, but if you can't articulate 
>>>> your 'needs' to them in a way that they understand, and have them get you 
>>>> what you are looking for back out of that...then they aren't a good 
>>>> developer in certain situations.
>>>> 
>>>> I would be happy to take this offline with you if you would like...maybe 
>>>> get a bit more into your reasons for looking for this metric.
>>>> 
>>>> 
>>>>> On Tue, Jun 3, 2014 at 5:03 PM, Charlie Lotridge <lotri...@mcs-sf.com> 
>>>>> wrote:
>>>>> LJ says 'performance metrics suck and don't work the way they are 
>>>>> intended'.  So, do you feel strongly about this?  Yikes! ;)
>>>>> 
>>>>> Really, though, while I didn't participate or even see any of those prior 
>>>>> conversations about this subject, a couple points occur to me...
>>>>> 
>>>>> First, while you're of course entitled to your opinion, I hope your 
>>>>> blanket dismissal of the subject doesn't discourage others from voicing 
>>>>> theirs.  If the topic annoys you - and it seems to - my apologies.  Not 
>>>>> my intention.
>>>>> 
>>>>> Second, I'd agree that "no one metric can accurately" say anything about 
>>>>> anyone. My "one metric" examples were just given to spur the 
>>>>> conversation. And perhaps others have more nuanced answers that involve 
>>>>> more than one metric and include qualifications.  I'd be interested in 
>>>>> hearing about those.  As a software engineer (my background), one of the 
>>>>> metrics that has been used to judge my work has been "lines of code".  In 
>>>>> and of itself it's not a useful metric, but combined with other factors it 
>>>>> can help provide a broad picture of the performance of different 
>>>>> developers.
>>>>> 
>>>>> Third, having such data doesn't make it bad or "wrong" data; as with any 
>>>>> other data, it depends on how it is used.  If used 
>>>>> constructively, such metrics could, for example, be used to help assess a 
>>>>> developer's strengths and weaknesses with perhaps the goal of 
>>>>> working/educating the developer to shore up those weaknesses.  And while 
>>>>> it's certainly true that information like this can be misused, it doesn't 
>>>>> mean we shouldn't have the conversation.
>>>>> 
>>>>> Fourth, there ARE clear differences in the performance of different 
>>>>> developers.  Sometimes there are very valid reasons to judge the relative 
>>>>> performance of developers.  Sometimes it's because hard choices have to 
>>>>> be made like downsizing.  Is it better in these situations for the 
>>>>> manager to just pick the individual(s) they like the least?  Or who they 
>>>>> *think* are the least productive?  I smell a lawsuit.  Wouldn't hard 
>>>>> metrics be useful in these cases?
>>>>> 
>>>>> Finally, a disclaimer: I don't now, nor in the near future plan to, use 
>>>>> such metrics to evaluate anyone...I don't have anyone to evaluate.  And 
>>>>> while my interest in the topic is more than just idle curiosity, I won't 
>>>>> be using it to fire anyone soon.  For me, this information is more for 
>>>>> research purposes.
>>>>> 
>>>>> Thanks,
>>>>> Charlie
>>>>> 
>>>>> 
>>>>>> On Tue, Jun 3, 2014 at 3:03 PM, LJ LongWing <lj.longw...@gmail.com> 
>>>>>> wrote:
>>>>>> My opinion is that 'performance metrics suck and don't work the way they 
>>>>>> are intended'.  There has been healthy debate over the years regarding 
>>>>>> exactly that subject, and every time it's happened, either on the list 
>>>>>> or otherwise, it ends up being that no one 'metric' can accurately say 
>>>>>> that this developer is doing 'better' than another developer.
>>>>>> 
>>>>>> 
>>>>>>> On Tue, Jun 3, 2014 at 3:46 PM, Charlie Lotridge <lotri...@mcs-sf.com> 
>>>>>>> wrote:
>>>>>>> Hi all,
>>>>>>> 
>>>>>>> I'm curious...what are your opinions about what might be useful metrics 
>>>>>>> to use to judge the performance of Remedy developers?  To narrow the 
>>>>>>> conversation a bit, let's just talk about during the creation of a new 
>>>>>>> custom application, or custom module to an existing application.  In 
>>>>>>> other words for code generation.
>>>>>>> 
>>>>>>> So for example, you might tell me that a good developer can create at 
>>>>>>> least 50 logic objects (active links/filters/escalations) in a day.  Or 
>>>>>>> create & format one form/day.
>>>>>>> 
>>>>>>> What are your opinions?
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Charlie
>>>>>>> _ARSlist: "Where the Answers Are" and have been for 20 years_
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
> 

_______________________________________________________________________________
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
"Where the Answers Are, and have been for 20 years"
