I have a crazy idea that I've always wanted to try out, just for fun. If you don't want to participate, feel free to ignore this thread, since this experiment will be non-binding.

This is not the first time that I've seen disagreement over feature sets and priorities. I'm sure it's happened to all of us. There's a technique that I use to make the issues plain.

When you evaluate a solution, you usually have a set of criteria to judge it against, so each solution gets a score depending on how well it meets each of those criteria.

S_i = sum_j C_j,i
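
To make that concrete, here's a quick Python sketch of the plain, unweighted sum. The criteria, solution names, and numbers are all made up for illustration.

    # Unweighted score: each criterion j gives solution i a score C[j][i],
    # and a solution's score is just the sum over criteria.
    C = {
        "ease of use": {"solution A": 3, "solution B": 5},
        "performance": {"solution A": 4, "solution B": 2},
    }

    def score(solution):
        return sum(scores[solution] for scores in C.values())

    print(score("solution A"))  # 3 + 4 = 7
    print(score("solution B"))  # 5 + 2 = 7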

But of course, there's usually no agreement on how well each solution meets each criterion. What has worked well in the past is to average everyone's assessments.

C_j,i = average_p C_j,i,p
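
In code, the averaging step might look like this (the people, criteria, and numbers are invented):

    # Each person p records their own assessment C[p][j][i]; the group's value
    # for criterion j on solution i is the mean across people.
    from statistics import mean

    assessments = {
        "alice": {"ease of use": {"solution A": 4, "solution B": 5},
                  "performance": {"solution A": 5, "solution B": 1}},
        "bob":   {"ease of use": {"solution A": 2, "solution B": 5},
                  "performance": {"solution A": 3, "solution B": 3}},
    }

    def group_assessment(criterion, solution):
        return mean(p[criterion][solution] for p in assessments.values())

    print(group_assessment("ease of use", "solution A"))   # (4 + 2) / 2 = 3
    print(group_assessment("performance", "solution A"))   # (5 + 3) / 2 = 4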

There's also usually no agreement on which criteria are relevant, so we let everyone submit their own criteria and then weight each one by the average of how relevant people think it is.

W_j = average_p W_j,p
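
Same idea for the weights, again with invented relevance ratings:

    # Each person rates how relevant each criterion is; the weight W_j is the
    # mean of those ratings across people.
    from statistics import mean

    relevance = {
        "alice": {"ease of use": 5, "performance": 2},
        "bob":   {"ease of use": 3, "performance": 4},
    }

    def weight(criterion):
        return mean(p[criterion] for p in relevance.values())

    print(weight("ease of use"))  # (5 + 3) / 2 = 4
    print(weight("performance"))  # (2 + 4) / 2 = 3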

So each solution gets a final score of

S_i = sum_j W_j * C_j,i
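
Putting the pieces together, with the averaged assessments and weights from the toy data above hard-coded for brevity:

    # Final score of solution i: weighted sum over criteria of the group's
    # averaged assessments. C and W hold the toy averages from the sketches above.
    C = {
        "ease of use": {"solution A": 3, "solution B": 5},
        "performance": {"solution A": 4, "solution B": 2},
    }
    W = {"ease of use": 4, "performance": 3}

    def final_score(solution):
        return sum(W[j] * C[j][solution] for j in W)

    print(final_score("solution A"))  # 4*3 + 3*4 = 24
    print(final_score("solution B"))  # 4*5 + 3*2 = 26

Note that once the weights are folded in, solution B comes out ahead even though the plain sums tied in the first sketch.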

It would be interesting to see what we get with regard to logging. Anyone care to try this experiment?


Regards,
Alan
