I'm keen to try it! Sounds interesting.

Paul

On Thu, Jul 17, 2008 at 6:34 PM, Tim Veil <[EMAIL PROTECTED]> wrote:
> I'm up for it.
>
> On Thu, Jul 17, 2008 at 1:28 PM, Alan D. Cabrera <[EMAIL PROTECTED]>
> wrote:
>
>> I have a crazy idea that I have always wanted to try out just for fun.  If
>> you don't want to participate feel free to ignore this thread since this
>> experiment will be non-binding.
>>
>> This is not the first time that I've seen disagreement over feature sets
>> and priorities.  I'm sure it's happened to all of us.  There's a technique
>> that I use to make the issues plain.
>>
>> When we evaluate solutions, we usually have a set of criteria to
>> judge them by, and each solution gets a score depending on how well
>> it meets each criterion.
>>
>> S_i = sum_j C_j,i
>>
>> But of course, there's usually no agreement on how well each
>> solution meets each criterion.  What has worked well in the past is
>> to average everyone's assessments.
>>
>> C_j,i = average_p C_j,i,p
>>
>> There's also usually no agreement on which criteria are relevant,
>> so we let everyone submit their own criteria and then weight each
>> one by the average of how relevant people think it is.
>>
>> W_j = average_p W_j,p
>>
>> So each solution gets an overall score of
>>
>> S_i = sum_j W_j * C_j,i
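>>
>> To make that concrete, here's a quick sketch of the whole
>> computation in Python.  The people, criteria, and numbers below are
>> invented just to show the shape of it:
>>
>>     from statistics import mean
>>
>>     # Each person's assessment of each solution against each
>>     # criterion: assessments[person][criterion][solution] = C_j,i,p
>>     assessments = {
>>         "alice": {"performance": {"log4j": 8, "slf4j": 6},
>>                   "simplicity":  {"log4j": 5, "slf4j": 9}},
>>         "bob":   {"performance": {"log4j": 7, "slf4j": 7},
>>                   "simplicity":  {"log4j": 4, "slf4j": 8}},
>>     }
>>     # Each person's view of how relevant each criterion is: W_j,p
>>     weights = {
>>         "alice": {"performance": 0.9, "simplicity": 0.6},
>>         "bob":   {"performance": 0.7, "simplicity": 0.8},
>>     }
>>
>>     people = list(assessments)
>>     criteria = list(weights[people[0]])
>>     solutions = list(assessments[people[0]][criteria[0]])
>>
>>     # C_j,i = average_p C_j,i,p
>>     C = {j: {i: mean(assessments[p][j][i] for p in people)
>>              for i in solutions} for j in criteria}
>>
>>     # W_j = average_p W_j,p
>>     W = {j: mean(weights[p][j] for p in people) for j in criteria}
>>
>>     # S_i = sum_j W_j * C_j,i
>>     S = {i: sum(W[j] * C[j][i] for j in criteria) for i in solutions}
>>
>>     # Rank the solutions by their weighted scores, highest first.
>>     for i, score in sorted(S.items(), key=lambda kv: -kv[1]):
>>         print(f"{i}: {score:.2f}")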
>>
>> It would be interesting to see what we get with regard to logging.
>> Anyone care to try this experiment?
>>
>>
>> Regards,
>> Alan
>>
>>
>



-- 
Paul Fremantle
Co-Founder and CTO, WSO2
Apache Synapse PMC Chair
OASIS WS-RX TC Co-chair

blog: http://pzf.fremantle.org
[EMAIL PROTECTED]

"Oxygenating the Web Service Platform", www.wso2.com
