Re: bodhi statistics

2010-06-10 Thread Josh Boyer
On Thu, Jun 10, 2010 at 04:59:30AM +0200, Kevin Kofler wrote:
Luke Macken wrote:
 Neither of you have mentioned your definition of the word success.
 Care to enlighten us?

Success is the achievement of a worthwhile goal. If the original goal which 
was set is worthless, succeeding at it is meaningless.

There are a number of people who disagree with you about the value of that goal,
including most of the past and current FESCo.

It is very clear you don't agree with the goal.  That in and of itself does
not make it worthless.  Luke's changes to bodhi are also not meaningless.  He
is implementing towards the goal/process he was given and he's doing it rather
well.

You are devaluing the work of a Fedora contributor and being generally
poisonous.  I suggest you stop this particular line of attack.

josh
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: bodhi statistics

2010-06-10 Thread Kevin Kofler
Josh Boyer wrote:
 There are a number of people who disagree with you about the value of that goal,
 including most of the past and current FESCo.

You conveniently ignored the part of the mail where I pointed out WHY that 
goal is broken. Why are we trying to emulate the failed Fedora Legacy 
process rather than the successful Fedora Extras one? Why can't we learn 
from our past?

Kevin Kofler



Re: bodhi statistics

2010-06-10 Thread Josh Boyer
On Thu, Jun 10, 2010 at 05:10:36PM +0200, Kevin Kofler wrote:
Josh Boyer wrote:
 There are a number of people who disagree with you about the value of that goal,
 including most of the past and current FESCo.

You conveniently ignored the part of the mail where I pointed out WHY that 
goal is broken.

No, that was intentional.  I was commenting directly on your abuse of someone
who was _told_ to implement towards a specific goal.  Your questioning of that
goal, while personally annoying to me, is certainly acceptable.  Your assertion
that someone's hard work and time on an implementation of that goal is
worthless and meaningless is not.  It's hurtful, poisonous and entirely
unwelcome.

josh


Re: bodhi statistics

2010-06-10 Thread Emmanuel Seyman
* Kevin Kofler [10/06/2010 17:21] :

 Why are we trying to emulate the failed Fedora Legacy 
 process rather than the successful Fedora Extras one? Why can't we learn 
 from our past?

I think you're rather quick to conclude that Extras succeeded because it
allowed maintainers to push to stable directly and that Legacy failed for
lack of the same.

Others have given other reasons for Legacy's failure (ranging from lack
of infrastructure to lack of community involvement to conspiracy theories
involving Red Hat), so if stringent QA requirements really were the cause
of Legacy's failure, it's probably best to demonstrate this before using
it as an argument in your assertions.

Emmanuel



Re: bodhi statistics

2010-06-10 Thread Kevin Fenzi
On Thu, 10 Jun 2010 04:59:30 +0200
Kevin Kofler kevin.kof...@chello.at wrote:

  Now, if the policies that are being approved do not actually
  benefit the greater good of the community, we have bigger problems.
 
 Yet that's exactly the problem we're having. :-(

snip...

 So why are we now going to use the Fedora Legacy model over the
 Fedora Extras one? Does FESCo really want Fedora to fail?

I think you are presenting a false dichotomy here. 

There are many reasons why the current Fedora is dissimilar to both
Fedora Extras and Fedora Legacy. I think it's worth trying to give our
users a more stable updates stream. If it turns out that we don't get
this from the things we are trying to do, we will learn from it and try
something else. 

I think we have had this discussion before, so I will leave it at that. 

kevin



Re: bodhi statistics

2010-06-09 Thread Luke Macken
On Tue, 2010-06-08 at 16:51 -0400, Luke Macken wrote:
 
  Fedora 13
 
 
  * 231 updates automatically pushed due to karma (6.49%)
* 2 of which were critical path updates

So I thought that this last metric would take into account critical path
updates that were approved under the new policy, but it did not.  The
actual metric for Fedora 13 is:

  * 333 critical path updates approved




Re: bodhi statistics

2010-06-09 Thread Luke Macken
On Tue, 2010-06-08 at 21:20 -0400, Matthias Clasen wrote:
 On Tue, 2010-06-08 at 16:51 -0400, Luke Macken wrote:
 
  
   Fedora 13
  
  
   * 3562 updates
   * 3065 stable updates
   * 427 testing updates
   * 62 pending updates
   * 8 obsolete updates
 
 Hey Luke, 
 
 are these the numbers of F13 updates since we branched, or since the F13
 release ? The numbers certainly look like the former, but I think the
 latter would be much more interesting.

These numbers are since we branched for F13.

luke



Re: bodhi statistics

2010-06-09 Thread Kevin Kofler
Luke Macken wrote:
 By success I mean that I felt we were successful in drafting,
 implementing, deploying, and utilizing the mentioned policies as
 expected, and the results show increased community engagement.

This definition of success does not match mine nor the one you'll find in 
a dictionary. So your terminology is misleading.

Kevin Kofler



Re: bodhi statistics

2010-06-09 Thread Luke Macken
On Wed, 2010-06-09 at 08:38 +0200, Kevin Kofler wrote:
 Luke Macken wrote:
  By success I mean that I felt we were successful in drafting,
  implementing, deploying, and utilizing the mentioned policies as
  expected, and the results show increased community engagement.
 
 This definition of success does not match mine nor the one you'll find in 
 a dictionary. So your terminology is misleading.

Really, Kevin?  We're digressing to a dictionary battle?

Fine, I'll play.  First definition in the dictionary[0]: "an event that
accomplishes its intended purpose."

...which is exactly what I meant to begin with.

luke

[0]: http://www.google.com/search?q=define%3Asuccess



Re: bodhi statistics

2010-06-09 Thread Ralf Corsepius
On 06/09/2010 08:54 AM, Luke Macken wrote:
 On Wed, 2010-06-09 at 08:38 +0200, Kevin Kofler wrote:
 Luke Macken wrote:
 By success I mean that I felt we were successful in drafting,
 implementing, deploying, and utilizing the mentioned policies as
 expected, and the results show increased community engagement.

 This definition of success does not match mine nor the one you'll find in
 a dictionary. So your terminology is misleading.

 Really, Kevin?  We're digressing to a dictionary battle?

 Fine, I'll play.  First definition in the dictionary[0]: "an event that
 accomplishes its intended purpose."
Exactly. Your definition differs from Kevin's (and mine).

 ...which is exactly what I meant to begin with.

To me, your definition of success is compliance with *your* process.

Whether this process is suitable to improve package quality, whether the 
technical system behind it is a good approach, and whether your approach 
actually improves package quality or is mere bureaucracy is highly 
questionable.

That said, all you have demonstrated is that your system is not entirely 
broken; I don't see any success related to QA in your statistics.

Ralf



Re: bodhi statistics

2010-06-09 Thread Marcela Mašláňová
On 06/08/2010 10:51 PM, Luke Macken wrote:
 I recently wrote some code to generate detailed statistics of Fedora & EPEL 
 updates within bodhi. Eventually this will be auto-generated and exposed 
 within bodhi itself, but for now here are the initial metrics.

 This report definitely conveys the shortcomings in our testing, however, it 
 does show us improving with each release. For Fedora 13, we implemented the 
 No Frozen Rawhide process with improved Critical Path policies, which were 
 definitely a success. With these enhanced procedures, along with the upcoming 
 implementation of AutoQA and the new Package update acceptance criteria 
 (https://fedoraproject.org/wiki/Package_update_acceptance_criteria), I think 
 we'll see these numbers drastically improve in the future.

I can't agree that this update policy is a success (in any dictionary). 
Since I'm forced to wait two weeks before pushing into stable, I have more 
tickets about packages that I've already fixed. Users want fixes 
immediately; they are not interested in processes. Many users don't even 
have a FAS account, and they want fixes and updates now, not in the next 
release the way enterprise solutions offer them.
Another thing: I'm forgetting packages in bodhi, because until yesterday I 
believed that updates would be pushed to stable automatically after two 
weeks. I'm not sure whether the policy changed or it's a bug [1].

Karma on bodhi updates doesn't say anything about the quality of the 
updates. At least, my packages that have more dependencies are pushed 
without testing, while less important ones are tested even though the 
testers don't know what they do [2]. For example, here the testers were 
happy because the package updated without any warning message, but they 
don't know whether it actually works.

I'm looking forward to a QA project that could really improve quality - 
one where testers actually reproduce the bugs in bugzilla before the 
update, not after. Otherwise bodhi is only good for checking that an 
update installs with yum without errors.

Best regards,
Marcela

[1] https://admin.fedoraproject.org/updates/perl-LDAP-0.40-1.fc13
[2] https://admin.fedoraproject.org/updates/perl-version-0.82-1.fc13


Re: bodhi statistics

2010-06-09 Thread Michael Schwendt
On Tue, 8 Jun 2010 16:51:36 -0400 (EDT), Luke wrote:

 =
 Bodhi Statistics Report (Generated on June 8th, 2010)
 =
 
 Out of 17412 total updates, 2958 received feedback (16.99%)
 Out of 1045 total unique karma submitters, the top 30 are:

Just for the record and some fairness, the top Fedora 11 testers
according to https://admin.fedoraproject.org/updates/metrics/?release=F11

  alexlan
  mschwendt
  mrunge
  dsd
  dcottle
  lmacken
  mooninite
  cwickert

from the days before the fedora-easy-karma script was available to make
it possible to mass-vote on updates.


Re: bodhi statistics

2010-06-09 Thread Till Maas
On Tue, Jun 08, 2010 at 04:51:36PM -0400, Luke Macken wrote:
 You can find the code that generates these statistics here: 
 https://fedorahosted.org/bodhi/browser/bodhi/tools/metrics.py 
 https://fedorahosted.org/bodhi/browser/bodhi/tools/log_stats.py. If you have 
 any ideas or suggestions for different types of metrics to generate, or if 
 you find any bugs in my code, please let me know. 

  * 595 unique authenticated karma submitters

I think it might be interesting to calculate this number for different
timeframes, e.g. last day, last week, last month, since it was released,
since F(N+1) was released. Maybe also the number of karma submitters
that submitted at least X comments for each week the release is
available. This would imho allow a more precise estimate of how many
active testers there are.
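A minimal sketch of the per-timeframe counting described above. The (username, timestamp) comment tuples here are a hypothetical shape chosen for illustration; bodhi's real data model differs.

```python
from datetime import datetime, timedelta

def unique_submitters_since(comments, now, delta):
    """Users who submitted at least one karma comment within the last `delta`."""
    return {user for user, ts in comments if now - ts <= delta}

# Tiny fabricated sample of karma comments.
now = datetime(2010, 6, 9)
comments = [
    ("alice", datetime(2010, 6, 8)),   # 1 day ago
    ("bob",   datetime(2010, 6, 1)),   # 8 days ago
    ("alice", datetime(2010, 5, 1)),   # 39 days ago
]
for label, delta in [("last day", timedelta(days=1)),
                     ("last week", timedelta(days=7)),
                     ("last month", timedelta(days=30))]:
    print(label, len(unique_submitters_since(comments, now, delta)))
```

Running the same set through successively wider windows, as sketched, would give the trend of active-tester counts per release.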

Regards
Till



Re: bodhi statistics

2010-06-09 Thread Josh Boyer
On Wed, Jun 09, 2010 at 10:58:15AM +0530, Rahul Sundaram wrote:
On 06/09/2010 10:48 AM, Kevin Kofler wrote:
 Stephen John Smoogen wrote:
   
 Well the only person I see mentioning quality is Kevin. And for some
 reason he is expecting it immediately
 
 You can't claim that there are "shortcomings in our testing" (the exact 
 words Luke used!) without a metric of quality. A shortcoming in testing 
 means less testing was done than needed. If the packages have good quality, 
 that means more testing is NOT needed, no matter what the actual amount of 
 testing was.
   

Packages without any testing are of unknown quality.  If someone doesn't
test them before pushing, users are unknowingly playing that role. 

That is true if you are making the assumption that the package maintainer did
no testing themselves.  I would hope that isn't the common case.

josh


Re: bodhi statistics

2010-06-09 Thread Josh Boyer
On Wed, Jun 09, 2010 at 09:51:59AM +0200, Michael Schwendt wrote:
On Tue, 8 Jun 2010 16:51:36 -0400 (EDT), Luke wrote:

 =
 Bodhi Statistics Report (Generated on June 8th, 2010)
 =
 
 Out of 17412 total updates, 2958 received feedback (16.99%)
 Out of 1045 total unique karma submitters, the top 30 are:

Just for the record and some fairness, the top Fedora 11 testers
according to https://admin.fedoraproject.org/updates/metrics/?release=F11

  alexlan
  mschwendt
  mrunge
  dsd
  dcottle
  lmacken
  mooninite
  cwickert

from the days before the fedora-easy-karma script was available to make
it possible to mass-vote on updates.

Thanks for this.  I was wondering about exactly that.

josh


Re: bodhi statistics

2010-06-09 Thread Rahul Sundaram
On 06/09/2010 05:12 PM, Josh Boyer wrote:
 That is true if you are making the assumption that the package maintainer did
 no testing themselves.  I would hope that isn't the common case.
   

There are well known cases of that happening.  Kevin has been public
about his position on that.   Hopefully we can automatically catch and
prevent the obvious breakages soon.

Rahul


Re: bodhi statistics

2010-06-09 Thread Kevin Kofler
On Wednesday 09 June 2010, Rahul Sundaram wrote:
 There are well known cases of that happening.  Kevin has been public
 about his position on that.   Hopefully we can automatically catch and
 prevent the obvious breakages soon.

For the record, I did test (on one release) the latest bunch of updates I 
pushed before pushing them out to stable. But sometimes a fix is very urgent 
and trivial, so I'll just take anyone's word that it works (or, if the change 
is really trivial, e.g. a fixed Requires which can be verified to be satisfied 
just by looking at the Rawhide report and noticing there's no broken 
dependency, not even that) and issue it directly to stable. I think that's a 
good thing, and the new process which will make this a PITA is going to 
degrade our overall package quality by delaying urgent fixes. And for stuff 
like the KDE updates, I know many people, including other KDE SIG members, 
test them, so I usually don't spend my time testing those updates as well; the 
testing feedback we get is largely sufficient.

Kevin Kofler


Re: bodhi statistics

2010-06-09 Thread Kevin Kofler
Marcela Mašláňová wrote:
 I can't agree that this update policy is a success (in any dictionary).
 Since I'm forced to wait two weeks before pushing into stable, I have more
 tickets about packages that I've already fixed. Users want fixes
 immediately; they are not interested in processes. Many users don't even
 have a FAS account, and they want fixes and updates now, not in the next
 release the way enterprise solutions offer them.

Exactly my point. But FESCo just won't listen. :-(

Kevin Kofler


Re: bodhi statistics

2010-06-09 Thread Seth Vidal



On Wed, 9 Jun 2010, Kevin Kofler wrote:


Marcela Mašláňová wrote:

I can't agree that this update policy is a success (in any dictionary).
Since I'm forced to wait two weeks before pushing into stable, I have more
tickets about packages that I've already fixed. Users want fixes
immediately; they are not interested in processes. Many users don't even
have a FAS account, and they want fixes and updates now, not in the next
release the way enterprise solutions offer them.


Exactly my point. But FESCo just won't listen. :-(



FESCo listened. They did not agree with you.

-sv

Re: bodhi statistics

2010-06-09 Thread Luke Macken
On Wed, 2010-06-09 at 09:35 +0200, Marcela Mašláňová wrote:
 On 06/08/2010 10:51 PM, Luke Macken wrote:
  I recently wrote some code to generate detailed statistics of Fedora & EPEL 
  updates within bodhi. Eventually this will be auto-generated and exposed 
  within bodhi itself, but for now here are the initial metrics.
 
  This report definitely conveys the shortcomings in our testing, however, it 
  does show us improving with each release. For Fedora 13, we implemented the 
  No Frozen Rawhide process with improved Critical Path policies, which were 
  definitely a success. With these enhanced procedures, along with the 
  upcoming implementation of AutoQA and the new Package update acceptance 
  criteria 
  (https://fedoraproject.org/wiki/Package_update_acceptance_criteria), I 
  think we'll see these numbers drastically improve in the future.
 

 I can't agree that this update policy is a success (in any dictionary). 

If you can't agree that we, as a community, were successful in
accomplishing our intended purpose of implementing the No Frozen Rawhide
& Critical Path package processes that we drafted, then you're not
understanding the use of the word.

Again, the update policy that is currently rubber-stamped does not
match what is enforced by the process.  The package update acceptance
criteria[0] have been approved, but not yet implemented.  So, if you
feel like you have suggestions or some sort of constructive criticism to
offer, I recommend bringing them up with FESCo.

[0]: https://fedoraproject.org/wiki/Package_update_acceptance_criteria

 Since I'm forced to wait two weeks before pushing into stable

For EPEL updates, yes, that is their policy.  For Fedora, there is
nothing stopping you from pushing your non-critpath updates straight to
stable.

 Another thing: I'm forgetting packages in bodhi, because until yesterday
 I believed that updates would be pushed automatically after two weeks.
 I'm not sure whether the policy changed or it's a bug [1].

Bodhi has never done this, ever.

According to the new acceptance criteria, updates will have to spend
some minimum amount of time in updates-testing, currently one week.
Now, as to whether or not bodhi should auto-push after that week,
I'm not quite sure.
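The criteria described above can be sketched as a simple eligibility check: an update may be considered for stable once it either reaches a karma threshold or has spent the minimum time in updates-testing. This is an illustrative sketch only; the karma threshold of 3 is an assumption, not a quote of the actual policy or of bodhi's implementation.

```python
# Minimum time in updates-testing per the draft criteria ("currently one week").
MIN_DAYS_IN_TESTING = 7

def eligible_for_stable(days_in_testing, karma, karma_threshold=3):
    """True if the update may be pushed to stable under the sketched policy.

    karma_threshold is a hypothetical default, not the real policy value.
    """
    return karma >= karma_threshold or days_in_testing >= MIN_DAYS_IN_TESTING

print(eligible_for_stable(days_in_testing=2, karma=3))  # True (karma path)
print(eligible_for_stable(days_in_testing=7, karma=0))  # True (timeout path)
print(eligible_for_stable(days_in_testing=2, karma=1))  # False
```

Whether the timeout path should trigger an automatic push, as discussed above, is a separate question from whether the update is merely eligible.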

 Karma on bodhi updates doesn't say anything about the quality of the
 updates. At least, my packages that have more dependencies are pushed
 without testing, while less important ones are tested even though the
 testers don't know what they do [2]. For example, here the testers were
 happy because the package updated without any warning message, but they
 don't know whether it actually works.

Right, karma does not equate to quality.  I don't think anyone ever claimed
that it did?

 I'm looking forward to a QA project that could really improve quality -
 one where testers actually reproduce the bugs in bugzilla before the
 update, not after. Otherwise bodhi is only good for checking that an
 update installs with yum without errors.

Yes, I'm looking forward to AutoQA as well.  We all are.

The majority of karma is "this ran without exploding", which is far from
sufficient testing, but it's still valuable information.

luke


Re: bodhi statistics

2010-06-09 Thread Adam Miller
Luke's dictionary is more correct than yours.

Does anyone else see how horrid that line I just wrote sounded in your head
when you read it? That's what this thread sounds like. Did we really need to
take some raw numbers that Luke was kind enough to put together and turn them
into some sort of QA-methods holy war?

-AdamM (From Android)

On Jun 9, 2010 2:11 AM, Ralf Corsepius rc040...@freenet.de wrote:

On 06/09/2010 08:54 AM, Luke Macken wrote:
 On Wed, 2010-06-09 at 08:38 +0200, Kevin Kofler wrote:
...
Exactly. Your definition differs from Kevin's (and mine).


 ...which is exactly what I meant to begin with.
To me, your definition of success is compliance with *your* process.

Whether this process is suitable to improve package quality, whether the
technical system behind it is a good approach, and whether your approach
actually improves package quality or is mere bureaucracy is highly
questionable.

That said, all you have demonstrated is that your system is not entirely
broken; I don't see any success related to QA in your statistics.

Ralf



Re: bodhi statistics

2010-06-09 Thread Luke Macken
On Wed, 2010-06-09 at 09:10 +0200, Ralf Corsepius wrote:
 On 06/09/2010 08:54 AM, Luke Macken wrote:
  On Wed, 2010-06-09 at 08:38 +0200, Kevin Kofler wrote:
  Luke Macken wrote:
  By success I mean that I felt we were successful in drafting,
  implementing, deploying, and utilizing the mentioned policies as
  expected, and the results show increased community engagement.
 
  This definition of success does not match mine nor the one you'll find in
  a dictionary. So your terminology is misleading.
 
  Really, Kevin?  We're digressing to a dictionary battle?
 
  Fine, I'll play.  First definition in the dictionary[0]: "an event that
  accomplishes its intended purpose."
 Exactly. Your definition differs from Kevin's (and mine).

Neither of you has mentioned your definition of the word "success".
Care to enlighten us?

  ...which is exactly what I meant to begin with.
 
 To me, your definition of success is compliance with *your* process.

If by "your process" you mean the processes created by the Fedora
Community.  I have had almost no say in the new updates criteria, nor
am I on any rubber-stamping committee to approve them.  I am, however, one
of the *few* developers who actually works on bodhi, and I have a vested
interest in improving it for the greater good of the community.  Now,
if the policies that are being approved do not actually benefit the
greater good of the community, we have bigger problems.

 Whether this process is suitable to improve package quality, whether the
 technical system behind it is a good approach, and whether your approach
 actually improves package quality or is mere bureaucracy is highly
 questionable.

Yes, all of those are highly questionable, with regard to this quality
metric.

To improve a process we must first observe how it is currently being
utilized.

What comes out of bodhi is what the maintainers put into it.
Aside from that, we've essentially been using it to crowd-source
QA.  As expected, this is far from perfect, but it's a start until we
have code in place that can perform rigorous and comprehensive testing.

 That said, all you have demonstrated is that your system is not entirely
 broken; I don't see any success related to QA in your statistics.

The numbers show an increase in community interaction.  More eyes are
looking at the updates and providing feedback.  Considering we're
crowd-sourcing QA, an increase in participation is called a success.

luke



Re: bodhi statistics

2010-06-09 Thread Jeff Spaleta
On Wed, Jun 9, 2010 at 8:31 AM, Adam Miller
maxamill...@fedoraproject.org wrote:
 Did we really need to
 take some raw numbers that Luke was kind enough to put together and make it
 into some sort of QA methods holy war?

The lesson here is that for data mining to make sense there must be a
consensus understanding of the question being asked.  I'm not sure we
have that here.  I think the numbers are interesting in showing that
bodhi can be used as a metrics tool, but I'm not sure we have
consensus about what questions we should be asking it.  The strong
reaction to the interpretive prose is just a symptom of that.  Luke is
asking bodhi questions about trends in participation.  I think some
people have misread that as a surrogate for a broader question about
overall package quality.

-jef


Re: bodhi statistics

2010-06-09 Thread Przemek Klosowski
On 06/09/2010 01:18 AM, Kevin Kofler wrote:
 If the packages have good quality, that means more testing is NOT
 needed, no matter what the actual amount of testing was.

Apart from the Bodhi issue, I disagree with the logic of your statement. 
Quality doesn't exist (or at least is not provable) in the absence of 
testing; as the philosopher DeNiro says in Ronin, "If there is any 
doubt, there is no doubt."

On the contrary, there's a testing philosophy that says that if your 
tests start showing fewer faults, you should _increase_ and diversify 
testing, because you have simply exhausted the previous test coverage profile.


Re: bodhi statistics

2010-06-09 Thread Michael Schwendt
On Wed, 09 Jun 2010 12:03:50 -0400, Luke wrote:

 According to the new acceptance criteria, updates will have to spend
 some minimum amount of time in updates-testing, currently one week.
 Now, as to whether or not bodhi should auto-push after that week,
 I'm not quite sure.

Rest assured that you are NOT expected to auto-push anything unless
packagers can opt in to or out of such a process.



Re: bodhi statistics

2010-06-09 Thread Adam Williamson
On Tue, 2010-06-08 at 16:51 -0400, Luke Macken wrote:
 I recently wrote some code to generate detailed statistics of Fedora & EPEL 
 updates within bodhi. Eventually this will be auto-generated and exposed 
 within bodhi itself, but for now here are the initial metrics.
 
 This report definitely conveys the shortcomings in our testing, however, it 
 does show us improving with each release. For Fedora 13, we implemented the 
 No Frozen Rawhide process with improved Critical Path policies, which were 
 definitely a success. With these enhanced procedures, along with the upcoming 
 implementation of AutoQA and the new Package update acceptance criteria 
 (https://fedoraproject.org/wiki/Package_update_acceptance_criteria), I think 
 we'll see these numbers drastically improve in the future.
 
 You can find the code that generates these statistics here: 
 https://fedorahosted.org/bodhi/browser/bodhi/tools/metrics.py 
 https://fedorahosted.org/bodhi/browser/bodhi/tools/log_stats.py. If you have 
 any ideas or suggestions for different types of metrics to generate, or if 
 you find any bugs in my code, please let me know. 

Great stuff Luke, very interesting, and pleasing to see the improvements
in the f13 process in cold hard numbers - thanks!
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org
http://www.happyassassin.net



Re: bodhi statistics

2010-06-09 Thread Adam Williamson
On Wed, 2010-06-09 at 09:51 +0200, Michael Schwendt wrote:

 from times when the fedora-easy-karma script wasn't available and
 didn't make it possible to mass-vote on updates.

easy-karma doesn't allow you to mass-vote; you still have to vote on
each update individually. It simply streamlines the process somewhat, so
you don't spend most of your time poking through the Bodhi web interface.

just thought I'd note it...


Re: bodhi statistics

2010-06-09 Thread Kevin Kofler
Luke Macken wrote:
 Neither of you have mentioned your definition of the word success.
 Care to enlighten us?

Success is the achievement of a worthwhile goal. If the original goal which 
was set is worthless, succeeding at it is meaningless.

 Now, if the policies that are being approved do not actually benefit the
 greater good of the community, we have bigger problems.

Yet that's exactly the problem we're having. :-(

We're being pushed towards more and more bureaucracy such as FORCED use of 
updates-testing without drawing any lessons from our project history:
* Fedora Extras had no testing requirements, in fact it didn't even HAVE a 
testing repository. All builds were pushed directly to the stable 
repository. It worked great, to the point where Core merged with it.
* Fedora Legacy had stringent QA requirements very similar to the ones which 
are about to be enforced now: updates could not move out of testing without 
either a minimum amount of positive feedback or a timeout (which had to be 
introduced because otherwise packages would never move out of testing). 
Feedback was counted separately for each distro version, just as our new 
policy will do it, which made it nearly impossible to get the required 
positive feedback for some releases. Over time, the amount of required 
positive feedback and the timeout had to be reduced several times because 
the system was just not working. In the end, Fedora Legacy failed because 
it was impossible to deliver security updates in a timely manner under that 
kind of QA requirement.

So why are we now going to use the Fedora Legacy model over the Fedora 
Extras one? Does FESCo really want Fedora to fail?

Kevin Kofler



Re: bodhi statistics

2010-06-09 Thread Kevin Kofler
Adam Williamson wrote:
 Users also want regressions not to happen

This is exactly why we need quick fixes, i.e. direct stable pushes: to be 
able to push a fixed update IMMEDIATELY if somebody caught a regression.

Kevin Kofler



Re: bodhi statistics

2010-06-09 Thread Adam Williamson
On Thu, 2010-06-10 at 05:03 +0200, Kevin Kofler wrote:
 Adam Williamson wrote:
  Users also want regressions not to happen
 
 This is exactly why we need quick fixes, i.e. direct stable pushes: to be 
 able to push a fixed update IMMEDIATELY if somebody caught a regression.

We danced that tango last time, I'm not getting baited into it again.


bodhi statistics

2010-06-08 Thread Luke Macken
I recently wrote some code to generate detailed statistics of Fedora & EPEL 
updates within bodhi. Eventually this will be auto-generated and exposed within 
bodhi itself, but for now here are the initial metrics.

This report definitely conveys the shortcomings in our testing; however, it 
does show us improving with each release. For Fedora 13, we implemented the No 
Frozen Rawhide process with improved Critical Path policies, which were 
definitely a success. With these enhanced procedures, along with the upcoming 
implementation of AutoQA and the new Package update acceptance criteria 
(https://fedoraproject.org/wiki/Package_update_acceptance_criteria), I think 
we'll see these numbers drastically improve in the future.

You can find the code that generates these statistics here:
https://fedorahosted.org/bodhi/browser/bodhi/tools/metrics.py
https://fedorahosted.org/bodhi/browser/bodhi/tools/log_stats.py
If you have 
any ideas or suggestions for different types of metrics to generate, or if you 
find any bugs in my code, please let me know. 
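
[For readers curious how headline numbers like the ones below are derived,
here is a minimal sketch. It is not bodhi's actual metrics.py; the dict keys
and the function name are assumptions made for the example.]

```python
from statistics import mean, median, mode

def summarize(updates):
    """Compute report-style headline numbers from a list of updates.

    `updates` is a list of dicts with (hypothetical) keys:
      'karma_comments'  - number of +1/-1 comments on the update
      'days_in_testing' - whole days the update spent in updates-testing
    """
    total = len(updates)
    with_feedback = sum(1 for u in updates if u["karma_comments"] > 0)
    days = [u["days_in_testing"] for u in updates]
    return {
        "feedback_pct": round(100.0 * with_feedback / total, 2),
        "mean_days": round(mean(days)),
        "median_days": round(median(days)),
        "mode_days": mode(days),
    }
```

For example, four toy updates with testing times of 7, 7, 13 and 9 days, two
of which received karma comments, would yield a 50% feedback rate with a mean
of 9, median of 8 and mode of 7 days in testing.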

luke

=
Bodhi Statistics Report (Generated on June 8th, 2010)
=

Out of 17412 total updates, 2958 received feedback (16.99%)
Out of 1045 total unique karma submitters, the top 30 are:
 * notting (424)
 * mclasen (366)
 * jkeating (321)
 * adamwill (283)
 * cwickert (161)
 * rdieter (159)
 * pbrobinson (141)
 * kevin (141)
 * cweyl (122)
 * tomspur (119)
 * mtasaka (110)
 * xake (97)
 * cschwangler (86)
 * kwright (84)
 * peter (83)
 * hadess (80)
 * michich (72)
 * tagoh (69)
 * pfrields (69)
 * bpepple (69)
 * iarnell (68)
 * lkundrak (66)
 * shinobi (65)
 * sundaram (64)
 * spot (62)
 * pravins (62)
 * markmc (62)
 * thomasj (61)
 * smooge (60)
 * fab (59)


 Fedora 13


 * 3562 updates
 * 3065 stable updates
 * 427 testing updates
 * 62 pending updates
 * 8 obsolete updates
 * 2371 bugfix updates (66.56%)
 * 745 enhancement updates (20.92%)
 * 89 security updates (2.50%)
 * 357 newpackage updates (10.02%)
 * 410 critical path updates (11.51%)
 * 1155 updates received feedback (32.43%)
 * 12120 +0 comments
 * 2477 +1 comments
 * 155 -1 comments
 * 595 unique authenticated karma submitters
 * 133 anonymous users gave feedback (1.57%)
 * 2261 out of 3562 updates went through testing (63.48%)
 * 1317 testing updates were pushed *without* karma (58.25%)
 * 21 critical path updates pushed *without* karma
 * Time spent in testing:
   * mean = 11 days
   * median = 9 days
   * mode = 7 days
 * 4 updates automatically unpushed due to karma (0.11%)
   * 0 of which were critical path updates
 * 231 updates automatically pushed due to karma (6.49%)
   * 2 of which were critical path updates
 * Time spent in testing of updates that were pushed by karma:
   * mean = 11 days
   * median = 7 days
   * mode = 7 days
 * Time spent in testing of updates that were unpushed by karma:
   * mean = 9 days
   * median = 5 days
   * mode = 5 days
 * 2445 packages updated (top 10 shown)
* selinux-policy: 13
* jd: 12
* openoffice.org: 12
* gdb: 12
* ibus-pinyin: 11
* nautilus: 10
* kernel: 10
* evolution: 9
* libfm: 9
* libmx: 9


 Fedora 12


 * 4844 updates
 * 4291 stable updates
 * 371 testing updates
 * 113 pending updates
 * 69 obsolete updates
 * 2905 bugfix updates (59.97%)
 * 1054 enhancement updates (21.76%)
 * 201 security updates (4.15%)
 * 684 newpackage updates (14.12%)
 * 407 critical path updates (8.40%)
 * 960 updates received feedback (19.82%)
 * 16311 +0 comments
 * 1899 +1 comments
 * 554 -1 comments
 * 758 unique authenticated karma submitters
 * 576 anonymous users gave feedback (5.33%)
 * 2873 out of 4844 updates went through testing (59.31%)
 * 2138 testing updates were pushed *without* karma (74.42%)
 * 188 critical path updates pushed *without* karma
 * Time spent in testing:
   * mean = 14 days
   * median = 13 days
   * mode = 17 days
 * 12 updates automatically unpushed due to karma (0.25%)
   * 4 of which were critical path updates
 * 133 updates automatically pushed due to karma (2.75%)
   * 13 of which were critical path updates
 * Time spent in testing of updates that were pushed by karma:
   * mean = 11 days
   * median = 7 days
   * mode = 7 days
 * Time spent in testing of updates that were unpushed by karma:
   * mean = 9 days
   * median = 5 days
   * mode = 5 days
 * 2902 packages updated (top 10 shown)
* qbittorrent: 25
* gdb: 25
* selinux-policy: 22
* kernel: 15
* xorg-x11-server: 14
* ibus: 13
* jd: 13
* abrt: 11
* gvfs: 11
* gtk2: 11

Re: bodhi statistics

2010-06-08 Thread Kevin Kofler
Luke Macken wrote:
 This report definitely conveys the shortcomings in our testing, however,
 it does show us improving with each release. For Fedora 13, we implemented
 the No Frozen Rawhide process with improved Critical Path policies, which
 were definitely a success. With these enhanced procedures, along with the
 upcoming implementation of AutoQA and the new Package update acceptance
 criteria
 (https://fedoraproject.org/wiki/Package_update_acceptance_criteria), I
 think we'll see these numbers drastically improve in the future.

Only because those numbers are tailored towards that very process (they 
measure the exact same things that process is going to enforce) and do not 
reflect the actual quality of the packages in any way.

You can make practically anything a success by measuring the very symptoms of 
the process and calling them a metric of quality.

The reasons for which Bodhi karma (especially in its current incarnation) is 
a completely broken indicator of quality have been pointed out in several 
past threads.

Kevin Kofler



Re: bodhi statistics

2010-06-08 Thread Brandon Lozza
On Tue, Jun 8, 2010 at 7:46 PM, Kevin Kofler kevin.kof...@chello.at wrote:
 Luke Macken wrote:
 This report definitely conveys the shortcomings in our testing, however,
 it does show us improving with each release. For Fedora 13, we implemented
 the No Frozen Rawhide process with improved Critical Path policies, which
 were definitely a success. With these enhanced procedures, along with the
 upcoming implementation of AutoQA and the new Package update acceptance
 criteria
 (https://fedoraproject.org/wiki/Package_update_acceptance_criteria), I
 think we'll see these numbers drastically improve in the future.

 Only because those numbers are tailored towards that very process (they
 measure the exact same things that process is going to enforce) and do not
 reflect the actual quality of the packages in any way.

 You can make really anything a success by measuring the very symptoms of
 the process and calling them a metric of quality.

 The reasons for which Bodhi karma (especially in its current incarnation) is
 a completely broken indicator of quality have been pointed out in several
 past threads.

        Kevin Kofler



I'll have to agree with Kevin. I can't see how any of those numbers
represent the quality of anything.


Re: bodhi statistics

2010-06-08 Thread Matthias Clasen
On Tue, 2010-06-08 at 16:51 -0400, Luke Macken wrote:

 
  Fedora 13
 
 
  * 3562 updates
  * 3065 stable updates
  * 427 testing updates
  * 62 pending updates
  * 8 obsolete updates

Hey Luke, 

are these the numbers of F13 updates since we branched, or since the F13
release? The numbers certainly look like the former, but I think the
latter would be much more interesting.


Matthias



Re: bodhi statistics

2010-06-08 Thread Stephen John Smoogen
On Tue, Jun 8, 2010 at 6:41 PM, Brandon Lozza bran...@pwnage.ca wrote:
 On Tue, Jun 8, 2010 at 7:46 PM, Kevin Kofler kevin.kof...@chello.at wrote:
 Luke Macken wrote:
 This report definitely conveys the shortcomings in our testing, however,
 it does show us improving with each release. For Fedora 13, we implemented
 the No Frozen Rawhide process with improved Critical Path policies, which
 were definitely a success. With these enhanced procedures, along with the
 upcoming implementation of AutoQA and the new Package update acceptance
 criteria
 (https://fedoraproject.org/wiki/Package_update_acceptance_criteria), I
 think we'll see these numbers drastically improve in the future.

 Only because those numbers are tailored towards that very process (they
 measure the exact same things that process is going to enforce) and do not
 reflect the actual quality of the packages in any way.

 You can make really anything a success by measuring the very symptoms of
 the process and calling them a metric of quality.

 The reasons for which Bodhi karma (especially in its current incarnation) is
 a completely broken indicator of quality have been pointed out in several
 past threads.


 I'll have to agree with Kevin. I can't see how any of those numbers
 represent the quality of anything.

Well the only person I see mentioning quality is Kevin. And for some
reason he is expecting it immediately when Luke said 3 things:

1) This was what he could initially see.
2) He thinks he sees improvement of what was measured, but he is
interested in what would be better indicators.





-- 
Stephen J Smoogen.
“The core skill of innovators is error recovery, not failure avoidance.”
Randy Nelson, President of Pixar University.
We have a strategic plan. It's called doing things.
— Herb Kelleher, founder Southwest Airlines


Re: bodhi statistics

2010-06-08 Thread Rahul Sundaram
On 06/09/2010 10:48 AM, Kevin Kofler wrote:
 Stephen John Smoogen wrote:
   
 Well the only person I see mentioning quality is Kevin. And for some
 reason he is expecting it immediately
 
 You can't claim that there are shortcomings in our testing (the exact 
 words Luke used!) without a metric of quality. A shortcoming in testing 
 means less testing was done than needed. If the packages have good quality, 
 that means more testing is NOT needed, no matter what the actual amount of 
 testing was.
   

Packages without any testing are of unknown quality.  If someone doesn't
test them before pushing, users are unknowingly playing that role. 

Rahul