On May 20, 2008, at 3:27 PM, Marcin Owsiany wrote:

>
> Hi all,
>
> After I managed to invoke the rspec test runner (see #1237), I
> realized with sadness that the tests are far from usable. I got:
>   3053 examples, 219 failures, 30 pending - when running as root
> and
>   3053 examples, 145 failures, 31 pending - when running as non-root
> This is on master branch.

I get zero failures on master or 0.24.x, although I haven't run either  
of them as root.

  I recently discovered that my ~/.puppet directory was sometimes
getting modified by tests, so about a week ago I chowned the whole
thing to root; this caught a few tests that I had to mock further,
but otherwise, no tests should fail for anyone, ever.
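The trick above can be demonstrated without touching a real config: make the directory unwritable, and any spec that tries to write there fails loudly instead of silently polluting your settings. A rough sketch, using a throwaway directory as a stand-in for the real ~/.puppet:

```shell
# Simulate chowning ~/.puppet to root by stripping write permission
# from a throwaway directory (a stand-in for the real ~/.puppet):
dir=$(mktemp -d)
touch "$dir/puppet.conf"
chmod -R a-w "$dir"

# A test that tries to modify the config now hits a permission
# error instead of silently changing it. (Note: root bypasses
# permission checks entirely -- one reason root and non-root runs
# can behave differently.)
if { echo oops > "$dir/puppet.conf"; } 2>/dev/null; then
  msg="write succeeded"
else
  msg="write blocked"
fi
echo "$msg"

# Restore permissions so the scratch directory can be cleaned up:
chmod -R u+w "$dir" && rm -rf "$dir"
```

Run as a normal user, the write is blocked, which is exactly how the stray test writes were caught.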

I definitely consider this a very big problem.

>
> Before you ask - no, the ones failing as non-root are not a subset of
> the ones failing as root.
>
> For me personally (and I think for anyone new to Puppet who would
> like to produce high-quality code) this is a huge barrier to Puppet
> development.

I completely agree, and I suspect the extent of the failures is a
clear sign of how few people, on how few platforms, are actually
running the tests.
>
> So I have several questions:
>
> - do we at all agree that there should be NO failing tests at any
>   point in time, and that any failing test is a bug? Even in master
>   branch? And as a result, no commit should introduce a test failure?

Well, James and I certainly agree, and we're the respective  
maintainers of the stable and dev branches.

Using 'autotest' can go a long way toward this, since it reruns the
affected specs as you edit, but you still have to remember to run the
whole test suite before committing, and you also need to run the
suite on other platforms, which I assume is the problem here.
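One way to make "run the whole suite before committing" automatic is a git pre-commit hook. A minimal sketch of the control flow -- `run_suite` here is a placeholder standing in for the real suite invocation (e.g. `rake spec`), wired to succeed so the flow can be followed end to end:

```shell
# Minimal sketch of a .git/hooks/pre-commit script that refuses
# the commit when the suite fails. run_suite is a placeholder for
# the real test command; replace `true` with e.g. `rake spec`.
run_suite() { true; }

if run_suite; then
  status="tests green, commit allowed"
else
  status="tests failed, commit aborted"
fi
echo "$status"

# In a real hook you would exit nonzero on failure, which is what
# actually blocks the commit:
# run_suite || exit 1
```

A hook like this only guards one machine, of course; the cross-platform gap still needs other people (or a CI service) running the suite.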

>
> - is there anyone at all for whom all tests work at the moment? Or
>   am I just very unlucky and it does work for everyone except for
>   me? :)

Looks like others are getting similar failures, but the tests all
pass for me.  I've never left a failing test in spec/ for more than
one commit (i.e., I've accidentally committed one, but fixed it asap).

I consider every failing test in either test/ or spec/ a bug,
although nine times out of ten, failing tests in test/ will need to
be rewritten in spec/ or removed rather than fixed.

>
> - how do we tackle this problem? I cannot go through 200 failing tests
>   alone and fix them in reasonable time. We need to split the work
>   somehow.

Paul is basically right -- the only way to do it is to approach them
one at a time.  I have to believe (else I'd go insane) that many of
your failing tests are related; ideally you'd pick a pattern of
failure and see if you could track down the root cause.  That should
let you clear whole swathes of failures at once, leaving the smaller
one-offs to tackle later.

>
> - how do we make sure that this problem does not reappear? Should we
>   set up some continuous integration environment and assume "project
>   culture" would convince developers to fix the problems? Or should we
>   go a step further and only ever release software tested and built by
>   the continuous integration service?

If we have a continuous integration service, then I would definitely
never release a product that had non-green tests on any supported
platform.  Is anyone in a position to set such a thing up and
maintain it?  I'm willing to cover the costs of the EC2 instances, as
long as they're only running during the actual test process (e.g.,
once a day for an hour or so, rather than 24 hours a day).

One of the benefits of using EC2 is that it would basically allow
anyone to upload an AMI of their favorite platform running Puppet,
and it should be straightforward to integrate that platform into the
continuous test service while also making it easy for people to give
Puppet a go on EC2.

One implication of requiring that all tests for supported platforms be  
green, of course, is that every supported platform needs at least one  
advocate willing to tackle platform-specific problems.  David  
Lutterkort is always on hand to tackle Red Hat problems, and Russ  
Allbery and many others seem to always have the answers for Debian;  
these platforms wouldn't be so stably supported by Puppet without this  
dedication.

If there's a platform you care about, run the Puppet tests on it and  
file bugs when they fail, at least until we get a continuous  
integration service.
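When filing such a bug, it helps to lead with the failure count from the runner's summary line. A toy sketch of pulling that number out of saved runner output -- the summary format here is copied from the counts quoted earlier in this thread:

```shell
# Extract the failure count from a saved rspec summary line so a
# bug report can lead with it (the line format matches the counts
# quoted at the top of the thread):
summary="3053 examples, 219 failures, 30 pending"
failures=$(printf '%s\n' "$summary" \
  | sed -n 's/.*, \([0-9][0-9]*\) failures.*/\1/p')
echo "$failures"
```

Piping the full run through `tee` into a file first (so the complete output can be attached to the ticket, not just the summary) makes the report far more useful to whoever picks it up.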

>
>   I know someone (Luke?) mentioned that it would be nice to have a
>   build farm which would run the tests on all supported platforms. But
>   I think that running them on a regular basis even on a single
>   platform would be much better than the current situation.

I run them essentially every day on my Mac, and often on my Debian
box.  I never knowingly commit a broken test, and I do my best never
to unknowingly commit one.

The only conclusion I can reach is that we need others running tests,  
or we need to make it more obvious that if tests fail, you should file  
a bug.

And, of course, we need people like Paul to help make the current  
tests better.

-- 
Never interrupt your enemy when he is making a mistake.
     --Napoleon Bonaparte
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com

