> If that's all, jumping straight to "throw away all of buildbot" seems like
> an overreaction.
I am trying to be pragmatic.
I think that the whole code review / PR contribution process to Twisted,
for external contributors (but also for team members), is a bad
experience and is not helping the development of Twisted.
So we need a "shock" in order to put things back together :)
We all have little time, and I am trying to simplify the process and
let people know that we want them to contribute to Twisted.
> Ideally all the supported buildbots should be passing. It's a real shame
> that the offline buildbots do not report any status,
Buildbot should send an email to buildbot at twistedmatrix.com when a
slave goes down... I am not sure who reads those emails.
> The "required" marker makes it impossible to merge changes without a passing
In such a case, maybe the "administrators" can be pinged to do a final review.
> But, I don't have the time to do much more than write this email, so if we
> have no other volunteers for maintenance, I will support your decision to
> tear down the buildbots for now.
I tried to maintain the buildbots and keep a Vagrant master and slave
to allow development... but I failed to coordinate with the rest of the
team, and I failed to have a testing environment that always works.
Many times, changes were made in production that went out of sync with
the Vagrant VM.
I know that we are all busy, but it is not OK to just destroy some
slaves without updating the buildbot configuration... we end up with huge
I am a fan of Buildbot and I am using Buildbot for my project.
With 30+ builders, running them in stages was the only way to handle
spurious failures.
Now Travis-CI also has stages... and Circle-CI also has stages.
I can try to create a brand new buildmaster, rebuilt with the latest
Buildbot and a simplified configuration... just calling a set of
scripts at each step, and making sure we can replicate it in Vagrant,
or have a separate staging server for doing dev work.
A simplified master would just call scripts that live in the branch, so
that the Buildbot configuration stays minimal and the branch and the
master always match.
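A minimal sketch of what such a master configuration could look like,
assuming the per-step scripts live in the branch under a hypothetical
.buildbot/ directory (the buildbot.plugins names are Buildbot's current
API; the repository layout and script names are made up):

```python
# master.cfg fragment: the master only checks out the branch and runs
# scripts from it, so almost all build logic lives in the repository.
from buildbot.plugins import steps, util

factory = util.BuildFactory()
factory.addStep(steps.Git(
    repourl="https://github.com/twisted/twisted",
    mode="incremental",
))
# Hypothetical script names; each step is just "run this script from
# the branch", so changing a build never requires touching the master.
for name in ["lint", "test", "apidocs"]:
    factory.addStep(steps.ShellCommand(
        name=name,
        command=["./.buildbot/%s.sh" % name],
        haltOnFailure=True,
    ))
```

With this shape, the Vagrant replica only has to reproduce the master
process itself, not the build logic.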
Also, the new buildmaster can look for ways to allow non-team members
to run tests... maybe use the GitHub API to discover whether they have
previous commits in trunk.
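As a sketch of that GitHub API idea (the helper name and the injectable
`fetch` parameter are my own; the `/repos/{owner}/{repo}/commits?author=`
endpoint is GitHub's real REST API):

```python
import json
import urllib.request


def has_trunk_commits(login, fetch=None):
    """Return True if `login` has at least one commit in twisted/twisted.

    `fetch` is injectable for testing; by default it queries the
    GitHub API, which lists commits on the default branch.
    """
    url = (
        "https://api.github.com/repos/twisted/twisted/commits"
        "?author=%s&per_page=1" % login
    )
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url) as response:
                return json.load(response)
    # The endpoint returns a list of commits; any hit means the user
    # has previously landed code in the default branch.
    return len(fetch(url)) > 0
```

Something like this could gate "run the builders for this PR" on the
author already being a past contributor.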
The build history will not be available, but we can keep the old
master in read-only mode.
I would want to do whatever it takes to avoid merging with red tests.
They are contagious and can lead to accidentally ignoring an error
that is related to the changes but might not look like one.
See https://github.com/twisted/twisted/pull/946, where there was a
failure that looked unrelated, but I think might be related.
One option is to break the tests into smaller builds and be able to
retry only the failed ones.
For example, instead of running select + poll + epoll in a single run,
break them into 3 separate builds.
On Buildbot you can restart a builder. On Travis-CI you can restart a
single job... and maybe we can lobby AppVeyor to allow restarting a
single job (and not the whole PR build).
We can try Circle-CI.
They don't offer any free plans, even for open source.
When I did the initial work for Twisted with Travis and AppVeyor, I
contacted Circle-CI to see if we could get a discount.
They offered the OSX Seed plan, which comes with 500 free minutes/month.
I stopped because people on IRC told me that Circle-CI is not better
than Travis.
I am happy to try again with Circle-CI.
We might go over 500 minutes, so I suggest running the tests in stages:
run the twistedchecker/pyflakes/newsfragment/Ubuntu tests first, and
only when they all pass, trigger the Windows and OSX tests.
I am also running my tests in stages... For example, Debian/RHEL/SUSE
pass 99.99% of the time when Ubuntu passes... so those tests are
executed only in a later stage.
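A hedged sketch of the staged setup in Travis-CI terms (the `stages`
and `jobs.include` keys are real Travis-CI configuration; the stage
names, tox environments, and reactor split are my own invention):

```yaml
# Hypothetical .travis.yml fragment: cheap checks run first, and the
# expensive platform builds start only if every "checks" job passed.
stages:
  - checks        # twistedchecker / pyflakes / newsfragment / Ubuntu
  - platforms     # Windows / OSX, and one job per reactor

jobs:
  include:
    - stage: checks
      script: tox -e twistedchecker
    - stage: checks
      script: tox -e pyflakes
    - stage: platforms
      os: osx
      env: REACTOR=select
      script: trial --reactor=select twisted
    - stage: platforms
      os: osx
      env: REACTOR=poll
      script: trial --reactor=poll twisted
```

Splitting the reactors into separate jobs also means a spurious failure
in one reactor's build can be retried on its own.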
I don't have much time to contribute to Twisted infrastructure, and I
would like to spend the available time doing reviews and helping
people contribute to Twisted.
If there is a better plan, I am happy to go with that.
Thanks for your time :)
Twisted-Python mailing list