On Sun, Feb 19, 2012, at 05:03 PM, Jenkins wrote:
> See <http://ci.cyrusimap.org/job/cyrus-imapd-master/402/>
> 
> ------------------------------------------
> [...truncated 1246 lines...]
> for file in ./imtest.1 ./pop3test.1 ./nntptest.1 ./lmtptest.1
> ./smtptest.1 ./sivtest.1 ./mupdatetest.1 ./installsieve.1 ./sieveshell.1;
> \

There are at least four separate problems happening here, all at once :(

1) the Master.service_ipv6 test is failing because the older netstat program
   on ci.cyrusimap.org reports connected TCP/IPv6 sockets slightly differently
   than Cassandane was expecting.  I fixed that here

http://git.cyrusimap.org/cassandane/commit/?id=a76ec6226f0bad1bd81238a8b80360d58c894080
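
   For illustration, the kind of tolerance needed looks roughly like the
   sketch below (Python, not the actual Perl in Cassandane, and the exact
   netstat output formats shown are assumptions):

    import re

    # Accept both "tcp" and "tcp6" as the protocol token for IPv6 sockets,
    # and split the address on the *last* colon, since IPv6 addresses
    # contain colons themselves.
    LINE = re.compile(r'^(tcp6?|udp6?)\s+\d+\s+\d+\s+(\S+)\s+(\S+)\s+(\S+)')

    def parse_netstat_line(line):
        m = LINE.match(line)
        if not m:
            return None
        proto, local, remote, state = m.groups()
        host, _, port = local.rpartition(':')
        return {'proto': proto, 'host': host, 'port': port, 'state': state}

    # Both of these should be recognised as the same listening IPv6 socket:
    print(parse_netstat_line('tcp6  0  0 :::9143  :::*  LISTEN'))
    print(parse_netstat_line('tcp   0  0 :::9143  :::*  LISTEN'))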

2) The Master.maxforkrate test was accidentally triggering an old and
   obscure race condition bug in the master process.  I changed the test
   so that it no longer triggers the bug

http://git.cyrusimap.org/cassandane/commit/?id=10999db507143bdc668d49f59349f23ba61fc8e1

  and added a new test just for that bug

http://git.cyrusimap.org/cassandane/commit/?id=ad9bf14faddef02f070f5b82fb8f408e37826b1c

  and fixed the bug

http://git.cyrusimap.org/cyrus-imapd/commit/?id=d288a9ad66636ab406e537b1fb57f05f016e1f38
  

3) The Master.maxforkrate test has detected that the maxforkrate parameter is
   not being enforced correctly, which is strange because that code worked fine
   in that test when it was last modified.  I suspect some kind of environmental
   problem is breaking the fork rate calculation; I will investigate further.
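
   For context, the mechanism being tested is conceptually something like
   the sketch below (illustrative Python, not the actual master.c; the
   smoothing scheme and names are assumptions):

    import time

    # A decayed forks-per-second estimate capped at maxforkrate; when the
    # estimate is over the limit, further forks are deferred.
    class ForkRateLimiter:
        def __init__(self, maxforkrate):
            self.maxforkrate = maxforkrate
            self.rate = 0.0
            self.last = time.time()

        def _decay(self):
            now = time.time()
            elapsed = now - self.last
            if elapsed > 0:
                # decay the estimate towards zero as time passes
                self.rate *= 0.5 ** elapsed
                self.last = now

        def may_fork(self):
            self._decay()
            return self.rate < self.maxforkrate

        def record_fork(self):
            self._decay()
            self.rate += 1.0

   Anything that skews the elapsed-time arithmetic on the build host, such
   as a badly behaved clock, would be enough to throw a calculation like
   that off, which is the sort of environmental problem I have in mind.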

4) Cassandane sometimes leaves master and lemming processes lying around.  I
   haven't been able to reproduce that problem, although I have "solved" it
   several times before.  Those leaked processes are never cleaned up and hog
   the TCP ports that Cassandane expects to be able to use, causing subsequent
   Cassandane runs to fail spuriously.  I'm not entirely sure of the best way
   to address this, but I'm thinking of something like a sledgehammer which
   kills all processes running as the "cyrus" userid.
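
   The sledgehammer could be as simple as the following (an illustrative
   Python sketch, not what Cassandane currently does; it obviously assumes
   nothing else on the box runs as "cyrus"):

    import os
    import pwd
    import signal

    # Kill every process owned by the given user by scanning /proc, so
    # that leaked master/lemming processes can't hold onto TCP ports.
    def kill_all_owned_by(username, sig=signal.SIGKILL):
        uid = pwd.getpwnam(username).pw_uid
        for entry in os.listdir('/proc'):
            if not entry.isdigit():
                continue
            try:
                if os.stat('/proc/' + entry).st_uid == uid:
                    os.kill(int(entry), sig)
            except OSError:
                pass  # process exited between listdir() and here

    kill_all_owned_by('cyrus')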

So sadly it will be another day or so before the build gets back to stable.

-- 
Greg.
