A couple of times in the last couple of weeks, the master running our UI
has crashed/exited/ceased to run. This most recent time it was a
MemoryError, backed up by the kernel log reporting an OOM kill. The
current run shows it using 43.1G of memory. Seems like a lot, doesn't
it? The other two larger masters (the ones actually running builds) are
about 2G, and the remaining, smallest one is about 1.7G.
What does the UI hold onto that makes it so large?
I'm supposed to get some heap profiling thingy wedged into buildbot so I
can try to figure it out. My current plan is to introduce a new build
step (like our current custom ones) that triggers the profiling process,
create a sort of dummy build using that step, and create a force
scheduler to trigger that build. That'll keep it on our UI master,
anyway. Is there a better way to trigger such code in the master at
runtime?
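For what it's worth, the profiling code that custom step would invoke could start out as simple as a stdlib tracemalloc dump. Here's a rough sketch of just that piece, with the Buildbot step/scheduler wiring omitted; `dump_top_allocations` is a name I made up, not anything from Buildbot:

```python
import tracemalloc

# Start tracing early (e.g. near the top of master.cfg) so later
# snapshots actually have data; 25 is the traceback depth to keep.
tracemalloc.start(25)

def dump_top_allocations(limit=20):
    """Return the `limit` biggest allocation sites as text lines.

    A custom build step (or any other hook in the master) could call
    this and write the result to its log.
    """
    snapshot = tracemalloc.take_snapshot()
    stats = snapshot.statistics("lineno")
    lines = []
    for stat in stats[:limit]:
        frame = stat.traceback[0]
        lines.append(f"{stat.size / 1024:.1f} KiB in {stat.count} blocks: "
                     f"{frame.filename}:{frame.lineno}")
    return lines

if __name__ == "__main__":
    waste = ["x" * 1000 for _ in range(1000)]  # allocate something visible
    for line in dump_top_allocations(5):
        print(line)
```

Since tracemalloc only sees allocations made after `tracemalloc.start()`, it has to be enabled when the master starts, not when the dummy build runs.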
I'm also starting to get complaints (and I've noticed this myself) that
sometimes queued builds take a long time to start after the previous
build finishes, sometimes on the order of a half-hour or more. Or at
least it appears so; I'm going on what the UI tells me. I haven't tried
delving into the logs of the master controlling the builds to match up
times. Any ideas? I'm afraid I'm not up on exactly how and when builds
are supposed to get started.
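When I do get around to the logs, my first pass will probably just be grepping the master's twistd.log for build start/finish lines and eyeballing the timestamp gaps. Something like the below, where the sample log and the "starting build"/"build finished" message text are made up for illustration; the real log lines will differ, so the pattern would need adjusting:

```shell
# Fake sample log, just to show the shape of the pipeline.
cat > /tmp/sample-twistd.log <<'EOF'
2016-03-01 10:00:00-0500 [-] build finished on builder linux
2016-03-01 10:31:12-0500 [-] starting build on builder linux
EOF

# Pull out just the timestamps of finish/start events so any
# half-hour gap between them is obvious at a glance.
grep -E 'starting build|build finished' /tmp/sample-twistd.log |
  awk '{print $1, $2}'
```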
And our database seems to be accumulating integrity errors again.