Hi,
On Tue, Sep 21, 2021 at 03:14:57 PM +0100, Tim Jacomb wrote:
> There was a clear, reliable failure change in 2.309, caused by a minor
> increase in required resources.
>
> But because the resources were so low on the CI (accidentally) it
> manifested as a problem there.
Just as a data point:
Sorry to have implied that any action was required of you; I should have
phrased this as more of a “heads-up, possible regression under
investigation here”.
--
You received this message because you are subscribed to the Google Groups
"Jenkins Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
On Tue, Sep 21, 2021 at 7:04 AM Jesse Glick wrote:
> That was my best guess based on running `git bisect`: with the parallel class
> loading, the docs generator failed; without it, the generator worked.
But this is just _data_; it doesn't mean anything unless we extract
the _insights_ out of it.
> Sounds like the instability in core builds themselves was unrelated, a
coincidence?
I think it was another symptom, compounded by us not having enough history
to see very well when it started.
There was a clear, reliable failure change in 2.309, caused by a minor
increase in required resources.
On Mon, Sep 20, 2021 at 4:24 PM Basil Crow wrote:
> I do not think it is appropriate to imply that a developer caused a
> regression […] simply because an operational failure occurred.
>
That was my best guess based on running `git bisect`: with the parallel
class loading, the docs generator failed; without it, the generator worked.
Given that this has been going on for weeks, including people looking at flaky
tests and a lot of re-running of builds, it was not clear at the time that this
was because the resources had actually changed.
We were looking for the root cause and thought you might have had an insight
into it; I would definitely
On Mon, Sep 20, 2021 at 12:57 PM Jesse Glick wrote:
>
> Any notion yet of why that would be?
Why do you ask? The maximum heap size seems to have been 1516 MiB in
e.g.
https://ci.jenkins.io/job/Infra/job/pipeline-steps-doc-generator/job/master/299/consoleFull
but had dropped to 954 MiB by e.g.
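As an aside, the same figure can be surfaced from inside a JVM via `Runtime.maxMemory()`; a minimal sketch (the numbers in this thread came from reading the ci.jenkins.io console logs, not from this; the class name is illustrative):

```java
// Minimal sketch: print the JVM's effective max heap ceiling in MiB.
// Runtime.maxMemory() reports the value set by -Xmx (or derived by
// container/ergonomics defaults), the same quantity behind the
// 1516 MiB vs 954 MiB figures above.
public class MaxHeap {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MiB");
    }
}
```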
Pending https://github.com/jenkins-infra/jenkins-infra/pull/1872 being
approved, I've manually changed the settings, and we finally have a passing
build for stable.
(in https://github.com/jenkinsci/jenkins/pull/5729)
On Mon, 20 Sept 2021 at 20:57, Jesse Glick wrote:
On Mon, Sep 20, 2021 at 3:37 PM Basil Crow wrote:
> I *do* see evidence that registering AntClassLoader (specifically) as
> parallel-capable has increased the heap size requirement
>
Any notion yet of why that would be? It should be loading the same set of
classes, just at slightly different times.
I see no evidence that jenkinsci/jenkins#5687 has introduced a leak,
so I do not think it should be reverted. I _do_ see evidence that
registering AntClassLoader (specifically) as parallel-capable has
increased the heap size requirement for pipeline-steps-doc-generator:
1280 MiB seems to be
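For context, registering a loader as parallel-capable looks roughly like this sketch (the class name is hypothetical; jenkinsci/jenkins#5687 does the equivalent for AntClassLoader). One plausible contributor to the footprint change is that the JDK maintains a per-class-name lock map for parallel-capable loaders instead of synchronizing on the loader itself:

```java
// Sketch of a parallel-capable ClassLoader. The JDK requires the
// registration call to run in the static initializer of every loader
// subclass in the delegation chain that should load in parallel.
public class ParallelLoader extends ClassLoader {
    static {
        // Opts this loader class into per-class-name locking; the JDK
        // then keeps a lock object per class name being loaded, which
        // can modestly increase heap usage versus a single loader lock.
        ClassLoader.registerAsParallelCapable();
    }

    public ParallelLoader(ClassLoader parent) {
        super(parent);
    }

    public static void main(String[] args) throws Exception {
        ParallelLoader loader =
                new ParallelLoader(ParallelLoader.class.getClassLoader());
        // Delegates to the parent; just demonstrates the loader works.
        Class<?> c = loader.loadClass("java.lang.String");
        System.out.println(c.getName());
    }
}
```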
Thanks for tracking down where the memory issue appears to be coming from
in
https://github.com/jenkins-infra/pipeline-steps-doc-generator/pull/94#issuecomment-923094344
I think the other issue is that the CPU count appears to have been accidentally
reverted to 2 cores =/
It was increased here:
So we have the JNA upgrade, XStream upgrade, and parallel class loading. I
will try to bisect the cause.
Hello,
We're seeing quite unstable core builds recently.
This manifests in a couple of ways: 'OutOfMemoryError: Java heap space' and
agents failing to connect, along with very slow tests (they hit the 3-hour
timeout).
This appears to have begun in 2.309.
We have a very stable master branch: