Yeah, consistently failing nightly builds just burn resources. Until
someone starts working on a fix, we shouldn't keep submitting the
job. Too bad; I thought build times and resource usage were
becoming more manageable on branch-2.
If anyone has cycles to work on this, the job is here
> On May 15, 2018, at 10:16 AM, Chris Douglas wrote:
>
> They've been failing for a long time. It can't install bats, and
> that's fatal? -C
The bats error is new and causes the build to fail enough that it
produces the email output. For the past few months, it
On Tue, May 15, 2018 at 9:43 AM, Allen Wittenauer wrote:
>
>
> FYI:
>
> I’m going to disable the branch-2 nightly jobs.
>
Allen, can we bump up the maven surefire heap size to max (if it is not
already) for the branch-2 nightly build and see if it helps?
Thanks,
Subru
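For reference, the heap of the JVM that Surefire forks for tests is controlled by the plugin's `argLine`; a minimal sketch of the pom fragment (the `-Xmx4g` value is illustrative, not branch-2's actual setting):

```xml
<!-- Sketch only: raise the heap of the forked test JVM via Surefire's
     argLine. The -Xmx value here is illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>-Xmx4g</argLine>
  </configuration>
</plugin>
```

Since `argLine` is exposed as a user property, it can usually also be overridden per run with `mvn test -DargLine=-Xmx4g`, without editing the pom.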
On Tue, Oct 24, 2017 at 4:22 PM, Allen Wittenauer wrote:
>
> On Oct 24, 2017, at 4:10 PM, Andrew Wang wrote:
>
> FWIW we've been running branch-3.0 unit tests successfully internally, though
> we have separate jobs for Common, HDFS, YARN, and MR. The failures here are
> probably a property of running everything in the same
ke a solid bug report to me.
> >>
> >> Thanks,
> >>
> >> Junping
> >>
> >> ________________
> >> From: Sean Busbey <bus...@cloudera.com>
> >> Sent: Tuesday, October 24, 2017 2:20 PM
> >> To: Junping Du
> >> Cc: Allen Wittenauer; Hadoop Common; Hdfs-dev;
> >> mapreduce-dev@hadoop.apache.org; yarn-...@hadoop.apache.org
> >> Subject: Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
Just curious, Junping: what would "solid evidence" look like? Is the supposition
here that the memory leak is within HDFS test code rather
> On Oct 23, 2017, at 12:50 PM, Allen Wittenauer <a...@effectivemachines.com>
> wrote:
>
> With no other information or access to go on, my current hunch is that one of
> the HDFS unit tests is ballooning in memory size. The easiest way to kill a
> Linux machine is to eat all of the RAM, thanks to overcommit, and that’s what
> this “feels” like.
Someone should verify if 2.8.2 has the
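Allen's overcommit hunch can be sanity-checked on a build host by reading the kernel's memory-accounting knobs; a quick sketch (Linux-specific paths):

```shell
# Kernel overcommit policy: 0 = heuristic (default), 1 = always, 2 = never.
# Under the heuristic mode a runaway test JVM can drive the box straight to
# the OOM killer, which matches the "never comes back" symptom.
cat /proc/sys/vm/overcommit_memory
# How much memory is currently committed vs. the commit limit.
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```

If `Committed_AS` climbs toward (or past) `CommitLimit` during the HDFS portion of the run, that would support the ballooning-unit-test theory.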
Hi Allen,
I had set up the build (or intended to) in anticipation of the 2.9 release.
Thanks for fixing the configuration!
We did face HDFS tests timeouts in branch-2 when run together but
individually the tests pass:
https://issues.apache.org/jira/browse/HDFS-12620
Folks in HDFS, can you please take
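For what it's worth, the pass-individually/fail-together pattern in HDFS-12620 is the classic signature of state or resources shared across tests in one process; a toy, non-Hadoop shell sketch of the effect:

```shell
# Toy illustration (not Hadoop code): each "test" consumes one slot from a
# shared per-process budget. Any single test passes on a fresh budget, but
# the third test in a shared run exhausts it and fails.
run_test() { slots=$((slots - 1)); [ "$slots" -ge 0 ]; }

slots=2; run_test && echo "one test alone: pass"
slots=2; run_test; run_test
run_test || echo "third test in shared run: fail"
```

Swap "slots" for heap, file descriptors, ports, or leaked MiniDFSCluster instances and you get the same outcome: order-dependent failures that vanish when a test runs in isolation.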
Hi Allen,
I have filed https://issues.apache.org/jira/browse/YARN-7380 for the
timeline service findbugs warnings.
thanks
Vrushali
On Mon, Oct 23, 2017 at 11:14 AM, Allen Wittenauer wrote:
I’m really confused why this causes the Yahoo! QA boxes to go catatonic (!?!)
during the run. As in, never come back online, probably in a kernel panic.
It’s pretty consistently in hadoop-hdfs, so something is going wrong there… is
branch-2 hdfs behaving badly? Someone needs to run the
To whoever set this up:
There was a job config problem where the Jenkins branch parameter wasn’t passed
to Yetus. Therefore both of these reports have been against trunk. I’ve fixed
this job (as well as the other jobs) to honor that parameter. I’ve kicked off
a new run with these changes.