Another instance of #1 for the JS builds:
https://travis-ci.org/apache/arrow/jobs/498967250#L992

I filed https://issues.apache.org/jira/browse/ARROW-4695 about it before
seeing this thread. As noted there, I was able to replicate the timeout on
my laptop at least once. I didn't think to monitor memory usage to see if
that was the cause.
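
For what it's worth, here is a rough sketch (not from the JIRA) of how one
could watch peak memory while replicating the timeout locally. It assumes the
third-party psutil package; the script name and wrapper are hypothetical:

    # monitor_rss.py -- hypothetical helper, not part of the Arrow repo.
    # Runs a command and samples the RSS of its process tree so a timeout
    # can be correlated with memory pressure. Requires `pip install psutil`.
    import subprocess
    import sys
    import time

    import psutil


    def run_and_sample(cmd, interval=0.5):
        proc = subprocess.Popen(cmd)
        tree = psutil.Process(proc.pid)
        peak = 0
        while proc.poll() is None:
            try:
                rss = tree.memory_info().rss
                rss += sum(c.memory_info().rss
                           for c in tree.children(recursive=True))
            except psutil.NoSuchProcess:
                break
            peak = max(peak, rss)
            time.sleep(interval)
        return proc.returncode, peak


    if __name__ == "__main__":
        # e.g. python monitor_rss.py npm test
        rc, peak = run_and_sample(sys.argv[1:])
        print("exit code %s, peak RSS ~%.1f MiB" % (rc, peak / 2.0 ** 20))
        sys.exit(rc or 0)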

On Wed, Feb 27, 2019 at 6:52 AM Francois Saint-Jacques <
fsaintjacq...@gmail.com> wrote:

> I think we're witnessing multiple issues.
>
> 1. Travis seems to be slow (is it an OOM issue?)
>   - https://travis-ci.org/apache/arrow/jobs/499122041#L1019
>   - https://travis-ci.org/apache/arrow/jobs/498906118#L3694
>   - https://travis-ci.org/apache/arrow/jobs/499146261#L2316
> 2. https://issues.apache.org/jira/browse/ARROW-4694 detect-changes.py is
> confused
> 3. https://issues.apache.org/jira/browse/ARROW-4684 is failing one python
> test consistently
>
> #2 doesn't help with #1; it could be related to PRs based on "old"
> commits and the velocity of our project. I've suggested that we disable
> the failing test in #3 until it's resolved, since it affects all C++ PRs.
>
> On Tue, Feb 26, 2019 at 5:01 PM Wes McKinney <wesmck...@gmail.com> wrote:
>
> > Here's a build that just ran
> >
> > https://travis-ci.org/apache/arrow/builds/498906102?utm_source=github_status&utm_medium=notification
> >
> > 2 failed builds
> >
> > * ARROW-4684
> > * Seemingly a GLib Plasma OOM
> > https://travis-ci.org/apache/arrow/jobs/498906118#L3689
> >
> > 24 hours ago:
> >
> > https://travis-ci.org/apache/arrow/builds/498501983?utm_source=github_status&utm_medium=notification
> >
> > * The same GLib Plasma OOM
> > * Rust try_from bug that was just fixed
> >
> > It looks like that GLib test has been failing more than it's been
> > succeeding (also failed in the last build on Feb 22).
> >
> > I think it might be worth setting up some more "annoying"
> > notifications when failing builds persist for a long time.
> >
> > On Tue, Feb 26, 2019 at 3:37 PM Michael Sarahan <msara...@gmail.com>
> > wrote:
> > >
> > > Yes, please let us know.  We definitely see 500s from anaconda.org,
> > > though I'd expect fewer of them from CDN-enabled channels.
> > >
> > > On Tue, Feb 26, 2019 at 3:18 PM Uwe L. Korn <m...@uwekorn.com> wrote:
> > >
> > > > Hello Wes,
> > > >
> > > > If there are 500 errors, it might be useful to report them somehow
> > > > to Anaconda. They recently migrated conda-forge to a CDN-enabled
> > > > account and this could be one of the results of that. They probably
> > > > still need to iron out some things.
> > > >
> > > > Uwe
> > > >
> > > > On Tue, Feb 26, 2019, at 8:40 PM, Wes McKinney wrote:
> > > > > hi folks,
> > > > >
> > > > > We haven't had a green build on master for about 5 days now (the
> > > > > last one was February 21). Has anyone else been paying attention
> > > > > to this? It seems we should start cataloging which tests and build
> > > > > environments are the most flaky and see if there's anything we can
> > > > > do to reduce the flakiness. Since we are dependent on anaconda.org
> > > > > for build toolchain packages, it's hard to control for the 500
> > > > > timeouts that occur there, but I'm seeing other kinds of routine
> > > > > flakiness.
> > > > >
> > > > > - Wes
> > > > >
> > > >
> >
>
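
On the "annoying notifications" idea Wes mentions above, a minimal sketch of
what that could look like: poll the Travis CI v3 API for recent master builds
and complain when none of them passed. The endpoint shape, header, and
unauthenticated read access are assumptions from the public API docs, and the
actual alerting hook is left out:

    # check_master.py -- hypothetical sketch, not an existing Arrow script.
    import json
    import sys
    import urllib.request

    # Assumed Travis CI v3 endpoint for apache/arrow (slug URL-encoded).
    API = ("https://api.travis-ci.org/repo/apache%2Farrow/builds"
           "?branch.name=master&limit=10")


    def recent_master_states():
        req = urllib.request.Request(API, headers={"Travis-API-Version": "3"})
        with urllib.request.urlopen(req) as resp:
            payload = json.load(resp)
        return [b.get("state") for b in payload.get("builds", [])]


    if __name__ == "__main__":
        states = recent_master_states()
        if "passed" not in states:
            # Wire this up to email/Slack/etc. instead of only exiting non-zero.
            print("No green master build in the last %d builds: %s"
                  % (len(states), states))
            sys.exit(1)
        print("master has a recent green build: %s" % states)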
