I took Gokhan’s PR, merged master into it, and compiled with

mvn clean install package -Dhadoop.version=1.2.1

I get the same build error as the nightly.

Changing back to master, it builds fine. The default Hadoop version in master is
1.2.1, so I don’t need a profile or CLI options to build for my environment.

This seems like more than cosmic rays, as Dmitriy guessed.

On Oct 30, 2014, at 12:41 PM, Dmitriy Lyubimov <[email protected]> wrote:

More likely a Spark thing.

The error is while using torrent broadcast. AFAIK that was not the default
choice until recently.
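[Editor's note: a possible way to test Dmitriy's theory is to switch the broadcast implementation back to the pre-torrent HTTP factory via Spark configuration. This is an untested sketch for Spark 1.x; the property names are the standard Spark 1.x ones, but whether this works around the failure is an assumption, not something confirmed in this thread.]

```properties
# spark-defaults.conf — sketch, not verified against this build failure.
# Revert from TorrentBroadcastFactory (the Spark 1.1 default) to the
# HTTP-based broadcast that was the default in earlier releases:
spark.broadcast.factory  org.apache.spark.broadcast.HttpBroadcastFactory

# Separately, to rule out native snappy in the compression path, swap
# the I/O codec (full class name, since short aliases came later):
spark.io.compression.codec  org.apache.spark.io.LZFCompressionCodec
```

If the nightly passes with either setting changed, that would point at the broadcast or codec layer rather than the Mahout code itself.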

On Thu, Oct 30, 2014 at 10:27 AM, Suneel Marthi <[email protected]> wrote:

> The nightly builds often fail due to running on an old machine, and the failure
> is also a function of the number of concurrent jobs that are running.  If you
> look at the logs from the failure, it most likely failed due to
> a JVM crash (or something similar).  It's the daily builds that we need to
> ensure are not failing.
> 
> 
> On Thu, Oct 30, 2014 at 1:21 PM, Andrew Palumbo <[email protected]>
> wrote:
> 
>> I just built and tested with no problems.  Probably just Jenkins acting
>> up.
>> 
>>> Subject: Re: Jenkins build became unstable:  mahout-nightly » Mahout
>> Spark bindings #1728
>>> From: [email protected]
>>> Date: Thu, 30 Oct 2014 09:26:45 -0700
>>> To: [email protected]
>>> 
>>> At first blush this looks unrelated to the stuff I pushed to move to
>>> Spark 1.1.0.
>>> 
>>> The error is in snappy parsing during some R-like ops.
>>> 
>>> I don’t use native snappy myself; is anyone else seeing this, or is it
>>> just cosmic rays?
>>> 
>>> 
>>> On Oct 29, 2014, at 4:43 PM, Apache Jenkins Server <
>> [email protected]> wrote:
>>> 
>>> See <https://builds.apache.org/job/mahout-nightly/org.apache.mahout$mahout-spark_2.10/1728/>
