I have not had any success building with sbt/sbt on Windows.
However, I have been able to build the binary by using the Maven command directly.
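For reference, a build along the lines Ted describes would look something like the following (a sketch based on Spark's build documentation; the exact profiles and Hadoop version flags depend on your target environment and are not specified in this thread). The actual mvn invocation is left commented out since it requires Maven and a Spark source checkout:

```shell
# Hypothetical Maven build of Spark from the source root, skipping tests.
# -DskipTests avoids the test phase (including the JavaAPISuite hang
# mentioned later in this thread).
MVN_ARGS="-DskipTests clean package"

# mvn $MVN_ARGS    # uncomment when Maven and the Spark sources are available
echo "mvn $MVN_ARGS"
```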

From: Richard Eggert [mailto:richard.egg...@gmail.com]
Sent: Sunday, October 25, 2015 12:51 PM
To: Ted Yu <yuzhih...@gmail.com>
Cc: User <user@spark.apache.org>
Subject: Re: Error building Spark on Windows with sbt

Yes, I know, but it would be nice to be able to test things myself before I 
push commits.

On Sun, Oct 25, 2015 at 3:50 PM, Ted Yu <yuzhih...@gmail.com> wrote:
If you have a pull request, Jenkins can test your change for you.

FYI

On Oct 25, 2015, at 12:43 PM, Richard Eggert <richard.egg...@gmail.com> wrote:
Also, if I run the Maven build on Windows or Linux without setting 
-DskipTests=true, it hangs indefinitely when it gets to 
org.apache.spark.JavaAPISuite.

It's hard to test patches when the build doesn't work. :-/

On Sun, Oct 25, 2015 at 3:41 PM, Richard Eggert <richard.egg...@gmail.com> wrote:
By "it works", I mean "it gets past that particular error". It still fails
several minutes later with a different error:

java.lang.IllegalStateException: impossible to get artifacts when data has not 
been loaded. IvyNode = org.scala-lang#scala-library;2.10.3


On Sun, Oct 25, 2015 at 3:38 PM, Richard Eggert <richard.egg...@gmail.com> wrote:

When I try to start up sbt for the Spark build, or if I try to import it into
IntelliJ IDEA as an sbt project, it fails with a "No such file or directory"
error when it attempts to "git clone" sbt-pom-reader into
.sbt/0.13/staging/some-sha1-hash.

If I manually create the expected directory before running sbt or importing
into IntelliJ, then it works. Why is it necessary to do this, and what can be
done to make it not necessary?
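The workaround described above can be sketched as the following commands (an assumption: this uses sbt's default staging location under the user's home directory, matching the path in the error; the per-project sha1 subdirectory is created by sbt itself during the clone):

```shell
# Pre-create the sbt staging directory that the sbt-pom-reader
# "git clone" step expects to exist before running sbt or importing
# the project into IntelliJ IDEA.
STAGING="$HOME/.sbt/0.13/staging"
mkdir -p "$STAGING"
echo "created $STAGING"
```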

Rich



--
Rich


