Hi Robert,
I’m not sure about sbt; we’re currently using Maven to build. We do create a
single jar though, via the Maven shade plugin. Our project has three
components, and we routinely distribute the jar for our project’s CLI out
across a cluster. If you’re interested, here are our project’s m
On 6/29/14, Frank Austin Nothaft wrote:
> Robert,
>
> You can build a Spark application using Maven for Hadoop 2 by adding a
> dependency on the Hadoop 2.* hadoop-client package. If you define any
> Hadoop Input/Output formats, you may also need to depend on the
> hadoop-mapreduce package.
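A minimal shade-plugin configuration along the lines described above might look like this (the plugin version and main class are illustrative placeholders, not taken from this thread):

```xml
<!-- Hypothetical pom.xml fragment: bundle the project and its dependencies
     into a single runnable jar with the Maven shade plugin.
     The version and mainClass below are illustrative, not from the thread. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Set the Main-Class entry so the shaded jar is directly runnable -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.example.cli.Main</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a configuration like this, `mvn package` produces one self-contained jar that can be copied out across the cluster.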
Thanks!
Hi Robert,
I am using the following Maven command to build Spark 1.0 for Hadoop 2 +
HBase 0.96.2:
mvn -Dhadoop.version=2.3.0 -Dprotobuf.version=2.5.0 -DskipTests clean package
Regards,
siyuan
On Sun, Jun 29, 2014 at 3:20 PM, Robert James wrote:
> Although Spark's home page offers binaries f
Robert,
You can build a Spark application using Maven for Hadoop 2 by adding a
dependency on the Hadoop 2.* hadoop-client package. If you define any
Hadoop Input/Output formats, you may also need to depend on the
hadoop-mapreduce package.
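In pom.xml terms, something like the following (version numbers are illustrative; pick the release that matches your cluster, and note the exact mapreduce artifact name may differ from the shorthand above):

```xml
<!-- Hypothetical pom.xml fragment: depend on the Hadoop 2.x client libraries.
     The 2.3.0 version here is illustrative; match it to your cluster. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.3.0</version>
</dependency>
<!-- Only needed if you define custom Hadoop Input/Output formats. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>2.3.0</version>
</dependency>
```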
Regards,
Frank Austin Nothaft
fnoth...@berkeley.edu
Although Spark's home page offers binaries for Spark 1.0.0 with Hadoop
2, the Maven repository only seems to have one version, which uses
Hadoop 1.
Is it possible to depend on a Hadoop 2 build via Maven? What is the artifact id?
If not: How can I use the prebuilt binaries to use Hadoop 2? Do I just
copy the lib/