
On Mar 28, 2017, at 12:11 AM, Jörn Franke <jornfra...@gmail.com> wrote:

Usually you declare the Spark library dependencies as "provided". You also seem to mix different Spark versions, which should be avoided.
The Hadoop library seems to be outdated and should also only be "provided".

The other dependencies you can assemble into a fat jar.
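
For illustration, a minimal sketch of how the dependency section could look after the upgrade, assuming Scala 2.11, a single Spark version, and that Spark and Hadoop are supplied by the cluster at runtime (the artifact versions below are examples, not a tested configuration):

scalaVersion := "2.11.8"

val sparkVersion = "2.1.0"

libraryDependencies ++= Seq(
  // Spark (and its Hadoop client) are on the cluster at runtime,
  // so mark them "provided" and keep them out of the fat jar.
  "org.apache.spark" %% "spark-core"      % sparkVersion % "provided",
  "org.apache.spark" %% "spark-mllib"     % sparkVersion % "provided",
  "org.apache.spark" %% "spark-sql"       % sparkVersion % "provided",
  "org.apache.spark" %% "spark-streaming" % sparkVersion % "provided",
  // Application dependencies that do get bundled into the fat jar.
  "org.scalanlp"     %% "breeze" % "0.12",
  "com.github.scopt" %% "scopt"  % "3.3.0"
)

With "provided" scope the Spark classes are still available at compile time, but sbt-assembly leaves them out of the assembled jar, so the jar only ships the application's own dependencies.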

On 27 Mar 2017, at 21:25, Anahita Talebi <anahita.t.am...@gmail.com> wrote:

Hi friends,

I have a code written in Scala. It currently runs with Scala 2.10.4 and Spark 1.5.2.

I would like to upgrade the code to the latest Spark version, 2.1.0.

Here is the build.sbt:

import AssemblyKeys._

assemblySettings

name := "proxcocoa"

version := "0.1"

scalaVersion := "2.10.4"

parallelExecution in Test := false

{
  val excludeHadoop = ExclusionRule(organization = "org.apache.hadoop")
  libraryDependencies ++= Seq(
    "org.slf4j" % "slf4j-api" % "1.7.2",
    "org.slf4j" % "slf4j-log4j12" % "1.7.2",
    "org.scalatest" %% "scalatest" % "1.9.1" % "test",
    "org.apache.spark" % "spark-core_2.10" % "1.5.2" excludeAll(excludeHadoop),
    "org.apache.spark" % "spark-mllib_2.10" % "1.5.2" excludeAll(excludeHadoop),
    "org.apache.spark" % "spark-sql_2.10" % "1.5.2" excludeAll(excludeHadoop),
    "org.apache.commons" % "commons-compress" % "1.7",
    "commons-io" % "commons-io" % "2.4",
    "org.scalanlp" % "breeze_2.10" % "0.11.2",
    "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly(),
    "com.github.scopt" %% "scopt" % "3.3.0"
  )
}

{
  val defaultHadoopVersion = "1.0.4"
  val hadoopVersion =
    scala.util.Properties.envOrElse("SPARK_HADOOP_VERSION", defaultHadoopVersion)
  libraryDependencies += "org.apache.hadoop" % "hadoop-client" % hadoopVersion
}

libraryDependencies += "org.apache.spark" % "spark-streaming_2.10" % "1.5.0"

resolvers ++= Seq(
  "Local Maven Repository" at Path.userHome.asFile.toURI.toURL + ".m2/repository",
  "Typesafe" at "http://repo.typesafe.com/typesafe/releases",
  "Spray" at "http://repo.spray.cc"
)

mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
  {
    case PathList("javax", "servlet", xs @ _*)           => MergeStrategy.first
    case PathList(ps @ _*) if ps.last endsWith ".html"   => MergeStrategy.first
    case "application.conf"                              => MergeStrategy.concat
    case "reference.conf"                                => MergeStrategy.concat
    case "log4j.properties"                              => MergeStrategy.discard
    case m if m.toLowerCase.endsWith("manifest.mf")      => MergeStrategy.discard
    case m if m.toLowerCase.matches("meta-inf.*\\.sf$")  => MergeStrategy.discard
    case _ => MergeStrategy.first
  }
}

test in assembly := {}

-----------------------------------------------------------
I downloaded Spark 2.1.0 and changed the Spark version and scalaVersion in the build.sbt, but unfortunately I failed to run the code.

Does anybody know how I can upgrade the code to the most recent Spark version by changing the build.sbt file?

Or do you have any other suggestion?

Thanks a lot,
Anahita

