I am running unit tests on Spark 1.3.1 with sbt test, and besides the unit
tests being incredibly slow, I keep running into

java.lang.ClassNotFoundException: org.apache.spark.storage.RDDBlockId

Usually this means a dependency issue, but I can't tell where it would be
coming from.
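
In case it helps narrow things down: RDDBlockId is the identifier Spark's
block manager uses for cached RDD partitions, so the error shows up in tests
that cache an RDD. A stripped-down sketch of the failing test shape (not our
actual test; the names are made up):

import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.FunSuite

// Hypothetical minimal test: caching forces partitions through the block
// manager, which is where RDDBlockId comes into play.
class CacheSmokeSpec extends FunSuite {
  test("count a cached RDD") {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[8]").setAppName("cache smoke test"))
    try {
      val rdd = sc.parallelize(1 to 1000).cache()
      assert(rdd.count() == 1000L)
    } finally {
      sc.stop()
    }
  }
}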

Any help is greatly appreciated.

My build.sbt:

libraryDependencies ++= Seq(
  "org.scalaz"             %% "scalaz-core"      % "7.1.2"
    excludeAll ExclusionRule(organization = "org.slf4j"),
  "com.typesafe.play"      %% "play-json"        % "2.3.4"
    excludeAll ExclusionRule(organization = "org.slf4j"),
  "org.apache.spark"       %% "spark-core"       % "1.3.1" % "provided" withSources()
    excludeAll (ExclusionRule(organization = "org.slf4j"),
                ExclusionRule("org.spark-project.akka", "akka-actor_2.10")),
  "org.apache.spark"       %% "spark-graphx"     % "1.3.1" % "provided" withSources()
    excludeAll (ExclusionRule(organization = "org.slf4j"),
                ExclusionRule("org.spark-project.akka", "akka-actor_2.10")),
  "org.apache.cassandra"    % "cassandra-all"    % "2.1.6",
  "org.apache.cassandra"    % "cassandra-thrift" % "2.1.6",
  "com.typesafe.akka"      %% "akka-actor"       % "2.3.11",
  "com.datastax.cassandra"  % "cassandra-driver-core" % "2.1.6" withSources() withJavadoc()
    excludeAll (ExclusionRule(organization = "org.slf4j"),
                ExclusionRule(organization = "org.apache.spark"),
                ExclusionRule(organization = "com.twitter", name = "parquet-hadoop-bundle")),
  "com.github.nscala-time" %% "nscala-time"      % "1.2.0"
    excludeAll ExclusionRule(organization = "org.slf4j") withSources(),
  "com.datastax.spark"     %% "spark-cassandra-connector-embedded" % "1.3.0-M2"
    excludeAll (ExclusionRule(organization = "org.slf4j"),
                ExclusionRule(organization = "org.apache.spark"),
                ExclusionRule(organization = "com.twitter", name = "parquet-hadoop-bundle")),
  "com.datastax.spark"     %% "spark-cassandra-connector" % "1.3.0-M2"
    excludeAll (ExclusionRule(organization = "org.slf4j"),
                ExclusionRule(organization = "org.apache.spark"),
                ExclusionRule(organization = "com.twitter", name = "parquet-hadoop-bundle")),
  "org.slf4j"               % "slf4j-api"        % "1.6.1",
  "com.twitter"             % "jsr166e"          % "1.1.0",
  "org.slf4j"               % "slf4j-nop"        % "1.6.1" % "test",
  "org.scalatest"          %% "scalatest"        % "2.2.1" % "test"
    excludeAll ExclusionRule(organization = "org.slf4j")
)
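
In case the classpath itself is the culprit, the only Test-scoped sbt
settings I can think of adding are these (a sketch assuming sbt 0.13 keys;
neither is a confirmed fix for the exception):

// Run tests in a forked JVM with their own classloader, and serially, so
// two SparkContexts never share one JVM (assumption, not a known fix).
fork in Test := true
parallelExecution in Test := false
// "show test:dependencyClasspath" in the sbt shell lists what actually
// lands on the test classpath, e.g. a second Spark version pulled in
// transitively.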

and my Spark test settings:

(spark.kryo.registrator,com.my.spark.MyRegistrator)
(spark.eventLog.dir,)
(spark.driver.memory,16G)
(spark.kryoserializer.buffer.mb,512)
(spark.akka.frameSize,5)
(spark.shuffle.spill,false)
(spark.default.parallelism,8)
(spark.shuffle.consolidateFiles,false)
(spark.serializer,org.apache.spark.serializer.KryoSerializer)
(spark.shuffle.spill.compress,false)
(spark.driver.host,10.10.68.66)
(spark.akka.timeout,300)
(spark.driver.port,55328)
(spark.eventLog.enabled,false)
(spark.cassandra.connection.host,127.0.0.1)
(spark.cassandra.connection.ssl.enabled,false)
(spark.master,local[8])
(spark.cassandra.connection.ssl.trustStore.password,password)
(spark.fileserver.uri,http://10.10.68.66:55329)
(spark.cassandra.auth.username,username)
(spark.local.dir,/tmp/spark)
(spark.app.id,local-1436229075894)
(spark.storage.blockManagerHeartBeatMs,300000)
(spark.executor.id,<driver>)
(spark.cassandra.auth.password,)
(spark.storage.memoryFraction,0.5)
(spark.speculation,false)
(spark.tachyonStore.folderName,spark-8c33e537-3279-4059-8e4d-6902329bb4ca)
(spark.app.name,Count all entries 217885402)
(spark.shuffle.compress,false)
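
For context, these are dumped from the SparkConf the test fixture builds;
roughly, it is assembled like the following sketch (values mirror the dump
above; driver host/port, app id, executor id and the tachyon folder name are
filled in by Spark itself at startup):

import org.apache.spark.SparkConf

// Sketch of the test SparkConf; the remaining keys from the dump are set
// the same way via .set(key, value).
val conf = new SparkConf()
  .setMaster("local[8]")
  .setAppName("Count all entries 217885402")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", "com.my.spark.MyRegistrator")
  .set("spark.kryoserializer.buffer.mb", "512")
  .set("spark.driver.memory", "16G")
  .set("spark.default.parallelism", "8")
  .set("spark.cassandra.connection.host", "127.0.0.1")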
