Hi,

I have a small question. I want to develop a Spark-based application, but
Spark depends on the hadoop-client library. I think it's not necessary
(Spark standalone), so I excluded it from my sbt file. The result is
interesting: the trait where I create the SparkContext no longer compiles.
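The exclusion in my build.sbt looks roughly like this (a sketch from
memory; the exact Spark version in my real file may differ):

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0" exclude("org.apache.hadoop", "hadoop-client")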

The error:
...
scala.reflect.internal.Types$TypeError: bad symbolic reference. A signature in SparkContext.class refers to term mapred
[error] in package org.apache.hadoop which is not available.
[error] It may be completely missing from the current classpath, or the version on the classpath might be incompatible with the version used when compiling SparkContext.class.
...

I use this trait for integration tests. I'm on Windows and I don't want
to use Hadoop for integration tests. How can I solve this?
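For context, the trait is roughly this (a simplified sketch; SparkContextProvider is just an illustrative name, my real trait has more setup). It's the `new SparkContext(conf)` line that triggers the error above once hadoop-client is excluded:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: provides a local, in-process SparkContext for integration tests,
// so no Hadoop installation should be needed on Windows.
trait SparkContextProvider {
  lazy val sc: SparkContext = {
    val conf = new SparkConf()
      .setMaster("local[*]")          // run everything in-process
      .setAppName("integration-test")
    new SparkContext(conf)
  }
}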

Thanks
Janos
