Hi George,

does the Spark version you use on the cluster match the one Zeppelin is
built with <http://zeppelin.incubator.apache.org/docs/install/install.html>?
The API you are using was introduced only in Spark 1.4, which is not the
default version yet <https://github.com/apache/incubator-zeppelin/pull/99>
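If the cluster turns out to be on Spark 1.3, a rough (untested) sketch of the
same load using the older API that `read` replaced in 1.4 would be:

    // In a Zeppelin paragraph; sc and sqlContext are provided by the
    // Spark interpreter. "filename.csv" is the placeholder path from
    // your example.
    val df = sqlContext.load(
      "com.databricks.spark.csv",
      Map("path" -> "filename.csv", "header" -> "true"))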

You can check by running a simple Scala paragraph with `sc.version`
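For example, this paragraph prints the interpreter's Spark version:

    %spark
    // sqlContext.read exists only from 1.4.0 onwards
    println(sc.version)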

--
Alex

On Wed, Jun 24, 2015 at 11:51 AM, George Koshy <gkos...@gmail.com> wrote:

> Please help,
> I get this error:
> error: value read is not a member of org.apache.spark.sql.SQLContext
> val df = sqlContext.read.format("com.databricks.spark.csv")
>     .option("header", "true").load("filename.csv")
>
>
> My code is as follows:
> import org.apache.spark.SparkContext
>
> %dep
> com.databricks:spark-csv_2.11:1.0.3
>
> import org.apache.spark.sql.SQLContext
> val sqlContext = new SQLContext(sc)
> val df = sqlContext.read.format("com.databricks.spark.csv")
>     .option("header", "true").load("fileName.csv")
>
> --
> Sincerely!
> George Koshy,
> Richardson,
> in.linkedin.com/in/gkoshyk/
>



--
Kind regards,
Alexander.
