All of that support code uses Hadoop-related classes, like
OutputFormat, to write data in the Parquet format. So there's a Hadoop
code dependency in play here even if the bytes aren't going to HDFS.
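Concretely, that usually means declaring a hadoop-client dependency whose API version matches what the Parquet output code was compiled against (mismatched Hadoop 1 vs. Hadoop 2 interfaces are a common cause of IncompatibleClassChangeError). A minimal sketch of the fix in the project's build.sbt, with illustrative version numbers:

```scala
// build.sbt -- a sketch, not an exact fix; the version numbers here
// are assumptions and should match your Spark build's Hadoop version.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql" % "1.0.0",
  // Pin hadoop-client explicitly so Parquet's Hadoop OutputFormat
  // classes resolve against one consistent Hadoop API version,
  // even when running purely in local mode with no HDFS.
  "org.apache.hadoop" % "hadoop-client" % "2.2.0"
)
```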

On Tue, Jun 3, 2014 at 10:10 PM, k.tham <kevins...@gmail.com> wrote:
> I've read through that thread, and it seems that in his case he needed to
> add a particular hadoop-client dependency.
> However, I don't think I should be required to do that, as I'm not reading
> from HDFS.
>
> I'm just running a straight up minimal example, in local mode, and out of
> the box.
>
> Here's an example minimal project that reproduces this error:
>
> https://github.com/ktham/spark-parquet-example
>
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/SchemaRDD-s-saveAsParquetFile-throws-java-lang-IncompatibleClassChangeError-tp6837p6846.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
