Hi All,

I am currently trying to build a Spark job that converts a CSV file into
Parquet.  From what I have seen, Spark SQL looks like the way to go: load
the CSV file into an RDD and convert it into a SchemaRDD by supplying the
schema via a case class.
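
For reference, this is roughly the hard-coded version I have in mind (a
minimal sketch against a Spark 1.x SQLContext; the two-field Person layout
is just a placeholder, not my real schema):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.SQLContext

  // Hard-coded schema: exactly what I'd like to avoid.
  case class Person(name: String, age: Int)

  object CsvToParquetCaseClass {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("csv-to-parquet"))
      val sqlContext = new SQLContext(sc)
      // Implicitly converts an RDD of case classes into a SchemaRDD.
      import sqlContext.createSchemaRDD

      // Parse each CSV line into the case class; the schema is inferred
      // from the case class fields.
      val people = sc.textFile("people.csv")
        .map(_.split(","))
        .map(p => Person(p(0), p(1).trim.toInt))

      people.saveAsParquetFile("people.parquet")
    }
  }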

What I want to avoid is hard-coding the case class itself.  I want to
reuse this job and pass in a file that contains the schema, e.g. an Avro
.avsc file or something similar.  Is there a way to do this?  I couldn't
figure out how to create a case class dynamically, so if there are ways
around needing a case class at all, I am definitely open to trying those
as well.
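
Roughly the shape I'm hoping for is something like the sketch below, where
the schema is built at runtime instead of compiled in.  I'm not sure this
is the right direction; it assumes Spark 1.1/1.2's SQLContext.applySchema,
and the plain-text "fields.txt" is just a stand-in for whatever schema file
gets passed to the job (an .avsc parser could presumably plug in at the
same spot):

  import scala.io.Source
  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql._

  object CsvToParquetDynamic {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("csv-to-parquet-dynamic"))
      val sqlContext = new SQLContext(sc)

      // Build the schema at runtime: one StructField per name read from a
      // file on the driver.  Everything is a string here for simplicity; a
      // real job would carry types alongside the names.
      val fieldNames = Source.fromFile("fields.txt").mkString.trim.split("\\s+")
      val schema = StructType(fieldNames.map(name => StructField(name, StringType, nullable = true)))

      // Parse the CSV into Rows that line up with the schema, then apply it.
      val rows = sc.textFile("input.csv").map(_.split(",")).map(cols => Row(cols: _*))
      val schemaRDD = sqlContext.applySchema(rows, schema)
      schemaRDD.saveAsParquetFile("output.parquet")
    }
  }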



