Hello,

I am trying to find out the best way to use schemas in Spark. For
example, in Apache Pig we can LOAD a file AS (col1:chararray,
col2:double), etc.

In Spark/Scala, what's the best way to load a file with a schema?
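For context, here is a minimal sketch of one common approach in recent Spark versions: declaring an explicit schema with `StructType` and passing it to the reader, roughly analogous to Pig's `AS (col1:chararray, col2:double)`. The file name and column names below are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object SchemaLoadExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("schema-example")
      .master("local[*]")
      .getOrCreate()

    // Explicit schema, analogous to Pig's "AS (col1:chararray, col2:double)".
    val schema = StructType(Seq(
      StructField("col1", StringType),
      StructField("col2", DoubleType)
    ))

    // Apply the schema while reading a delimited text file (path is hypothetical).
    val df = spark.read.schema(schema).csv("data.csv")
    df.printSchema()

    spark.stop()
  }
}
```

With the schema supplied up front, Spark skips inference and the columns come back with the declared names and types.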

When I googled, I came across a nice tutorial about Parquet, which
looks promising, but before I invest my time in it, I would like to
know whether Parquet is the recommended way, or whether there are
other alternatives.
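One point worth noting about Parquet: it stores the schema alongside the data, so no AS-style declaration is needed on read. A small sketch, assuming Spark SQL is available and using hypothetical paths and column names:

```scala
import org.apache.spark.sql.SparkSession

object ParquetExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parquet-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Build a small DataFrame and write it out as Parquet.
    val df = Seq(("a", 1.0), ("b", 2.0)).toDF("col1", "col2")
    df.write.mode("overwrite").parquet("out.parquet")

    // On read, the schema is recovered from the Parquet metadata.
    val back = spark.read.parquet("out.parquet")
    back.printSchema()

    spark.stop()
  }
}
```

This self-describing property is one reason Parquet is often suggested for Spark workflows, though plain delimited files with an explicit schema work as well.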

Any guidance would be appreciated.  Thanks.
