You'll want to use the spark-csv package. Its functionality is built into Spark 2.0 (spark.read.csv), so on 2.0 you don't need the external dependency; for Spark 1.x, add the package from Databricks.
The repository documentation has some good usage examples:
https://github.com/databricks/spark-csv
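For reference, a minimal sketch of both routes (the file path and options here are placeholders, not from your setup):

```scala
// Spark 2.0+: CSV support is built in, no external package needed.
val df = spark.read
  .option("header", "true")       // first line holds column names
  .option("inferSchema", "true")  // infer column types from the data
  .csv("people.csv")

// Spark 1.x with the spark-csv package:
// val df = sqlContext.read
//   .format("com.databricks.spark.csv")
//   .option("header", "true")
//   .load("people.csv")
```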
Thanks,
Kevin
On Thu, Sep 22, 2016 at 8:40 PM, Dan Bikle wrote:
hello spark-world,
I am new to Spark.
I noticed this online example:
http://spark.apache.org/docs/latest/ml-pipeline.html
I am curious about this syntax:
// Prepare training data from a list of (label, features) tuples.
val training = spark.createDataFrame(Seq(
  (1.0, Vectors.dense(0.0, 1.1, 0.1)),
  (0.0, Vectors.dense(2.0, 1.0, -1.0)),
  (0.0, Vectors.dense(2.0, 1.3, 1.0)),
  (1.0, Vectors.dense(0.0, 1.2, -0.5))
)).toDF("label", "features")
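
[For context: the Seq in that example is an ordinary Scala collection of tuples, and createDataFrame turns each tuple into a row, one column per tuple element; toDF then assigns column names. A minimal self-contained sketch of the same pattern, with no MLlib dependency (the app name, master, and column names are assumptions for illustration):]

```scala
import org.apache.spark.sql.SparkSession

object TupleSeqDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tuple-seq-demo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Each (Double, Double) tuple becomes one row with two columns;
    // toDF supplies the column names.
    val df = spark.createDataFrame(Seq(
      (1.0, 0.1),
      (0.0, 2.0)
    )).toDF("label", "feature")

    df.show()
    spark.stop()
  }
}
```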