Re: Parameterized types and Datasets - Spark 2.1.0

2017-02-01 Thread Don Drake
I imported that as my first command in my previous email. I'm using a spark-shell.

scala> import org.apache.spark.sql.Encoder
import org.apache.spark.sql.Encoder

scala>

Any comments regarding importing implicits in an application? Thanks. -Don

On Wed, Feb 1, 2017 at 6:10 PM, Michael
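On the question of importing implicits in an application: the spark-shell creates a `spark` session and imports `spark.implicits._` automatically, but a compiled application has to do both itself, and the import must come after the `SparkSession` value exists. A minimal sketch, assuming Spark 2.1.0 in local mode (the app and class names here are illustrative, not from the thread):

```scala
import org.apache.spark.sql.SparkSession

// Case class at top level (not inside a method) so Spark can derive an Encoder.
case class RawTemp(f1: String, f2: String, temp: String)

object ImplicitsApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("implicits-example")
      .master("local[*]")
      .getOrCreate()

    // In an application, the implicit Encoders come from this specific
    // SparkSession value, so the import must appear after it is created.
    import spark.implicits._

    val ds = Seq(RawTemp("a", "b", "20.5")).toDS()
    ds.show()
    spark.stop()
  }
}
```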

Re: Parameterized types and Datasets - Spark 2.1.0

2017-02-01 Thread Michael Armbrust
This is the error; you are missing an import:

:13: error: not found: type Encoder
       abstract class RawTable[A : Encoder](inDir: String) {

Works for me in a REPL.

Re: Parameterized types and Datasets - Spark 2.1.0

2017-02-01 Thread Don Drake
Thanks for the reply. I did give that syntax, [A : Encoder], a try yesterday, but I kept getting this exception in a spark-shell and a Zeppelin browser.

scala> import org.apache.spark.sql.Encoder
import org.apache.spark.sql.Encoder

scala>

scala> case class RawTemp(f1: String, f2: String, temp:

Re: Parameterized types and Datasets - Spark 2.1.0

2017-02-01 Thread Michael Armbrust
You need to enforce that an Encoder is available for the type A using a context bound.

import org.apache.spark.sql.Encoder

abstract class RawTable[A : Encoder](inDir: String) {
  ...
}

On Tue, Jan 31, 2017 at 8:12 PM, Don Drake
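To see why the context bound fixes the problem, it helps to know what `[A : Encoder]` desugars to: an extra implicit parameter of type `Encoder[A]`. The sketch below demonstrates the mechanism with a toy stand-in trait so it runs without Spark; the real trait is `org.apache.spark.sql.Encoder`, and the member names here are illustrative.

```scala
// Toy stand-in for Spark's Encoder, only to show how a context bound works.
trait Encoder[A] { def name: String }

// `abstract class RawTable[A : Encoder](inDir: String)` is sugar for
// `abstract class RawTable[A](inDir: String)(implicit ev: Encoder[A])`.
abstract class RawTable[A : Encoder](inDir: String) {
  // The implicit evidence can be summoned anywhere in the class body.
  def encoderName: String = implicitly[Encoder[A]].name
}

case class RawTemp(f1: String, f2: String, temp: String)

object Demo {
  // Providing the implicit evidence is what lets the subclass compile.
  implicit val rawTempEncoder: Encoder[RawTemp] =
    new Encoder[RawTemp] { val name = "RawTemp" }

  class TempTable(inDir: String) extends RawTable[RawTemp](inDir)

  def main(args: Array[String]): Unit = {
    val t = new TempTable("/tmp/raw")
    println(t.encoderName)  // prints "RawTemp"
  }
}
```

With the real Spark `Encoder`, the evidence comes from `import spark.implicits._` rather than a hand-written implicit val, but the resolution mechanism is identical.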

Parameterized types and Datasets - Spark 2.1.0

2017-01-31 Thread Don Drake
I have a set of CSV files that I need to perform ETL on, with the plan to re-use a lot of code between each file in a parent abstract class. I tried creating the following simple abstract class that will have a parameterized type of a case class that represents the schema being read in. This won't
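The pattern being described, a parameterized base class whose subclasses share the CSV-loading code, can be sketched as follows. This is a hedged reconstruction, not the original post's code: the `spark` parameter, the `load` method, and the column names are assumptions, and the header option presumes the CSV column names match the case class fields.

```scala
import org.apache.spark.sql.{Dataset, Encoder, SparkSession}

// Base class shared across files: the `[A : Encoder]` context bound is what
// lets .as[A] produce a typed Dataset for any case class A.
abstract class RawTable[A : Encoder](spark: SparkSession, inDir: String) {
  def load(): Dataset[A] =
    spark.read
      .option("header", "true")  // assumes headers matching the field names
      .csv(inDir)
      .as[A]
}

// One concrete schema per file type; illustrative field names.
case class RawTemp(f1: String, f2: String, temp: String)

class TempTable(spark: SparkSession, inDir: String)
    extends RawTable[RawTemp](spark, inDir)
```

A caller would then write `new TempTable(spark, "/data/temps").load()` after `import spark.implicits._`, which supplies the `Encoder[RawTemp]` evidence the context bound requires.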