I am trying to use mapPartitions on a DataFrame.

Example:

import org.apache.spark.sql.DataFrame
import spark.implicits._

val df: DataFrame = Seq((1, "one"), (2, "two")).toDF("id", "name")
df.mapPartitions(_.take(1))

I am getting:

Unable to find encoder for type stored in a Dataset.  Primitive types (Int,
String, etc) and Product types (case classes) are supported by importing
spark.implicits._  Support for serializing other types will be added in
future releases.

Since DataFrame is just Dataset[Row], I was expecting an encoder for Row to
already be available.
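
One workaround that does seem to compile for me (assuming Spark 2.x, where
RowEncoder is available) is passing an explicit encoder built from the
DataFrame's schema, since mapPartitions takes its Encoder[U] as an implicit
parameter:

// sketch of the workaround, assuming Spark 2.x
import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.encoders.RowEncoder

// build an Encoder[Row] from the existing schema and pass it explicitly
val firstPerPartition = df.mapPartitions(_.take(1))(RowEncoder(df.schema))
firstPerPartition.show()

But I would expect the implicit resolution to handle this on its own.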

What's wrong with my code?


-- 

Dragiša Krsmanović | Platform Engineer | Ticketfly

dragi...@ticketfly.com

@ticketfly <https://twitter.com/ticketfly> | ticketfly.com/blog |
facebook.com/ticketfly
