I'm looking to do the following with my Spark DataFrame:
(1) val df1 = df.groupBy(<long timestamp column>)
(2) val df2 = df1.sort(<long timestamp column>)
(3) val df3 = df2.mapPartitions(<set of aggregating functions>)

I can already groupBy the column (in this case a long timestamp), but I have
no idea how to ensure the returned GroupedData is then sorted by the same
timestamp and then mapped to my set of aggregating functions.

Appreciate any help
Thanks


