Hi TD,
  Thanks for ur help...i am able to convert map to records using case class.
I am left with doing some aggregations. I am trying to do some SQL type
operations on my records set. My code looks like

 case class Record(ID:Int,name:String,score:Int,school:String)
//val records = jsonf.map(m => Record(m(0),m(1),m(2),m(3)))
val fields = jsonf.map(data =>
(data("type"),data("name"),data("score"),data("school")))
val results = fields.transform((rdd,time) => {
 rdd.registerAsTable("table1")
 sqlc.sql(select * from table1)
})

when i am trying to compile my code it  giving me 
jsonfile.scala:30: value registerAsTable is not a member of
org.apache.spark.rdd.RDD[(Any, Any, Any, Any)]

Please let me know if i am missing any thing.
And using Spark Streaming can i really use sql kind of operations on
Dstreams?





--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-Json-file-groupby-function-tp9618p9714.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

Reply via email to