Thanks Michael.

I used Parquet files, and they solved my initial problem to some extent
(i.e. loading data from one context and reading it from another
context).

However, I ran into another issue. Every time I create a JavaSQLContext, I
need to reload the Parquet file using its parquetFile method (to create the
JavaSchemaRDD) and re-register it as a temp table with registerTempTable
before I can query it. This becomes a problem when our web application runs
in cluster mode, because I would have to load the Parquet files on each
node. Could you please advise me on this?
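For reference, this is the per-context setup I mean, sketched with the Spark 1.1-era Java API (the file path, table name, and app name below are placeholders, not my actual values):

```java
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.api.java.JavaSQLContext;
import org.apache.spark.sql.api.java.JavaSchemaRDD;

public class ParquetQueryExample {
    public static void main(String[] args) {
        // A fresh context per node/JVM -- this is the part that repeats.
        JavaSparkContext sc = new JavaSparkContext("local", "parquet-demo");
        JavaSQLContext sqlContext = new JavaSQLContext(sc);

        // Each new JavaSQLContext must reload the Parquet file...
        JavaSchemaRDD parquetData =
            sqlContext.parquetFile("hdfs:///data/people.parquet");

        // ...and re-register the temp table before it can be queried,
        // because temp tables are scoped to the SQLContext that created them.
        parquetData.registerTempTable("people");
        JavaSchemaRDD results = sqlContext.sql("SELECT * FROM people");

        sc.stop();
    }
}
```

Temp tables being scoped to a single SQLContext is what forces the reload on every node in my setup.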

Thanks in advance.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/loading-querying-schemaRDD-using-SparkSQL-tp18052p18841.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
