We want to persist the table schema of a Parquet file so that we can use the
spark-sql CLI on that table later. Is that possible, or is the spark-sql CLI
only usable with tables in the Hive metastore? We are reading the Parquet data
using this example:

// Read in the parquet file created above. Parquet files are
// self-describing so the schema is preserved.
// The result of loading a Parquet file is also a SchemaRDD.
val parquetFile = sqlContext.parquetFile("people.parquet")

// Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile")
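For context, registerTempTable only registers the table for the lifetime of the current SQLContext, so the spark-sql CLI (which reads table definitions from the Hive metastore) cannot see it afterwards. A minimal sketch of persisting the definition instead, assuming Spark 1.2+ with Hive support and the data source DDL syntax (the table name `people` is illustrative):

// Use a HiveContext so table definitions go into the Hive metastore,
// which is the same catalog the spark-sql CLI reads.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// Create a persistent table backed by the existing Parquet files.
// The schema is inferred from the self-describing Parquet data, and the
// definition survives the session, so spark-sql can query it later.
hiveContext.sql(
  """CREATE TABLE people
    |USING org.apache.spark.sql.parquet
    |OPTIONS (path 'people.parquet')""".stripMargin)

After this, `SELECT * FROM people` should work from the spark-sql CLI, provided it is pointed at the same metastore (e.g. the same hive-site.xml or metastore_db).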
