Regards,
Anand.C
From: Michael Armbrust [mailto:mich...@databricks.com]
Sent: Friday, November 13, 2015 2:25 AM
To: Chandra Mohan, Ananda Vel Murugan
Cc: Michal Klos; user
Subject: Re: Partitioned Parquet based external table
Note that if you read in the table using sqlContext.read.parquet(...) or if
> *From:* Michal Klos [mailto:michal.klo...@gmail.com]
> *Sent:* Thursday, November 12, 2015 6:32 PM
> *To:* Chandra Mohan, Ananda Vel Murugan
> *Cc:* user
> *Subject:* Re: Partitioned Parquet based external table
>
>
>
> You must add the partitions to the Hive table with something like "alter
> table your_table add if not exists partition (country='us');".
> If you have dynamic partitioning turned on, you can do 'msck repair table
> your_table' to recover the partitions.
Subject: Re: Partitioned Parquet based external table
You must add the partitions to the Hive table with something like "alter table
your_table add if not exists partition (country='us');".
If you have dynamic partitioning turned on, you can do 'msck repair table
your_table' to recover the partitions.
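From Spark, both of these statements can be issued through a HiveContext. A minimal sketch, assuming the Spark 1.5 Java API discussed in this thread (the table name `your_table` follows the example above; the app name is arbitrary):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.sql.hive.HiveContext;

public class AddPartitions {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("AddPartitions");
        SparkContext sc = new SparkContext(conf);
        HiveContext hiveContext = new HiveContext(sc);

        // Option 1: register one partition explicitly.
        hiveContext.sql(
            "ALTER TABLE your_table ADD IF NOT EXISTS PARTITION (country='us')");

        // Option 2: scan the table's directory and recover every
        // partition laid out as country=<value>/ sub-directories.
        hiveContext.sql("MSCK REPAIR TABLE your_table");
    }
}
```

MSCK REPAIR is convenient after a bulk write that produced many partition directories at once; ALTER TABLE ... ADD PARTITION is cheaper when only one new partition was written.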
I would recommend reviewing the Hive documentation on partitioned tables.
Hi,
I am using Spark 1.5.1, working from this example:
https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java
I have slightly modified this example to create a partitioned Parquet file.
Instead of this line
schemaPeople.write().parquet("people.parquet");
I use t
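The modification described above is presumably along these lines; a sketch only, assuming the `schemaPeople` DataFrame from JavaSparkSQL.java extended with a hypothetical `country` partition column:

```java
// Instead of:
//   schemaPeople.write().parquet("people.parquet");
// write one sub-directory per partition value, e.g.
//   people.parquet/country=us/part-*.parquet
schemaPeople.write()
    .partitionBy("country")
    .parquet("people.parquet");
```

A directory layout like this is what the `alter table ... add partition` / `msck repair table` advice above then registers with the Hive metastore.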