We are loading Parquet data as temp tables, but we are wondering if there is a way
to add a partition to the data without going through Hive (we still want to
use Spark's Parquet support rather than Hive's SerDe). The data looks like:
/date1/file1, /date1/file2, ..., /date2/file1,
/date2/file2, ..., /daten/filem
Ah awesome, thanks!!
On Thu, Nov 20, 2014 at 3:01 PM, Michael Armbrust mich...@databricks.com
wrote:
In 1.2, by default we use Spark's parquet support instead of Hive's when the
SerDe contains the word "Parquet". This should work with Hive partitioning.
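
Michael's suggestion can be sketched roughly as follows. This is a hypothetical example, assuming Spark 1.2 with a HiveContext and a Hive metastore available; the table name `events`, the column names, and the `/date1` location are illustrative, not from the thread. The partitions are registered through Hive DDL, but because the table's SerDe name contains "Parquet", Spark 1.2 reads it with its native parquet code path (controlled by `spark.sql.hive.convertMetastoreParquet`, which defaults to true):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object PartitionedParquetExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("partitioned-parquet"))
    val hiveContext = new HiveContext(sc)

    // Declare an external partitioned table over the parquet directories.
    // Table/column names here are made up for illustration.
    hiveContext.sql("""
      CREATE EXTERNAL TABLE IF NOT EXISTS events (id BIGINT, payload STRING)
      PARTITIONED BY (dt STRING)
      ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
      STORED AS
        INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
        OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
    """)

    // Register one directory of files as a partition.
    hiveContext.sql(
      "ALTER TABLE events ADD IF NOT EXISTS PARTITION (dt='date1') LOCATION '/date1'")

    // The scan itself goes through Spark's parquet support, not Hive's SerDe,
    // because the SerDe name above contains "Parquet".
    hiveContext.sql("SELECT count(*) FROM events WHERE dt = 'date1'").collect()
  }
}
```

A sketch only: it needs a running Spark 1.2 deployment with Hive support compiled in, and later Spark versions (1.3+) can discover `key=value` partition directories without any Hive DDL at all.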
On Thu, Nov 20, 2014 at 10:33 AM, Sadhan