>
> Is it possible to add a new partition to a persistent table using Spark
> SQL? The following call works and data gets written in the correct
> directories, but no partition metadata is added to the Hive metastore.
>
I believe that if you use Hive's dynamic partitioned insert syntax, we will
fall back on the metastore and do the update.
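For example, something like the following should go through the metastore
path. This is just a rough sketch: the table and column names (events,
staging_events, dt) are made up, and it assumes a HiveContext with dynamic
partitioning enabled.

    hiveContext.sql("SET hive.exec.dynamic.partition = true")
    hiveContext.sql("SET hive.exec.dynamic.partition.mode = nonstrict")
    // Dynamic partitioned insert: the partition column (dt) is listed in
    // PARTITION (...) without a value and is supplied by the SELECT.
    hiveContext.sql("""
      INSERT INTO TABLE events PARTITION (dt)
      SELECT id, name, dt FROM staging_events
    """)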

> In addition I see nothing preventing any arbitrary schema being appended
> to the existing table.
>
This is arguably a feature: we do automatic schema discovery and merging
when loading a new Parquet table.
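To illustrate (a sketch with made-up paths; depending on your Spark version
you may need to request merging explicitly):

    // Two writes under the same root, where the second adds a new column.
    df1.write.parquet("data/t/part=1")   // schema: id, value
    df2.write.parquet("data/t/part=2")   // schema: id, value, label
    // Reading the root directory merges the two Parquet schemas into one.
    val merged = sqlContext.read.option("mergeSchema", "true").parquet("data/t")
    merged.printSchema()  // id, value, label, plus the part partition column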

> Does Spark SQL not need partition metadata when reading data back?
>
No, we dynamically discover it in a distributed job when the table is
loaded.
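Concretely, given a directory layout like this (paths made up), the dt
column and its values are inferred from the directory names when the table
is read:

    data/events/dt=2015-01-01/part-00000.parquet
    data/events/dt=2015-01-02/part-00000.parquet

    val df = sqlContext.read.parquet("data/events")
    df.printSchema()  // includes dt, discovered from the paths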
