Hi all,
I hit the error below when caching a partitioned Parquet table. It seems that Spark is trying to extract the partition key from inside the Parquet files, where it is not found. Yet other queries run successfully, even ones that select the partition key. Is this a bug in Spark SQL? Is there any workaround?
Hi Xu-dong,
That's probably because your table's partition paths don't look like
hdfs://somepath/key=value/*.parquet. Spark tries to extract the
partition key's value from the path while caching, and the exception is
thrown because it can't find one there.
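To illustrate, here is a minimal sketch of the Hive-style layout that Spark's partition discovery expects; the table path /tmp/demo_table, the partition column name "key", and the file names are all made up for this example:

```shell
# Each partition value gets its own key=value directory under the
# table root; Spark parses the partition value out of the path.
mkdir -p /tmp/demo_table/key=1 /tmp/demo_table/key=2

# Parquet data files live inside the partition directories
# (empty placeholder files here, just to show the layout).
touch /tmp/demo_table/key=1/part-00000.parquet
touch /tmp/demo_table/key=2/part-00000.parquet

# List the partition directories Spark would discover.
ls -d /tmp/demo_table/key=*
```

If your files sit flat under the table root instead (e.g. /tmp/demo_table/part-00000.parquet), there is no key=value segment in the path for Spark to parse, which matches the exception described above.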
On Mon, Jan 26, 2015 at 10:45 AM,