Hi,

I am using Spark Streaming to write aggregated output as Parquet files
to HDFS using SaveMode.Append. I have an external table created like this:


CREATE TABLE IF NOT EXISTS rolluptable
USING org.apache.spark.sql.parquet
OPTIONS (
  path "hdfs:////"
);
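For reference, the write side looks roughly like this (a sketch only; `aggregatedDF` is a placeholder name for the per-batch aggregated DataFrame, not my exact code):

```scala
import org.apache.spark.sql.SaveMode

// Inside the streaming job: each batch's aggregated DataFrame is
// appended as new Parquet files under the table's path.
// `aggregatedDF` and the path are placeholders for illustration.
aggregatedDF.write
  .mode(SaveMode.Append)
  .parquet("hdfs:////")
```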

My impression was that, with an external table, queries should also fetch
data from newly added Parquet files. However, the newly written files do
not seem to be picked up.

Dropping and recreating the table every time works, but that is not a viable solution.


Please suggest how my table can also pick up the data from the newer files.



Thanks !!
Abhi
