Hello,

I have the following Spark SQL query:

SELECT column_name, * FROM table_name;

I have multiple Spark clusters, and this query has been running daily on all
of them. After a recent redeployment, it fails on just one of the clusters
with the following exception:
AnalysisException: cannot resolve '`column_name`' given input columns

The columns listed after the exception do include "column_name". The data
and schema are the same on all clusters. The query works when run through
beeline but has been failing when executed through Zeppelin. I'd appreciate
any insight you can provide.
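Not part of the original report, but since the exception claims the column
cannot be resolved even though it appears in the listed input columns, one
quick sanity check is to look for invisible differences (case, surrounding
whitespace) between the name in the query and the names printed in the
exception. A minimal sketch, with a hypothetical column list standing in for
the real one:

```python
# Name used in the query.
target = "column_name"

# Hypothetical column list as it might appear in the AnalysisException
# message; "column_name " carries a trailing space for illustration.
reported_columns = ["id", "column_name ", "value"]

# Flag names that differ only by case or surrounding whitespace.
near_matches = [
    c for c in reported_columns
    if c != target and c.strip().lower() == target.lower()
]
for c in near_matches:
    print(f"near-match: {c!r} (length {len(c)})")
```

If a near-match shows up, the mismatch (and why only one environment is
affected) may come down to a session setting such as `spark.sql.caseSensitive`
differing between the Zeppelin interpreter and beeline after the
redeployment.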

Thanks,

Josh
