Hi guys,

My code is failing with this error:

java.lang.Exception: S2V: FATAL ERROR for job S2V_job9197956021769393773.
Job status information is available in the Vertica table
test.S2V_JOB_STATUS_USER_NGOYAL.  Unable to save intermediate orc files to
HDFS path:hdfs://hadoop-dw2-nn.smf1.com/tmp/S2V_job9197956021769393773.
Error message: org.apache.spark.sql.AnalysisException: The ORC data source
must be used with Hive support enabled;

This is how I am writing the DataFrame to Vertica. I followed the steps in this blog post
<https://www.vertica.com/blog/integrating-apache-spark/>.

// connectionProperties holds the Vertica connector options
// (host, db, user, password, table, etc.) per the blog post above
dataFrame
  .write
  .format("com.vertica.spark.datasource.DefaultSource")
  .options(connectionProperties)
  .mode(SaveMode.Append)
  .save()
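
Since the error mentions Hive support, I am wondering whether the SparkSession itself needs to be built with enableHiveSupport() so the connector's intermediate ORC write can succeed. This is only a sketch of what I mean; the app name and builder options below are placeholders, not my actual job config:

import org.apache.spark.sql.SparkSession

// Sketch only: placeholder builder, not my real job setup.
// Would enabling Hive support here avoid the ORC/Hive error?
val spark = SparkSession
  .builder()
  .appName("vertica-save-example")  // placeholder name
  .enableHiveSupport()              // requires Hive classes/config on the classpath
  .getOrCreate()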

Does anybody have any idea how to fix this?


Thanks
Nikhil
