It is not Spark SQL that throws the error; it is the underlying database or
storage layer that throws it. Spark acts as an ETL tool here. What is the
underlying DB where the table resides? Does it support concurrent writes?
Please send the full error to this list.
HTH
Mich Talebzadeh,
Solutions Architect/Engineer
Hello,
I'm building an application on Spark SQL. The cluster is set up in
standalone mode with HDFS as storage. The only Spark application running is
the Spark Thrift Server using FAIR scheduling mode. Queries are submitted
to Thrift Server using beeline.
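For context, a setup like the one described above is typically brought up as follows; the host names, port, and master URL here are illustrative assumptions, not details from the original message:

```shell
# Start the Spark Thrift Server against the standalone master,
# with the FAIR scheduler enabled (assumed host names/ports).
$SPARK_HOME/sbin/start-thriftserver.sh \
  --master spark://master-host:7077 \
  --conf spark.scheduler.mode=FAIR

# Connect from beeline over JDBC; 10000 is the Thrift Server's
# default listening port.
beeline -u jdbc:hive2://thrift-server-host:10000
```

With FAIR mode enabled, queries submitted from multiple beeline sessions share executor resources rather than running strictly first-in-first-out.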
I have multiple queries that insert rows