One problem was that the peer class loading setting did not match between the
server and the client (started inside the Spark container). That required a
change in the server startup configuration, because the Spark client starts
with a default of "true". Finding this was a needle in a haystack.
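For anyone hitting the same mismatch, this is roughly the server-side change
I made; a minimal sketch, assuming the server node is started
programmatically rather than from a Spring XML file:

    import org.apache.ignite.Ignition
    import org.apache.ignite.configuration.IgniteConfiguration

    // Enable peer class loading on the server node so it matches
    // the setting on the Spark-side client.
    val cfg = new IgniteConfiguration()
    cfg.setPeerClassLoadingEnabled(true)
    Ignition.start(cfg)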
After setting peer class loading to true, I now get the stack trace pasted at
the bottom of this message when executing this line of code:
df1.write.format("jdbc").option("url", igniteUrl).option("dbtable",
igniteTable).mode(SaveMode.Append).save()
The "error" is clear enough, but when will batch updates be supported in
Ignite ?
If Spark DataFrame.write doesn't work with Ignite, then can you please
suggest a preferred way to write data from a Spark DataFrame into Ignite?
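For context, the alternative I am experimenting with in the meantime uses
IgniteContext/IgniteRDD from the ignite-spark module instead of the JDBC
writer. This is only a sketch; the cache name and the key extraction below
are placeholder assumptions, not from my actual job:

    import org.apache.ignite.configuration.IgniteConfiguration
    import org.apache.ignite.spark.IgniteContext

    // Placeholder cache name -- assumes a (String, String) cache is
    // already configured on the cluster.
    val ic = new IgniteContext(spark.sparkContext,
      () => new IgniteConfiguration())
    val igniteRdd = ic.fromCache[String, String]("myCache")

    // Turn the DataFrame into key/value pairs; the "id" key column
    // here is a placeholder assumption.
    val pairs = df1.rdd.map(row =>
      (row.getAs[String]("id"), row.mkString(",")))
    igniteRdd.savePairs(pairs)

If there is a better-supported path than this, I'd be glad to hear it.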
Is either of the two items mentioned here (peer class loading with Spark, and
DataFrame.write not working with Ignite) documented somewhere "close by" the
Spark discussions in the Ignite documentation? That would have saved me a lot
of time.
17/07/19 19:31:54 WARN JdbcUtils: Requested isolation level 1, but transactions are unsupported
17/07/19 19:31:54 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.sql.SQLFeatureNotSupportedException: Batch statements are not supported yet.
    at org.apache.ignite.internal.jdbc2.JdbcPreparedStatement.addBatch(JdbcPreparedStatement.java:218)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:589)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:670)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:670)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)