I found this code in IgniteContext.scala:

        // Start ignite server node on each worker in server mode.
        sparkContext.parallelize(1 to workers, workers).foreachPartition(it ⇒ ignite())

It looks like starting a server node on each executor is the expected behavior.
However, I want to ingest data into an existing Ignite cluster from a Spark job,
and when the Spark job finishes its executors are revoked.
If a server node is launched on each executor, any data stored on those server
nodes will be lost when the job ends.
Please advise how I can accomplish this requirement.
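For context, what I am trying to do looks roughly like the sketch below. I am assuming the `standalone = true` flag of `IgniteContext` makes the executors attach to the already-running cluster as clients instead of starting embedded server nodes; the Spring config path and cache name are placeholders for my setup:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.ignite.spark.IgniteContext

object IngestToExistingCluster {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ignite-ingest"))

    // standalone = true (assumption): executors join the external Ignite
    // cluster rather than starting server nodes that die with the executors.
    val ic = new IgniteContext(sc, "example-ignite-config.xml", standalone = true)

    // IgniteRDD backed by a cache on the external cluster; data written
    // here should survive the Spark job because it lives on the cluster's
    // own server nodes, not in the executors.
    val cacheRdd = ic.fromCache[Int, String]("myCache")
    cacheRdd.savePairs(sc.parallelize(1 to 1000).map(i => (i, s"value-$i")))

    ic.close()
    sc.stop()
  }
}
```

This depends on a running Ignite cluster and the ignite-spark module on the classpath, so it is only a sketch of the intended usage, not a tested program.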

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
