I have pushed the sample application to GitHub:
<https://github.com/akshaymhetre/SparkJobAsIgniteService>. Please take a look.

Also, I was able to get rid of the hang on the spark.close() API call by
adding the "igniteInstanceName" property. I am not sure if this is the right
approach, though.
I came up with this while debugging the issue. What I observed is that saving
a DataFrame to Ignite needs an Ignite context. The write first checks whether
a context already exists; if it does, it reuses that context to save the
DataFrame, and the spark.close() call then tries to close that same context.

Since I am trying to run this Spark job as an Ignite service, I want it to
keep running continuously, so closing the Ignite context is what caused the
hang. To make the DataFrame APIs create a new context every time, I added the
"igniteInstanceName" property to the configuration that I pass to the Ignite
DataFrame APIs.

Although this resolves the hang, I am still seeing some socket connection and
unmarshalling exceptions. Do I need to worry about them? How can I get rid of
them?

Also, are there any trade-offs to running Spark as an Ignite service when it
is executed on YARN?



