Just as a follow-up to this: IgniteConfiguration has properties, e.g. CountDownLatch, which aren't serializable, and HDFS Path is not serializable either, so there's no clean way to construct an IgniteContext without breaking backwards compatibility.
Maybe update this page: https://apacheignite-fs.readme.io/docs/ignitecontext-igniterdd to mention that you can't use an IgniteContext within any Spark map, filter, etc. So code such as this, e.g. using BinaryObjects, will not work:

    val pairRdd = rdd.map(x => {
      val builder = igniteContext.ignite.binary.builder("DT1")
      builder.setField("id", x.toString)
      builder.setField("name", "test-" + x.toString)
      val binObj = builder.build
      binObj
    }).zipWithIndex.map(r => (r._2, r._1))

Shame that Ignite doesn't really work on Spark. Hopefully one day!

From: Naden Franciscus <[email protected]>
Date: Monday, 2 May 2016 at 4:32 PM
To: "[email protected]" <[email protected]>
Subject: Spark/YARN

Hey guys,

So it looks like using Spark/Ignite on YARN together simply doesn't work. Many of us have Hadoop appliances where we aren't allowed to install anything on the nodes, so the only option is YARN, which, barring a few bugs, seems to work okay. But the IgniteContext within Spark doesn't allow you to read configuration files from YARN.

Since you allow users to pass in an IgniteConfiguration, we have tried to manually set configuration on the POJOs: https://github.com/apache/ignite/blob/master/modules/spark/src/main/scala/org/apache/ignite/spark/IgniteContext.scala

But during any Spark distributed operation it will attempt to serialise this, which is not possible since most of the classes contained within IgniteConfiguration, e.g. TcpDiscoverySpi, are not serializable. I am going to go through and see how many classes will need to be marked serializable (it could be dozens), but a call will need to be made:

1. Mark everything within IgniteConfiguration as serializable.
2. Force ALL users of IgniteContext to either read config from HDFS or from a local filesystem. Both would go through the Spring layer.

What's the best way to get a decision on this?

Cheers,
Naden
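For anyone hitting the same wall, here is a sketch of the two construction paths the thread is weighing, assuming the IgniteContext constructor overloads that take either a Spring XML path or a `() => IgniteConfiguration` closure (check the IgniteContext source linked above for the exact signatures in your Ignite version; older releases parameterize it as `IgniteContext[K, V]`). The closure variant sidesteps serializing the IgniteConfiguration itself: Spark ships only the closure, and each executor builds its own configuration locally. The file path and app name below are illustrative.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.IgniteContext

object IgniteContextSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ignite-sketch"))

    // Option A: point at a Spring XML config; each executor loads the file
    // locally, so nothing non-serializable crosses the wire. This is the
    // "read config from HDFS or a local filesystem" path from the thread.
    val fromXml = new IgniteContext(sc, "config/ignite-config.xml")

    // Option B: pass a closure that builds the IgniteConfiguration on the
    // executor. Only the closure is serialized, never the configuration
    // object (or its non-serializable TcpDiscoverySpi and friends).
    val fromClosure = new IgniteContext(sc, () => new IgniteConfiguration())
  }
}
```

Neither variant helps with the original complaint, though: the IgniteContext itself still can't be referenced inside a map or filter closure, since Spark would then try to serialize it along with the closure.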
