Hi,
it runs fine in a single VM, but if I build the uber-jar (containing the
application and all required Ignite dependencies) and submit it to the
Flink cluster (with master and worker nodes in different VMs), the
cache configuration isn't loaded:
2018-06-01 22:05:30,665 INFO org.apache.ignite.internal.IgniteKernal
- Config URL: n/a
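For reference, this is roughly how the job wires the sink in (a sketch from
memory; "DemoCache" and "flink-config.xml" are my names, and I'm assuming
the (cacheName, igniteCfgFile) constructor of the ignite-flink IgniteSink I
build against):

    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

    // As far as I can tell, IgniteSink feeds Map records into the streamer.
    DataStream<Map<String, String>> stream = ...; // my actual input source

    IgniteSink<Map<String, String>> igniteSink =
        new IgniteSink<>("DemoCache", "flink-config.xml");

    stream.addSink(igniteSink);
    env.execute();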
It seems that both fields in IgniteSink:

    /** Ignite grid configuration file. */
    private static String igniteCfgFile;

    /** Cache name. */
    private static String cacheName;

are null.
Maybe the sink isn't initialized correctly after being serialized from the
master to the worker node? I'm not familiar with the Kryo serialization
used by Apache Flink, but since static fields belong to the class rather
than the instance, I'd expect them to be skipped during serialization and
to arrive unset on the worker JVM.
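If that's the cause, I suppose the fix is to keep the configuration in
instance fields and start Ignite in open() on the worker. A minimal sketch
of what I mean (MyIgniteSink and the key handling are placeholders of mine,
not the real IgniteSink API):

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;
    import org.apache.ignite.Ignition;

    public class MyIgniteSink<T> extends RichSinkFunction<T> {
        /** Instance state: serialized with the sink and shipped to workers. */
        private final String igniteCfgFile;
        private final String cacheName;

        /** Recreated per worker in open(); never serialized. */
        private transient Ignite ignite;
        private transient IgniteDataStreamer<Object, T> streamer;

        public MyIgniteSink(String igniteCfgFile, String cacheName) {
            this.igniteCfgFile = igniteCfgFile;
            this.cacheName = cacheName;
        }

        @Override public void open(Configuration parameters) {
            ignite = Ignition.start(igniteCfgFile); // runs on the worker JVM
            streamer = ignite.dataStreamer(cacheName);
        }

        @Override public void invoke(T value) {
            streamer.addData(value, value); // key choice here is only illustrative
        }

        @Override public void close() {
            if (streamer != null) streamer.close();
            if (ignite != null) ignite.close();
        }
    }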
In the meantime, if I create my own copy of IgniteSink with hardcoded values

    private static class Holder {
        /** Started lazily when Holder is first referenced, i.e. on the worker JVM. */
        private static final Ignite IGNITE =
            Ignition.start("flink-config.xml");

        private static final IgniteDataStreamer<Object, Object> STREAMER =
            IGNITE.dataStreamer("DemoCache");
    }
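and route invoke() through the holder, e.g. (a sketch; the Map record shape
is from my job):

    @Override public void invoke(Map<String, String> record) {
        // Holder is initialized on first access, i.e. on the worker JVM,
        // so the hardcoded configuration is picked up there.
        for (Map.Entry<String, String> e : record.entrySet())
            Holder.STREAMER.addData(e.getKey(), e.getValue());
    }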
then the cache configuration is loaded and everything works as
expected:
2018-06-01 22:28:12,985 INFO org.apache.ignite.internal.IgniteKernal
- Config URL:
/tmp/blobStore-56228c43-2ed8-46d3-bbf1-631c3ef778b3/job_132b1e78c18e363739fede5fded4214b/blob_p-a85926793530ee5cec5e7c9372ddbfc22020a0d0-d4b791f47ff24933e8d6ff8ccc4fa665!/flink-config.xml
- Burt